The Art of Vibe-Coding (with Google AI Studio): Personal Writing Assistant App



This content originally appeared on DEV Community and was authored by Rachel

This post is my submission for DEV Education Track: Build Apps with Google AI Studio.

What I Built

A (long) while back, I had this idea for a startup: a personal writing assistant to help fiction writers. This was in the days before Generative AI, and once Generative AI came about, I knew the idea would evolve. I documented the concept on a product page I developed years ago. It was time to take AI for a spin and see what it would come up with.

My Experience

My first thought was to give it a webpage with the product features and see what it would do. I was a bit skeptical it would work, and…it didn’t. It created something completely unrelated to the product page I had created.

So, I would need to start simpler. Before starting this project, I had already tried creating a character generator with Google AI Studio. It was quite successful in doing that, and in under 30 minutes, I had a character generator prototype.

It was a great start, but to get the “writing assistant” I envisioned, I would need to start from a different MVP. So I began crafting.

My initial prompt generated a React App. I haven’t used React in a while (Vue.js please!), but this seemed like a good opportunity to re-acquaint myself with React. So I continued the vibe-coding session in React.

Rough Outline of Steps:

  1. Create an editor with a Gemini-powered sidebar to help generate what comes next. I had to specifically prompt it to bring in a rich-text editor; otherwise, it defaulted to a plain textarea.
  2. Add tabs to the sidebar with the following: Characters, Events, Locations, Timeline, Notes, and Organizations. Gemini created these tabs with Characters and Notes having auto-creation features.
  3. Implement Characters Tab. Gemini did a great job in automatically generating a form with different fields for the character, and allowing automatic creation with Gemini.
  4. Implement Organizations Tab. Similar to Characters, Gemini did a great job fleshing out the Organizations tab with similar manual input and auto-generation features.
  5. Implement Locations Tab. Similar to Characters and Organizations, Gemini did a great job fleshing out the Locations creation with relevant fields and auto-generation.
  6. Remove the Events Tab and create Events within the Timeline Tab instead. This was fairly straightforward and handled with ease.
  7. Add a draggable interface for Events in the Timeline Tab. This gave it problems due to dependency issues, and those problems would persist through the end of my vibe-coding session.
  8. Add Focus mode. This was done with ease, but later disappeared after adding another feature.
  9. Add Dark mode. This was also done with ease.
  10. Add a timer to track writing sessions.
  11. Migrate to Next.js framework.
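For context, the “generate what comes next” feature from step 1 can be sketched roughly as below. This is a hedged illustration, not the app’s actual code: the function names, the model name, and the prompt wording are my assumptions, and the call uses the public Gemini generateContent REST endpoint rather than whatever client Gemini wired up in the generated app.

```typescript
// Build the continuation prompt from the editor's current text.
// Pure function, so it can be exercised without any network access.
export function buildContinuationPrompt(
  draft: string,
  maxContextChars = 4000
): string {
  // Keep only the most recent portion of the draft as context.
  const context = draft.slice(-maxContextChars);
  return [
    "You are a writing assistant for fiction writers.",
    "Continue the story below for one or two paragraphs,",
    "matching the author's tone and point of view.",
    "",
    context,
  ].join("\n");
}

// Hedged sketch of the sidebar call. The model name is an assumption;
// the post does not say which Gemini model the generated app used.
export async function suggestContinuation(
  draft: string,
  apiKey: string
): Promise<string> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-2.5-flash:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{ parts: [{ text: buildContinuationPrompt(draft) }] }],
    }),
  });
  const data = await res.json();
  // Pull the first candidate's text out of the response, if present.
  return data?.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```

Keeping the prompt assembly separate from the network call makes the interesting part testable, which matters once you move development offline.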

Is this production ready? No. But it is a great prototype. Would I deploy this? Personally, not as is. It’s not really a complete MVP for me. There is definitely more customization I would like to do.

After the Next.js migration, I downloaded the code, but it did not run, and there were conflicts to resolve. I decided to revert to an older code checkpoint to capture the demo video; the migration to Next.js will come later.

As the codebase grew, it became increasingly difficult to add new features, so where I ended felt like a good stopping point. I may continue to ask AI for assistance, but not in the same manner going forward.

Challenges with Vibe Coding

  • Inevitably, just as when humans code, there are bugs. In my experience, though, the bugs were more often dependency-related, caused by different library versions loading in the browser at the same time. These bugs may be less common when developing locally.
  • Unsurprisingly, as the codebase grows, it becomes increasingly difficult to add new features using AI. I also lost features I had previously built out and had to ask Gemini to re-add them.
  • It will come up with suggestions on what it plans to do, or even imply it did something, when nothing actually changed. I had to tell it to “implement this” multiple times to get the files updated.
  • Eventually, you’ll reach a point where it loses its “fast edge” and development slows down. By the end of step 10, I kept hitting the same dependency errors over and over. It was time to move development offline.

Takeaways

Google AI Studio is great for getting a minimalist MVP together. It’s less great for taking something to production. Yes, the app can be deployed via Cloud Run, but the project I built would require more invisible, behind-the-scenes features than the MVP generated. It was a great starting point for hooking up the main features I wanted; I will need to do the “hard part” afterwards. AI isn’t there yet.

Demo

Notes: I didn’t demo the image generation because when I ran it locally, it didn’t work out of the box. Turns out…if you want to use the Imagen model, you’ll need to use the Vertex AI service, which requires a different auth method and running a backend (Node.js). Trust that the image generation worked when using Google AI Studio. 🙂

Thanks for reading. If you’re interested in seeing a deployed version, drop a comment.

