I Ditched MyFitnessPal and Built an AI Agent to Track My Food



This content originally appeared on DEV Community and was authored by Juan David Gómez

I wanted to track my calories and protein for my training goals, but I got tired of existing apps. They lock you into their pretty dashboards, make it hard to export your own data, and you can’t cross-reference that nutrition data with your training logs easily. I just wanted to own my raw data and build custom reports for myself.

So I built NutriAgent. It’s an AI nutrition tracker that understands text and photos of my meals, logs everything into a database and Google Sheets that I control, and I can chat with it on Telegram or the web. This post is about my journey of turning a simple “call GPT” prototype into a real tool-using agent with memory—for myself, but built with proper product decisions.

Why Tools, Not Just Prompts

At first, I just asked a regular LLM to estimate calories from photos. It worked, but it was useless for real tracking. I needed to:

  • Save my meals and macros over weeks, not just one-off guesses
  • Ask “what did I eat last week?” and get real answers from my history
  • Connect my Google Sheets without leaving the chat

That’s when I realized I needed an agent – not just a model that talks, but one that can do things by calling tools.

I used LangChain’s create_agent because it handles a lot of the heavy lifting. The core setup looks like this:

from datetime import datetime
from pathlib import Path

from langchain_openai import ChatOpenAI

# `settings` comes from the app's own config module (import omitted here)

PROMPT_FILE = Path(__file__).parent.parent / "prompts" / "food_analysis_prompt.txt"

class FoodAnalysisAgent:
    def __init__(self) -> None:
        self.llm = ChatOpenAI(
            model="gpt-4o-mini",
            api_key=settings.OPENAI_API_KEY,
            temperature=0.3,
        )
        self.system_prompt = self._create_system_prompt()

    def _create_system_prompt(self) -> str:
        template = PROMPT_FILE.read_text(encoding="utf-8")
        current_datetime = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        return template.format(current_datetime=current_datetime)

I keep the prompt in a separate file because I edit it a lot. It’s easier to tweak the instructions without touching code. I inject the current datetime so the agent knows when we are – important for queries like “today” or “this week” in my conversations.
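
The only contract between the code and the prompt file is that {current_datetime} placeholder, which str.format fills in. For illustration, a minimal template could be shaped like this (not my real prompt, just the shape):

You are NutriAgent, a nutrition assistant that tracks meals and macros.
The current date and time is {current_datetime}.
When I describe or photograph a meal, estimate calories, proteins, carbs, and fats,
then call the registration tool. For questions about past meals, call the query tool.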

Making It Understand Photos and My Chat History

The agent needs to handle my messy real-world inputs: sometimes text, sometimes a photo, sometimes both. Plus, it needs to remember what we were just talking about.

Here’s how I normalize everything before sending it to the agent:

import base64
from typing import Any

from langchain_core.messages import AIMessage, HumanMessage
from langsmith import traceable

@traceable(name="FoodAnalysisAgent.analyze", run_type="chain")
async def analyze(
    self,
    text: str | None,
    images: list[bytes] | None,
    conversation_history: list[dict[str, Any]] | None,
    user_id: int,
    redirect_uri: str | None = None,
) -> str:
    messages: list[Any] = []

    # Pull my past conversation from DB and convert to LangChain format
    for msg in conversation_history or []:
        if msg["role"] == "user":
            messages.append(HumanMessage(content=msg["text"]))
        elif msg["role"] == "bot":
            messages.append(AIMessage(content=msg["text"]))

    # Add my current message (text + optional images)
    if images:
        content: list[Any] = []
        if text:
            content.append({"type": "text", "text": text})
        for img in images:
            content.append({
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{base64.b64encode(img).decode()}"},
            })
        messages.append(HumanMessage(content=content))
    else:
        messages.append(HumanMessage(content=text or ""))

    agent = self._get_agent(user_id=user_id, redirect_uri=redirect_uri)
    result = await agent.ainvoke({"messages": messages})
    # The agent's result dict holds the full message list; the reply is the last message
    return str(result["messages"][-1].content)

This lets me send a photo of fries and add context like “these were air-fried” to get a better estimate. The agent sees the image and text together, plus our conversation history, so it feels like a natural chat about my meals.
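
In practice, a photo message with a caption collapses into a single call like this (a sketch; photo_bytes, history, and the user ID are placeholders filled in by the channel layer):

agent = FoodAnalysisAgent()
reply = await agent.analyze(
    text="these were air-fried",
    images=[photo_bytes],          # raw JPEG bytes downloaded from Telegram
    conversation_history=history,  # [{"role": "user", "text": "..."}, ...] from the DB
    user_id=42,                    # placeholder ID
)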

Designing Tools for My Own Use Cases

Each tool maps to something I actually want to do. I didn’t want abstract functions – I wanted “register this meal” or “show me my data.”

Saving My Meals to DB and Google Sheets

import logging

from langchain_core.tools import tool

logger = logging.getLogger(__name__)

def create_register_nutritional_info_tool(user_id: int):
    @tool
    async def register_nutritional_info(
        calories: float,
        proteins: float,
        carbs: float,
        fats: float,
        meal_type: str,
        extra_details: str | None = None,
    ) -> str:
        record = await save_nutritional_info(
            user_id=user_id,  # This is me
            calories=calories,
            proteins=proteins,
            carbs=carbs,
            fats=fats,
            meal_type=meal_type,
            extra_details=extra_details,
        )

        spreadsheet_id: str | None = None
        config = await get_spreadsheet_config(user_id)
        if config:
            try:
                spreadsheet_id = await append_nutritional_data(
                    user_id=user_id,
                    calories=calories,
                    proteins=proteins,
                    carbs=carbs,
                    fats=fats,
                    meal_type=meal_type,
                    extra_details=extra_details,
                    record_id=record["id"],
                )
            except Exception:
                # DB is my source of truth; Sheets is best-effort
                logger.warning("Failed to append to my spreadsheet", exc_info=True)

        # Build a friendly summary for me
        ...
        return response

    return register_nutritional_info

My database is the source of truth. Google Sheets is a nice-to-have mirror. If Sheets fails, I don’t lose my data – the meal is already saved in Supabase. This gives me peace of mind because I know my data is always safe.

Querying My Past Meals

def create_query_nutritional_info_tool(user_id: int):
    @tool
    async def query_nutritional_info(
        start_date: str | None = None,
        end_date: str | None = None,
    ) -> str:
        records = await get_nutritional_info(
            user_id=user_id,  # Querying my own history
            start_date=start_date,
            end_date=end_date,
        )
        if not records:
            return "No nutritional records found."

        lines = []
        for r in records:
            date = r["created_at"].split("T")[0]
            lines.append(
                f"Date: {date} | Meal: {r['meal_type']} | "
                f"Calories: {r['calories']} | Proteins: {r['proteins']}g | "
                f"Carbs: {r['carbs']}g | Fats: {r['fats']}g"
            )
        return "\n".join(lines)

    return query_nutritional_info

I pre-format my records into simple text lines instead of dumping raw JSON. The model understands this better and can answer my questions like “what was my protein intake on Monday?” more reliably.
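
So when I ask about last week, the tool hands the model something it can scan line by line (illustrative values):

Date: 2025-01-06 | Meal: lunch | Calories: 650.0 | Proteins: 42.0g | Carbs: 70.0g | Fats: 22.0g
Date: 2025-01-06 | Meal: dinner | Calories: 510.0 | Proteins: 35.0g | Carbs: 48.0g | Fats: 19.0g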

Connecting My Google Sheets via OAuth

def create_register_google_account_tool(user_id: int, redirect_uri: str | None):
    @tool
    async def register_google_account() -> str:
        config = await get_spreadsheet_config(user_id)
        if config:
            return "Your Google account is already connected. I'll keep saving meals there."

        if not redirect_uri:
            return (
                "I need a valid redirect URL to start the Google authorization flow. "
                "The server configuration seems incomplete."
            )

        authorization_url = get_authorization_url(user_id, redirect_uri)
        return (
            "To enable Google Sheets integration, please authorize access using this link:\n\n"
            f"{authorization_url}"
        )

    return register_google_account

This keeps all the OAuth complexity inside a tool. The agent just decides when I need to connect my account and triggers the flow naturally in our conversation.
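
The last piece is wiring these factories into the agent itself. I haven't shown _get_agent above, but with LangChain's create_agent it comes down to roughly this (a sketch, assuming the three factories above are the only tools):

from langchain.agents import create_agent

def _get_agent(self, user_id: int, redirect_uri: str | None = None):
    # Each factory closes over my user_id, so the model never has to pass IDs around
    tools = [
        create_register_nutritional_info_tool(user_id),
        create_query_nutritional_info_tool(user_id),
        create_register_google_account_tool(user_id, redirect_uri),
    ]
    return create_agent(
        model=self.llm,
        tools=tools,
        system_prompt=self.system_prompt,
    )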

My Memory System: Two Stores for Different Jobs

Supabase is my core memory: my chats, messages, and nutritional records all live there. It’s fast and reliable.

Google Sheets is for me: I can see my data, build custom charts, and truly own it. But it’s slower and sometimes fails, so it’s a mirror, not the primary store.

Here’s how I ensure my spreadsheet exists before writing:

from google.oauth2.credentials import Credentials
from googleapiclient.errors import HttpError

async def ensure_spreadsheet_exists(user_id: int) -> tuple[str, Credentials]:
    config = await get_spreadsheet_config(user_id)
    if not config:
        raise ValueError(f"No spreadsheet config for my user_id={user_id}")

    credentials = await ensure_valid_credentials(user_id, config)
    spreadsheet_id = config.get("spreadsheet_id")

    if not spreadsheet_id:
        spreadsheet_id = await create_spreadsheet(user_id, credentials)
    else:
        try:
            await verify_spreadsheet_has_headers(credentials, spreadsheet_id)
        except HttpError as e:
            if e.resp.status == 404:
                spreadsheet_id = await create_spreadsheet(user_id, credentials)
            else:
                raise

    return spreadsheet_id, credentials

This dual-store approach balances reliability with my need for ownership. I get a spreadsheet I control, but the app doesn’t break if Google has issues.

Same Brain, Different Ways to Chat

The agent is just a class. I can talk to it however I want:

  • Telegram: I message my bot, it normalizes my messages (text, photos, documents), downloads media, and calls the agent. I use webhooks to keep it responsive.
  • Web UI: I built a simple web interface that hits the same agent API. It creates chats with chat_type="external" so the agent doesn’t care if I’m using Telegram or the web.

The agent interface is stable. I could add WhatsApp, SMS, or anything else without changing the core AI logic.
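
To make "stable interface" concrete: both channels eventually collapse into the same analyze call. A web endpoint could be as thin as this (FastAPI shown for illustration; the route, request model, and load_history helper are assumptions, not my real code):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
food_agent = FoodAnalysisAgent()

class ChatRequest(BaseModel):
    user_id: int
    text: str | None = None

@app.post("/chat")
async def chat(req: ChatRequest) -> dict[str, str]:
    history = await load_history(req.user_id)  # hypothetical helper: past messages from the DB
    reply = await food_agent.analyze(
        text=req.text,
        images=None,  # the web UI would decode uploads into raw bytes here
        conversation_history=history,
        user_id=req.user_id,
    )
    return {"reply": reply}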

Tracing and Logging Saved My Sanity

I added @traceable from LangSmith around the main analyze method. Suddenly I could see:

  • Exactly what the model received from me
  • Every tool call and its arguments
  • Where errors happened and how long things took

I also log my user ID, spreadsheet IDs, and macros to debug production issues.
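
Getting those traces flowing is mostly configuration. With the langsmith package installed, @traceable picks up a couple of environment variables (shown here set from Python; the project name is whatever you pick):

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"     # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "..."         # key from the LangSmith dashboard
os.environ["LANGCHAIN_PROJECT"] = "nutriagent"  # placeholder project name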

Real example: when I built the Web UI, images stopped appearing in the traces – I could see the model wasn’t receiving them at all. The message format was wrong, and I fixed it in five minutes because the trace made the problem obvious.

What I Learned Building This for Myself

Where agents are worth it: When they orchestrate real tools and stateful systems (like a database, Sheets, and OAuth), not just when they chat. Each tool should map to a clear, real-world action I want to take.

What surprised me:

  • You don’t need the most intelligent LLM to build a useful agent. A well-written prompt and simple tools that cover the main features are often enough for a reliable, pleasant user experience.
  • Context engineering is key. Understanding what information or context each tool feeds back to the model matters more than loading the prompt with ultra-detailed instructions.
  • Handling OAuth tokens, refresh flows, and “self-healing” spreadsheets (like recreating one if I accidentally delete it) was critical for building a reliable tool on top of a third-party service; see the sketch below.
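
On the token side, google-auth does most of the heavy lifting. The refresh step inside something like my ensure_valid_credentials can be as small as this (a sketch, not the project’s exact code):

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials

def refresh_if_expired(credentials: Credentials) -> Credentials:
    # google-auth exchanges the stored refresh token for a fresh access token
    if credentials.expired and credentials.refresh_token:
        credentials.refresh(Request())
    return credentials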

The main takeaway: I’ve always loved building digital products that solve real problems; it’s been my main career motivation. But this project was different. I had a personal problem, and I wasn’t just building a “good enough” solution; I was able to build the perfect solution for my own needs. That gets me excited to build more and keep growing my skills with these new technologies.

I can’t say it was easy; I definitely leaned on my existing experience in software development. But it’s a total game-changer. The way we can build products today is so different from even just a few years ago.

The project is live at https://nutriagent.juandago.dev if you want to see what I built. The code is available on GitHub for the Agent and also for the Web UI.

Disclaimer: since this is still a personal project, the Google Cloud OAuth app is not verified, so if you connect your Google account for the spreadsheet integration you will see a scary warning – but I promise I won’t steal your data.

This was my journey, but I’d love to hear your thoughts. I’m excited to start sharing more updates on this project and other things I’m building. Let’s continue the conversation on X or connect on LinkedIn.

