Where Do Knowledge Graphs Fit in the World of LLMs and AI Agents?



This content originally appeared on Level Up Coding – Medium and was authored by Rohith Teja

Some insights into Agentic Graph RAG

Photo by Igor Omilaev on Unsplash

I've talked a lot about knowledge graphs in my previous articles. Knowledge graphs have long been one of the most powerful ways to represent and query structured information.

Now, with the rise in popularity of large language models, we naturally ask:
👉 Do we still need knowledge graphs?
👉 Where do they fit in the new tech jungle of LLMs?

Short answer: we are not replacing knowledge graphs; we are simply repositioning them.

Long answer: Read on!

Knowledge graphs

Let's quickly recap knowledge graphs.

Knowledge graphs are usually represented as directed graphs, where each fact in the graph is expressed as a triple.

A triple, as the name suggests, is a tuple of three elements: the source node, the relation, and the target node.

The elements of triples can be referred to using different names, such as:

  1. (s,p,o): Subject, Predicate, Object
  2. (h,r,t): Head, Relation, Tail
  3. (s,r,t): Source, Relation, Target

For instance,

    ("Elon Musk", "founded", "SpaceX"),
    ("SpaceX", "launched", "Falcon 9"),
    ("Falcon 9", "appearsIn", "Iron Man 2"),

    ("Tony Stark", "isInspiredBy", "Elon Musk"),
    ("Tony Stark", "owns", "Stark Industries"),
    ("Stark Industries", "creates", "Iron Man Suit"),

    ("Harry Potter", "studiesAt", "Hogwarts"),
    ("Hogwarts", "isLocatedIn", "Scotland"),
    ("Harry Potter", "defeats", "Voldemort"),

    ("Frodo", "carries", "The One Ring"),
    ("The One Ring", "wasForgedBy", "Sauron"),
    ("Frodo", "isFriendOf", "Samwise"),

    ("Luke Skywalker", "isTrainedBy", "Yoda"),
    ("Yoda", "belongsTo", "Jedi Order"),
    ("Darth Vader", "serves", "Emperor Palpatine")
Figure by Author

The power of a knowledge graph lies in its connections. They let us see not just isolated facts but how entities interrelate (this is a simple representation of a knowledge graph; modern graphs are usually far more complex than this).

This enables complex reasoning and querying.

You can ask stuff like:

“Which Marvel heroes were inspired by real-world billionaires?”
“Which characters have carried legendary objects that were forged by dark powers?”
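The second question above can be answered with a simple two-hop traversal. Here is a minimal sketch using a few triples from the example graph, indexing edges in a plain dictionary (a real knowledge graph would sit in a graph database with a query language like Cypher or SPARQL):

```python
from collections import defaultdict

# A few triples from the example above, as (source, relation, target) tuples.
triples = [
    ("Tony Stark", "isInspiredBy", "Elon Musk"),
    ("Frodo", "carries", "The One Ring"),
    ("The One Ring", "wasForgedBy", "Sauron"),
    ("Harry Potter", "defeats", "Voldemort"),
]

# Index outgoing edges by (source, relation) for constant-time hops.
out_edges = defaultdict(list)
for s, r, t in triples:
    out_edges[(s, r)].append(t)

# "Which characters have carried legendary objects forged by dark powers?"
# Two hops: character --carries--> object --wasForgedBy--> forger.
answers = [
    (character, obj, forger)
    for (character, r, obj) in triples if r == "carries"
    for forger in out_edges[(obj, "wasForgedBy")]
]
# answers: [("Frodo", "The One Ring", "Sauron")]
```

The traversal chains relations instead of matching text, which is exactly what flat document search cannot do.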

Knowledge graphs are unique in their ability to connect dots across domains. They are not just storage; they are reasoning frameworks.

Large Language Models

I believe you know more about LLMs at this point than Knowledge graphs, so I will keep this short and sweet.

LLMs like GPT, DeepSeek, Grok, Gemini, and Claude (my fav) have dazzled us over the past few years to the point that we forgot knowledge graphs (at least I did).

Just when I started asking whether we still need knowledge graphs, LLMs showed their limitations:

They hallucinate.

They don't know facts. They predict the most likely sequence of words based on training data.

If they were not trained on the data your question is about, they may fill the gaps with fabrications that sound confident. Voilà, welcome to hallucination.

Some examples of hallucination:

  • Citing a research paper that looks real but doesn’t exist.
  • Giving the wrong capital of a country, even though it sounds confident.
  • Saying that “Rohith (me) won a Nobel Prize in Physics”: not true, yet 😉

Knowledge frozen in time.

If the LLM has been trained on data collected until December 2024, it won't be able to answer questions about events from January 2025 onward, for example.

They also struggle with traceability (you can't always trace why an answer was given). I am sure you have experienced these while talking to AI chatbots.

So, a nice solution to this is RAG.

Hold my RAG

Photo by Tekton on Unsplash

Retrieval-Augmented Generation (RAG) is a fancy-pants way of adding an external fact database that the LLM can consult to answer your questions.

There are 3 steps here:

  1. Retrieve: The system searches an external knowledge source (like documents, databases, or the web) to pull in the most relevant information.
  2. Augment: The retrieved information is combined with your original question, so the LLM has extra context.
  3. Generate: Using the extra context, the original question, and its own parametric knowledge, the LLM generates a better answer.
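The three steps above can be sketched in a few lines. This toy version scores documents by word overlap (real systems use embedding similarity and a vector database), and the final LLM call is left as a comment since any provider's API would slot in there:

```python
import re

# A toy document store. In practice these would be chunks in a vector DB.
docs = [
    "SpaceX was founded by Elon Musk in 2002.",
    "Falcon 9 is a reusable rocket developed by SpaceX.",
    "Hogwarts is a fictional school in Scotland.",
]

def tokenize(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, k=2):
    """Step 1: rank documents by word overlap with the question (toy scoring)."""
    q = tokenize(question)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def augment(question, context):
    """Step 2: combine the retrieved context with the original question."""
    joined = "\n".join(context)
    return f"Context:\n{joined}\n\nQuestion: {question}\nAnswer using only the context above."

question = "Who founded SpaceX?"
context = retrieve(question)
prompt = augment(question, context)
# Step 3: in a real pipeline, `prompt` is now sent to the LLM to generate the answer.
```

The key point is that the model answers from supplied context rather than from memory alone, which is what curbs hallucination.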

The illustration below shows a simplified approach to RAG:

Figure by Author

Step 1: You ask a question.
Step 2: LLM figures it does not have the answer.
Step 3: It retrieves the answer from the external source.
Step 4: Augments the question and generates the answer.

Evolution of RAG

Vanilla RAG

At first, RAG was pretty simple. Think of it as "LLM + a book." You ask something, the model searches through a bunch of text documents (normally stored in a vector database), pulls the closest chunks, and blends them into its response. This worked better than plain LLM responses.

But this simple RAG has its limits. It treats information as flat text. It doesn’t always understand relationships between facts. It can retrieve 10 different chunks of text and still miss the bigger picture because it can’t connect the dots.

Advanced RAG

Instead of just grabbing the “closest” chunks in the docs, we started using smarter tricks. Some of these include:

  1. Reorder search results so the most relevant chunks rise to the top (Rerankers).
  2. Combine semantic (vector) search with exact keyword search, so you don’t miss exact matches like names, dates, or technical terms (Hybrid search).
  3. Filter the retrieval by tags like author, time, or domain, to reduce noise and irrelevant information (Metadata filtering).

With this, RAG became more practical. The answers were sharper, with fewer hallucinations. But it still felt like talking to someone who skimmed a book before answering: you would get the right context, but not a deep understanding of it.

Graph RAG

Enter the big boi knowledge graphs. Here, our goal is to use the structure of the graph to retrieve accurate facts and relationships.

Now, LLMs can have the right context and also see the connections: who founded what, who is related to whom, what happened before what, and so on.

This is more powerful. It's not pure retrieval but retrieval with some reasoning.

You can think of it like this. Vanilla RAG is like scrolling through sticky notes you made before the exam. Graph RAG is like having a mind map that shows how everything is linked.
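A minimal sketch of the retrieval step in Graph RAG: link entities mentioned in the question to nodes in the graph, pull their neighborhood of facts, and serialize those facts as context for the LLM. The entity linking here is naive substring matching, purely for illustration:

```python
# Triples from the example graph earlier in the article.
triples = [
    ("Harry Potter", "studiesAt", "Hogwarts"),
    ("Hogwarts", "isLocatedIn", "Scotland"),
    ("Harry Potter", "defeats", "Voldemort"),
    ("Frodo", "carries", "The One Ring"),
]

def graph_retrieve(question, triples):
    # Entity linking (naive): keep graph entities that appear in the question.
    entities = {e for s, _, t in triples for e in (s, t) if e.lower() in question.lower()}
    # Pull the 1-hop neighborhood: every fact touching a matched entity.
    facts = [(s, r, t) for s, r, t in triples if s in entities or t in entities]
    # Serialize the subgraph as plain-text context for the LLM prompt.
    return "\n".join(f"{s} {r} {t}" for s, r, t in facts)

context = graph_retrieve("Where does Harry Potter study?", triples)
```

Instead of loose text chunks, the LLM now receives structured facts whose relationships are explicit.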

AI Agents

AI agents love something called multi-hop queries. They constantly query and get additional information to complete a complex task.

Imagine you tell an AI agent to make a visualisation of all the songs Ed Sheeran has collaborated on and then to link those to the movies or shows where they appeared.

A vanilla RAG might pull random text chunks from online blogs like “Ed Sheeran featured with Taylor Swift,” “Song X was in Movie Y,” etc. But fitting that information together into something that makes sense could be messy.

If the agent has access to a Graph RAG, it can run multi-hop queries using the graph's structure. It starts from Ed Sheeran, follows the "collaboratedWith" relation to identify entities (e.g., Taylor Swift, Justin Bieber), then hops further along relations like "performedSong" and "appearsIn" to find movies, TV shows, or soundtracks.

This gives us a small connected subgraph to feed into the AI agent’s LLM to complete the visualization task.
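The agent's traversal can be sketched as a chain of hops over the triple store. The triples below (and the relation names collaboratedWith, performedSong, appearsIn) are hypothetical example data, not a real dataset:

```python
from collections import defaultdict

# Hypothetical triples for the Ed Sheeran example.
triples = [
    ("Ed Sheeran", "collaboratedWith", "Taylor Swift"),
    ("Ed Sheeran", "collaboratedWith", "Justin Bieber"),
    ("Taylor Swift", "performedSong", "Everything Has Changed"),
    ("Justin Bieber", "performedSong", "I Don't Care"),
    ("Everything Has Changed", "appearsIn", "Red Tour Film"),
]

out_edges = defaultdict(list)
for s, r, t in triples:
    out_edges[(s, r)].append(t)

def multi_hop(start, relations):
    """Follow a chain of relations from a start node, keeping full paths."""
    paths = [[start]]
    for rel in relations:
        paths = [p + [t] for p in paths for t in out_edges[(p[-1], rel)]]
    return paths

paths = multi_hop("Ed Sheeran", ["collaboratedWith", "performedSong", "appearsIn"])
# Each surviving path is a connected chain: artist -> collaborator -> song -> film.
```

Paths that dead-end (a song with no "appearsIn" edge) are dropped automatically, and the surviving paths form exactly the small connected subgraph handed to the agent's LLM.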

Key Takeaways

Knowledge graphs are powerful. Treat them with love and care 🙂

Note: Here, I simplified some concepts to make the big picture easier to follow. In practice, knowledge graphs, LLMs, and RAG systems have more nuances and complexities than I’m covering in this article.

Thanks for reading, and cheers!

Want to Connect?
Reach me at LinkedIn, X, GitHub, or my Website!



