From 0 to 500 GitHub Stars in 60 Days: Fixing RAG Hallucination with a Symbolic Layer



This content originally appeared on DEV Community and was authored by PSBigBig

WFGY is an open-source AI reasoning framework that acts as a semantic firewall for LLMs. It reduces hallucination, prevents semantic drift, and keeps constraints locked without retraining or changing your infrastructure.

In the first 60 days the project crossed 500 GitHub stars and the archive passed 3,000 downloads. Over 80 engineers reported successful fixes in real pipelines. These numbers are public and can be verified through the links below.

TL;DR for busy developers

  • Problem focus: RAG hallucination, multi-turn semantic drift, vector search instability, broken constraints in agent chains
  • Approach: a symbolic math layer that your model reads from TXT or PDF; no retraining
  • Outcome: reproducible fixes and clearer failure diagnostics
  • Verify in one minute with the steps below

Why wrappers were not enough

During 2023 and 2024 the ecosystem saw a wave of GPT wrappers. Interfaces improved but the hard failures remained. Common symptoms included a vector DB holding the correct context while the model still fabricated details, or agents drifting off topic after a few steps. Teams asked for something minimal that improves stability and remains verifiable.

What WFGY is

WFGY adds a small symbolic layer that the model can reference while reasoning.
Core ideas, with an illustrative sketch after the list:

  • ΔS for semantic stability thresholds
  • λ_observe for state tracking across multi-step chains
  • Constraint locking and collapse recovery rules
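
The PDF defines these operators precisely; the bullets above are only labels. As a rough illustration, here is a minimal sketch of what a ΔS-style stability gate could look like inside a pipeline. The embedding inputs, the threshold value, and every name below are assumptions for the sketch, not WFGY's actual definitions.

```python
import numpy as np

DELTA_S_MAX = 0.45  # illustrative threshold, not a value from the paper

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def delta_s(context_vec: np.ndarray, answer_vec: np.ndarray) -> float:
    # Hypothetical stand-in for ΔS: semantic distance between the
    # retrieved context and the drafted answer.
    return 1.0 - cosine(context_vec, answer_vec)

def stability_gate(context_vec: np.ndarray, answer_vec: np.ndarray) -> dict:
    ds = delta_s(context_vec, answer_vec)
    if ds > DELTA_S_MAX:
        # Treat the draft as drifting: request the missing evidence
        # instead of finalizing, mirroring the constraint-locking idea.
        return {"stable": False, "delta_s": ds, "action": "request_evidence"}
    return {"stable": True, "delta_s": ds, "action": "finalize"}
```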

Primary artifact: WFGY PDF
https://zenodo.org/records/15630969

Reference index: Problem Map
https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md

Everything is open source and MIT licensed.

One minute to verify the effect

You can reproduce the difference with any capable LLM, such as GPT, Claude, or a local model.

  1. Download the PDF https://zenodo.org/records/15630969
  2. Start a fresh session. Provide the PDF as context or paste the TXT formulas from the repo.
  3. Run this minimal prompt:
You now have access to the symbolic math layer from the PDF.
Diagnose and fix this failure: RAG returns a plausible paragraph that does not contain the answer.
Steps: map to a Problem Map label, lock constraints, request the exact missing evidence, then produce a grounded answer with citations.
Return: diagnosis label, corrective retrieval plan, final answer with sources.
  4. Repeat the exact test without the PDF or TXT layer.
  5. Compare diagnostics and retrieval behavior. A scripted version of this A/B test is sketched below.
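
If you would rather script the comparison, a minimal sketch with the OpenAI Python SDK follows. The model name, the local path to the TXT formulas, and the exact wording are assumptions to adapt; any chat-capable client works the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Diagnose and fix this failure: RAG returns a plausible paragraph "
    "that does not contain the answer. Steps: map to a Problem Map label, "
    "lock constraints, request the exact missing evidence, then produce a "
    "grounded answer with citations. Return: diagnosis label, corrective "
    "retrieval plan, final answer with sources."
)

def run_test(with_layer: bool) -> str:
    messages = []
    if with_layer:
        # Hypothetical local path: paste the TXT formulas from the repo here.
        layer = open("wfgy_formulas.txt", encoding="utf-8").read()
        messages.append({
            "role": "system",
            "content": "You now have access to this symbolic math layer:\n" + layer,
        })
    messages.append({"role": "user", "content": PROMPT})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

print("--- without the layer ---\n", run_test(with_layer=False))
print("--- with the layer ---\n", run_test(with_layer=True))
```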

Problem Map labels and fixes:
https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md

Minimal prompt template for your pipeline

System:
You are running with the WFGY symbolic layer from the attached PDF or embedded TXT.
Keep ΔS within stable bounds. Maintain λ_observe across the entire chain.
Never finalize an answer without grounded evidence. If retrieval is insufficient, request the exact missing material by key and location.

User:
Here is the task and corpus. Diagnose the failure mode using the Problem Map.
Apply the rules and return a grounded answer with citations.

Artifacts to include:

  • The WFGY PDF or the TXT formulas from the repo
  • The Problem Map index for failure labels
  • Your task corpus or retrieval snippets
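
The "never finalize without grounded evidence" rule in the system prompt can also be backed by a small output guard in your pipeline. The marker phrases and the citation pattern below are illustrative conventions, not anything WFGY prescribes.

```python
import re

# Assumed citation convention such as [source: doc 3, p. 2]; adjust to yours.
CITATION_PATTERN = re.compile(r"\[(?:source|doc)[^\]]*\]", re.IGNORECASE)

def route_model_output(text: str) -> str:
    """Decide what to do with one model turn under the grounding rule."""
    lowered = text.lower()
    if "missing evidence" in lowered or "please provide" in lowered:
        return "fetch_evidence"  # send the request back to retrieval
    if CITATION_PATTERN.search(text):
        return "finalize"        # grounded answer with citations
    return "reject"              # neither cited nor requesting: do not ship
```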

What developers reported

From the first two months of usage across RAG and agent pipelines:

  • Models admit when evidence is missing and ask for it, rather than fabricating
  • Constraint status becomes observable, which reduces silent failures
  • Integrates as a drop-in layer so teams keep their existing stack

Case studies and before-and-after trails are in the Hero Logs:
https://github.com/onestardao/WFGY/tree/main/HeroLog

How the project reached 500 stars

A short public timeline that you can verify in commits and threads:

  • Weeks 1 to 2: early testers validated drift and retrieval fixes in the open
  • Weeks 3 to 4: Hero Logs published with reproducible steps
  • Weeks 5 to 6: community endorsements, including from the creator of Tesseract.js. Proof trail: https://github.com/bijection?tab=stars
  • Day 60: 500+ stars and more than 3,000 archive downloads

Repository and activity: https://github.com/onestardao/WFGY
Discussions index: https://github.com/onestardao/WFGY/discussions

Verification checklist

  • Can you map the failure to a Problem Map label?
  • Do you see explicit constraint status in the chain?
  • Does the model request the missing evidence instead of guessing?
  • Can another engineer repeat your test with the same PDF and corpus?
  • Do your citations point to the correct passages? (A sketch for automating these checks follows.)
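
Most of these checks can be scripted. A minimal sketch, assuming you keep the transcript as a string and know the Problem Map label names; the labels and marker phrases here are hypothetical placeholders:

```python
KNOWN_LABELS = {"No.1", "No.2", "No.3"}  # placeholder: use the real Problem Map index

def audit_transcript(transcript: str, cited_passages: list[str], corpus: str) -> dict:
    """Run the repeatable parts of the checklist over one test transcript."""
    lowered = transcript.lower()
    return {
        "has_problem_map_label": any(label in transcript for label in KNOWN_LABELS),
        "shows_constraint_status": "constraint" in lowered,
        "requests_missing_evidence": "missing evidence" in lowered,
        "citations_resolve": all(p in corpus for p in cited_passages),
    }
```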

Compatibility

  • Works with GPT and Claude through system prompts or tool documents
  • Works with local models such as LLaMA or Mistral through context files (see the sketch after this list)
  • No retraining required and no infra change needed
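
For local models, injecting the layer is just prepending the context file. A minimal sketch with the Ollama Python client; the model tag and the file path are assumptions:

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# Hypothetical local path: paste the TXT formulas from the repo here.
layer = open("wfgy_formulas.txt", encoding="utf-8").read()

response = ollama.chat(
    model="llama3",  # any local model tag you have pulled
    messages=[
        {"role": "system",
         "content": "You are running with the WFGY symbolic layer:\n" + layer},
        {"role": "user",
         "content": "Diagnose this RAG failure and return a grounded answer with citations."},
    ],
)
print(response["message"]["content"])
```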

Roadmap

  • TXT Blur for symbolic control in text-to-image workflows
  • WFGY-Bench for public comparisons across GPT-5 and GPT-4 with or without WFGY
  • New Problem Map entries for bootstrap ordering and deployment deadlocks

Follow updates: https://github.com/onestardao/WFGY/discussions
Open issues: https://github.com/onestardao/WFGY/issues

Links

  • WFGY PDF: https://zenodo.org/records/15630969
  • Repository: https://github.com/onestardao/WFGY
  • Problem Map: https://github.com/onestardao/WFGY/tree/main/ProblemMap/README.md
  • Hero Logs: https://github.com/onestardao/WFGY/tree/main/HeroLog
  • Discussions: https://github.com/onestardao/WFGY/discussions
  • Issues: https://github.com/onestardao/WFGY/issues