This is a submission for the Redis AI Challenge: Real-Time AI Innovators.
What I Built
In this project, I developed a Real-Time AI Recommendation Engine that leverages Redis as the backend database for fast and efficient vector search. The system uses Sentence-Transformers to convert text data into dense vector embeddings, which are stored and searched using Redis. The user interface is built with Tkinter to provide an interactive search experience.
Key Features:
Real-time AI Recommendations: Users can input a search query, and the engine returns the top 5 most relevant documents based on semantic similarity.
Redis for Vector Search: The engine stores document embeddings in Redis and performs KNN (K-Nearest Neighbors) searches in real time.
Text Embedding: Text data is transformed into 384-dimensional dense embeddings with the all-MiniLM-L6-v2 model from Sentence-Transformers (see the sketch after this list).
Interactive UI: The system features a user-friendly interface built with Tkinter, where users can search for documents and see results in a table.
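To make the embedding step concrete, here is a minimal sketch using Sentence-Transformers; the sample query string is illustrative:

```python
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 maps text to 384-dimensional dense vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")

embedding = model.encode("fast vector search with Redis")
print(embedding.shape)  # (384,)
```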
Demo
You can test out the AI Recommendation Engine by running the app locally. Here’s a brief demo of the app’s functionality:
Search Interface: Enter a search query, and the app will fetch results based on content similarity.
Results Display: The most relevant documents are displayed in a table with a preview of the content.
Running the Redis Docker container:
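For reference, a minimal way to start it locally looks like this (the container name, image tag, and default 6379 port mapping are assumptions):

```bash
docker run -d --name redisearch -p 6379:6379 redislabs/redisearch:latest
```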
Project Link: https://github.com/CliffordIsaboke/Real-Time-AI-Innovators-Redis-Beyond-the-Cache.git
How I Used Redis 8
Redis 8 played a central role in making the AI Recommendation Engine fast and scalable:
Vector Search:
I used Redis’s Vector Search capabilities (via the RediSearch module) to store and retrieve document embeddings, letting the engine return the top N most similar documents for a user’s query.
The embeddings are indexed with a FLAT vector index, which supports exact cosine-similarity search, as sketched below.
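Here is a minimal index-creation sketch with redis-py; the index name doc_idx, the doc: key prefix, and the field names are assumptions, while DIM 384 matches all-MiniLM-L6-v2:

```python
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

r = redis.Redis(host="localhost", port=6379)

# FLAT vector index with cosine distance over FLOAT32 embeddings.
r.ft("doc_idx").create_index(
    fields=[
        TextField("title"),
        TextField("content"),
        VectorField("embedding", "FLAT", {
            "TYPE": "FLOAT32",
            "DIM": 384,
            "DISTANCE_METRIC": "COSINE",
        }),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)
```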
Semantic Caching:
Redis serves as a cache layer for frequently searched queries. Once a query’s embedding has been computed and stored, results for the same or similar queries can be served straight from Redis without recomputing the embedding (a sketch follows).
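One way such a cache could look (the qcache: key scheme, the one-hour TTL, and the helper name are assumptions, not the project’s exact code):

```python
import hashlib
import numpy as np

def get_query_embedding(r, model, query: str) -> np.ndarray:
    # Cache key derived from the raw query text.
    key = "qcache:" + hashlib.sha256(query.encode("utf-8")).hexdigest()
    cached = r.get(key)
    if cached is not None:
        # Cache hit: reuse the stored embedding bytes.
        return np.frombuffer(cached, dtype=np.float32)
    # Cache miss: compute the embedding and store it with a TTL.
    vec = model.encode(query).astype(np.float32)
    r.set(key, vec.tobytes(), ex=3600)
    return vec
```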
Real-time Data Layer:
Redis’s high-performance in-memory architecture ensures that both embedding storage and search queries are handled in real time, keeping results near-instant even as the dataset grows.
The KNN search (issued through the Query API) returns results with near-instantaneous latency for each search; see the sketch below.
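A KNN query through redis-py’s Query API might look like the following; the doc_idx index and field names match the index sketch above and are assumptions:

```python
from redis.commands.search.query import Query

def knn_search(r, query_vec, k: int = 5):
    # Vector similarity query: top-k nearest neighbors by cosine distance.
    q = (
        Query(f"*=>[KNN {k} @embedding $vec AS score]")
        .sort_by("score")
        .return_fields("title", "content", "score")
        .dialect(2)
    )
    params = {"vec": query_vec.astype("float32").tobytes()}
    return r.ft("doc_idx").search(q, query_params=params).docs
```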
Data Persistence and Scalability:
Redis’s ability to persist the embeddings while maintaining high-speed access is key to scaling this application: it keeps the working set in memory for fast access while offering durable persistence through RDB snapshots and AOF.
Redis Setup:
I set up Redis using the redislabs/redisearch Docker container, which ships with the RediSearch module for indexing and search.
Redis stores the document embeddings and their metadata (title, content) as Redis hashes, as sketched below.
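Storing one document could be sketched as follows; the doc:<id> key pattern and the helper name are illustrative:

```python
import numpy as np

def store_document(r, model, doc_id: str, title: str, content: str):
    # Embed the content and store the raw vector bytes with the metadata.
    vec = model.encode(content).astype(np.float32)
    r.hset(f"doc:{doc_id}", mapping={
        "title": title,
        "content": content,
        "embedding": vec.tobytes(),
    })
```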
Thank you for the opportunity to participate in the Redis AI Challenge. Looking forward to more innovation with Redis!