This content originally appeared on DEV Community and was authored by KAMAL KISHOR
Large Language Models (LLMs) are transforming how we code, write, and interact with machines. But many devs still think LLMs are only for big players like OpenAI or Anthropic. That's not true anymore.
With Smollm3 (a super lightweight LLM) + Ollama (a local runtime for LLMs), you can run AI models on your own laptop: no API bills, no vendor lock-in, no internet dependency.
Let's explore how to do this with real-world examples that go beyond just saying "it works."
Why Smollm3?
- Tiny but powerful: designed for local use
- Fast: runs even on consumer laptops
- Privacy-first: no data leaves your machine
- Flexible: you can fine-tune or extend it for your needs
Perfect for devs who want hands-on AI without GPU farms.
Step 1: Install Ollama
Ollama is like Docker, but for AI models.
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Once installed, test with:
```bash
ollama run llama3.2
```
Boom: you've got a working local LLM. (The first run downloads the model automatically, so give it a minute.)
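Want to sanity-check the install from code? The installer also leaves a local HTTP server listening on port 11434. Here's a minimal Python sketch (assuming you've done `pip install requests`) that pings the server and lists the models you've pulled, using Ollama's documented `/api/tags` endpoint:

```python
import requests  # assumption: pip install requests

BASE = "http://localhost:11434"  # Ollama's default local port

# The root endpoint replies with a plain "Ollama is running" banner.
print(requests.get(BASE).text)

# /api/tags returns JSON listing the models pulled onto this machine.
for model in requests.get(f"{BASE}/api/tags").json().get("models", []):
    print(model["name"])
```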
Step 2: Pull the Smollm3 Model
Smollm3 is small enough for everyday use.
```bash
ollama pull smollm3
ollama run smollm3
```
Now you're chatting with a local AI model.
Real-Life Examples
Here's where it gets interesting: running Smollm3 for everyday dev and life tasks.
1. As a Coding Assistant
```bash
ollama run smollm3
```
Prompt:
Write a Python script to monitor a folder and print any new files in real time.
Output:
```python
import os
import time

folder = "./watch_folder"  # directory to monitor (must already exist)
seen = set(os.listdir(folder))

while True:
    current = set(os.listdir(folder))
    new_files = current - seen  # anything we haven't seen before
    for f in new_files:
        print(f"New file detected: {f}")
    seen = current  # refresh the snapshot so deletions are tracked too
    time.sleep(2)  # poll every 2 seconds
```
Run it instantly on your machine. No cloud latency.
2. Summarizing Research Papers Offline
Let's say you downloaded a PDF from arXiv.
Prompt:
Summarize this research paper in 5 bullet points: [paste abstract]
Smollm3 gives you a digestible version: no internet required.
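Prefer to script it? Here's a minimal sketch using the `ollama` Python package (assuming `pip install ollama`); `abstract.txt` is a hypothetical file holding the abstract text you extracted from the PDF:

```python
import ollama  # assumption: pip install ollama; talks to the local server

# Hypothetical input: paste the paper's abstract into abstract.txt first.
with open("abstract.txt") as f:
    abstract = f.read()

response = ollama.chat(
    model="smollm3",
    messages=[{
        "role": "user",
        "content": f"Summarize this research paper in 5 bullet points:\n\n{abstract}",
    }],
)
print(response["message"]["content"])
```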
3. Personal Shopping Assistant
Imagine you copy-paste Amazon product descriptions.
Prompt:
Compare these 3 headphones and tell me which is best for bass lovers.
Smollm3 instantly gives pros/cons breakdowns. Perfect offline shopping buddy.
4. Meeting Notes Summarizer
Paste your Zoom transcript into Smollm3:
Prompt:
Summarize key decisions and action items from this transcript.
Now you've got meeting minutes: no Notion AI subscription needed.
5. Learning Aid
Students can run:
Prompt:
Explain quantum entanglement as if I'm 10 years old.
Or even:
Generate 10 practice questions for Python list comprehensions.
6. Privacy-Preserving Journal
If you keep a private journal:
Prompt:
Rewrite this journal entry in a positive, motivating way: [paste text]
No servers. 100% private.
Step 3: Build Custom Workflows
With Ollama, you can integrate Smollm3 into apps:
Example: Local API Server
```bash
ollama serve
```
Send requests with curl:
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "smollm3",
  "prompt": "Write a haiku about DevOps"
}'
```
Local AI endpoint for your apps.
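The same endpoint is just as easy to hit from Python. A minimal sketch with `requests` (same assumption as before: `pip install requests`), setting `stream` to false so the reply comes back as a single JSON object instead of streamed chunks:

```python
import requests  # assumption: pip install requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "smollm3",
        "prompt": "Write a haiku about DevOps",
        "stream": False,  # one JSON object instead of streamed chunks
    },
)
# The generated text is in the "response" field of the reply.
print(resp.json()["response"])
```

Wire that into any script or app and you've got a zero-cost completion API.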
Final Thoughts
Running Smollm3 + Ollama makes AI feel:
- Personal: no one else sees your data
- Accessible: no expensive GPU cloud bills
- Hackable: integrate it into your apps, workflows, or scripts
LLMs don't need to live in a datacenter anymore. They can live on your laptop, right beside VS Code, Chrome, or Spotify.
If you found this useful, drop a comment with how you'd use a local LLM, and I might build a follow-up with custom workflows for developers, students, and everyday creators.