This content originally appeared on DEV Community and was authored by Jesus Fernandez
Working with large language models (LLMs) locally is exciting, but also messy. Between GPU drivers, container configs, and model juggling, it's easy to lose hours just getting things to run. That's why I created ollama-dev-env: an experimental project designed to streamline local LLM development using Docker, NVIDIA GPUs, and open-source models like DeepSeek Coder.
Why This Project Exists
This started as a personal experiment.
I wanted to see how far I could push local development with LLMs without relying on cloud APIs or heavyweight setups. The goals were simple:
Run models like DeepSeek Coder and CodeLlama entirely on my own hardware
Automate the setup with Docker and shell scripts
Create a reusable environment for testing, coding, and learning
What began as a weekend project turned into a full-featured dev environment I now use daily for prototyping and AI-assisted coding.
Key Features
Experimental but practical: Built for tinkering, stable enough for real use
Pre-installed LLMs: DeepSeek Coder, CodeLlama, Llama 2, Mixtral, Phi, Mistral, Neural Chat
GPU Acceleration: Optimized for RTX 3050 and compatible NVIDIA cards (see the sanity check after this list)
Dev Script Automation: One CLI to manage everything
Web UI: Chat and interact with models visually
Security-first: Non-root containers, health checks, resource limits
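GPU passthrough relies on the NVIDIA Container Toolkit being installed on the host. Before starting the stack, a quick sanity check (independent of this repo) confirms that Docker can see your GPU; the CUDA image tag below is just one example:

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If this prints your GPU's status table, the containers here should be able to use the card too.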
Setup in Seconds
Full instructions are in the GitHub repo, but here's the short version:
git clone https://github.com/Jfernandez27/ollama-dev-env.git
cd ollama-dev-env
./scripts/ollama-dev.sh start
Access services:
Ollama API: http://localhost:11434
Web UI: http://localhost:3000
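Once the containers are up, you can confirm the API is live by listing the installed models; /api/tags is part of Ollama's standard HTTP API, not something specific to this project:

curl http://localhost:11434/api/tags

It returns a JSON list of every model currently available locally.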
What You Can Do With It
Experiment with LLMs locally
Chat with models via CLI or browser
Analyze code with DeepSeek Coder
Pull and switch between models (see the API examples after this list)
Monitor GPU usage and container health
Extend the environment with your own tools
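As a sketch of the last few items: pulling a model and prompting it both work against Ollama's standard HTTP API (the model name below is illustrative), and interactive chat works through the Ollama CLI inside the container, whose name I'm assuming here is ollama:

# Download a model, then send it a one-off prompt
curl http://localhost:11434/api/pull -d '{"name": "deepseek-coder"}'
curl http://localhost:11434/api/generate -d '{"model": "deepseek-coder", "prompt": "Write a Python function that reverses a string.", "stream": false}'

# Or chat interactively via the CLI (the container name may differ in your setup)
docker exec -it ollama ollama run deepseek-coder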
Built for Developers Like Me
As a backend-focused dev working in EdTech and SaaS, I needed a local playground for AI tools: something fast, secure, and flexible. This project reflects that need. While it's experimental, it's already powering real workflows.
Want to Collaborate?
If you’re building something similar, exploring LLMs, or just want to geek out over Docker and GPUs, feel free to reach out or contribute. The repo is open-source and MIT licensed:
github.com/Jfernandez27/ollama-dev-env