This content originally appeared on DEV Community and was authored by Eelco Los
I love containerization.
From personal projects running at home to production-grade services, containers have transformed the way I build and ship software. They’re lightweight, consistent, and—when used correctly—secure. For local development, I usually prefer to work with full SDKs. But for deployments, I lean heavily on containers, DevContainers, and GitHub Actions.
This post will walk you through a solid workflow for building and running .NET apps in Docker using Alpine, preparing images with CI, and tuning for Kubernetes deployments with realistic resource limits.
Running .NET in Alpine Containers
Alpine is a super minimal Linux distro that makes for compact Docker images. Microsoft ships Alpine-based variants of .NET like this (at the time of writing, .NET 9):
FROM mcr.microsoft.com/dotnet/aspnet:9.0-alpine
A minimal container is, to me, what containerization is really about: an OS that is minimal in scope and focused purely on running the app.
But there’s a gotcha—cultural and timezone data isn’t included by default. To make your app work correctly across locales and timezones, add:
RUN apk add --no-cache icu-libs tzdata
See Andrew Lock’s excellent guide for deeper insights on this issue.
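To make the dependency concrete, here's a minimal sketch (my own illustrative snippet, not from a real app) of code that quietly relies on both packages:

using System;
using System.Globalization;

// Culture-sensitive formatting needs ICU data (icu-libs on Alpine); without it,
// .NET either fails at startup or falls back to invariant-culture behavior,
// depending on configuration.
var dutch = CultureInfo.GetCultureInfo("nl-NL");
Console.WriteLine(1234.56.ToString("C", dutch)); // e.g. "€ 1.234,56"

// IANA time zone lookups need tzdata; without it, this throws TimeZoneNotFoundException.
var amsterdam = TimeZoneInfo.FindSystemTimeZoneById("Europe/Amsterdam");
Console.WriteLine(TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, amsterdam));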
Docker Security: Running as Non-Root (And Doing It Right)
One of the most common but overlooked Docker security pitfalls is that containers run as `root` by default. If someone breaks out of your app process, they're root inside the container. Bad news.
Defining the Non-Root User
Start by creating a lightweight user and group in the image:
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
- `-S` creates system users/groups (no home directory, no password).
- This keeps the image small and secure.
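One optional refinement, useful later if Kubernetes should verify the user (e.g. `runAsNonRoot` with a matching `runAsUser`): pin explicit numeric IDs. A sketch using BusyBox's `addgroup`/`adduser` flags; 1001 is an arbitrary choice:

RUN addgroup -S -g 1001 appgroup && adduser -S -u 1001 -G appgroup appuser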
Secure File Ownership: Use COPY --chown
I used to rely on fixing permissions like this:
COPY ./build/api .
RUN chown -R appuser:appgroup .
But this isn’t ideal:
- Adds an extra layer.
- Slower on large file sets.
- Messy.
Then I learned to assign ownership directly at copy time:
COPY --chown=appuser:appgroup ./build/api .
This:
- Instantly assigns correct ownership.
- Avoids an extra `RUN chown` layer.
- Makes your `Dockerfile` cleaner and more declarative.
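You can see the difference in the image itself: with the `RUN chown -R` approach the copied files are stored twice, once in the COPY layer and once in the chown layer. `docker history` makes this visible (the image tag here is illustrative):

docker history api:local
# with RUN chown -R, the app files appear in two layers of roughly equal size;
# with COPY --chown, they appear only once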
Putting It Together
FROM mcr.microsoft.com/dotnet/aspnet:9.0-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --chown=appuser:appgroup ./build/api .
COPY --chown=appuser:appgroup entrypoint.sh .
USER appuser
ENTRYPOINT ["./entrypoint.sh"]
- The app runs with the least privilege necessary.
- Files are owned properly the moment they’re brought into the image.
- Clean. Predictable. Secure.
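The Dockerfile above assumes an `entrypoint.sh` sitting next to it. Here's a minimal sketch; the assembly name `Api.dll` is a placeholder, and the script needs to be marked executable before it's copied in (or add a `RUN chmod +x` step):

#!/bin/sh
set -e
# exec replaces the shell, so signals (SIGTERM on pod shutdown) reach the app directly
exec dotnet Api.dll "$@"

A quick sanity check after building: `docker run --rm <image> id` should report `appuser`, not `uid=0(root)`.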
CI Builds Artifacts, Docker Just Packages
One of the best things you can do is keep your Dockerfile lean. Avoid compiling inside Docker: it bloats your image and slows builds, and it muddies what Docker is for, which is containerizing your application. By the time Docker gets involved, your app should already be built; that is how I primarily experience Docker: as the 'containerizer'. So use your CI pipeline to build and publish the app, then let Docker package the output. As a bonus, you get inspectable artifacts from every build.
GitHub Actions: Build and Upload Artifacts
- name: Build and Publish
  run: |
    dotnet publish -o ${{ env.PUBLISH_FOLDER_NAME }} ${{ inputs.publish-args }}
- name: Upload Build Artifact
  uses: actions/upload-artifact@v4
  with:
    name: ${{ inputs.artifact-name }}
    path: ${{ inputs.project-folder }}/${{ env.PUBLISH_FOLDER_NAME }}
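For completeness, this assumes a workflow-level env along these lines; the folder name is illustrative:

env:
  PUBLISH_FOLDER_NAME: publish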
Then in your Docker build step, pull the artifacts back down:
- name: Download artifacts
  run: |
    IFS=',' read -ra artifacts <<< "${{ inputs.download-artifact }}"
    for artifact in "${artifacts[@]}"; do
      mkdir -p "${{ inputs.working-directory }}/build/$artifact"
      gh run download --name "$artifact" --dir "${{ inputs.working-directory }}/build/$artifact"
    done
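Note that `gh run download` needs a `GH_TOKEN` in the step's environment. If the download happens in the same workflow that uploaded the artifact, `actions/download-artifact@v4` is a simpler alternative; sketched here for the single-artifact case:

- name: Download artifact
  uses: actions/download-artifact@v4
  with:
    name: ${{ inputs.download-artifact }}
    path: ${{ inputs.working-directory }}/build/${{ inputs.download-artifact }}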
Finally, build and push the image:
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    file: ${{ env.DOCKERFILE }}
    context: ${{ inputs.working-directory }}
    push: true
    tags: ${{ inputs.container-tags }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
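One step this snippet presumes has already run: authenticating to the registry. A typical companion step, assuming GitHub Container Registry:

- name: Log in to registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}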
This artifact-first approach gives you:
- Reproducibility
- Cleaner build caching
- Easy debugging (you can inspect the build output separately)
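You can rehearse the same artifact-first flow locally before wiring up CI; paths and tags here are illustrative:

dotnet publish ./src/Api -c Release -o ./build/api
docker build -t api:local .
docker run --rm -p 8080:8080 api:local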
Kubernetes + Helm: Resource Limits That Actually Work
Let’s be real—.NET isn’t the smallest kid on the block. You can’t slap a tiny resource limit on it without consequences.
What Microsoft Recommends for AKS
Microsoft’s official guidance for AKS firmly states:
“Set pod requests and limits on all pods in your YAML manifests. If the AKS cluster uses resource quotas and you don’t define these values, your deployment may be rejected.”
— AKS Best Practices (resource requests & limits)
They further caution:
“Pod CPU and memory limits define the maximum amount of CPU and memory a pod can use… avoid setting a pod limit higher than your nodes can support.”
— AKS Best Practices (resource guidelines)
Microsoft also provides a default starting configuration in their examples:
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 256Mi
This isn't a strict minimum, but it is a realistic baseline that balances scheduling, performance, and cost.
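Before tuning below that baseline, measure what your pods actually use. With metrics-server installed, `kubectl top` shows live consumption (the pod name and numbers here are purely illustrative):

kubectl top pod my-api-pod
# NAME         CPU(cores)   MEMORY(bytes)
# my-api-pod   5m           142Mi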
My .NET-Focused Configuration
Here’s the setup that consistently works for .NET workloads I test:
resources:
  requests:
    cpu: 10m
    memory: 20Mi
  limits:
    cpu: 100m
    memory: 175Mi
While .NET can technically run with ~125 Mi of memory, in practice this leads to:
- Sluggish cold starts
- Failing health probes
- Garbage collector thrash
Pushing memory to 175 Mi ensures decent startup times and runtime stability.
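Since these numbers differ per environment, expose them through your Helm chart's values instead of hardcoding them. A sketch assuming a conventional chart layout; the `DOTNET_gcServer` line is an optional tweak, not part of the original setup:

# values.yaml
resources:
  requests:
    cpu: 10m
    memory: 20Mi
  limits:
    cpu: 100m
    memory: 175Mi

# templates/deployment.yaml (container excerpt; indentation is illustrative)
resources:
  {{- toYaml .Values.resources | nindent 10 }}
env:
  # Optional tweak: Workstation GC tends to use less memory in tiny pods
  - name: DOTNET_gcServer
    value: "0"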
TL;DR Recommendations
- Always define both `requests` and `limits`
- Memory: Setting `limit > request` improves stability; start around 175 Mi for .NET
- CPU: A reasonable request (~100m) with a higher limit helps performance without causing throttling
- These aren't arbitrary; they reflect Microsoft's AKS baseline examples (Deployment and cluster reliability best practices for Azure, Resource management best practices for Azure Kubernetes Service, What is the best practice to have request and limit values to a pod in)
Final Thoughts
- Use small base images like Alpine, but patch them with what you need (e.g. `icu-libs` and `tzdata`)
- Run as a non-root user inside your Docker containers
- Use CI to build the app, and let Docker just package it
- Tune your K8s Helm charts to keep .NET's footprint small, yet still responsive under the pressure of your workload
Containers are amazing, but they’re even better when treated with care. With these practices, you’ll ship faster, safer, and smarter—whether it’s production, staging, or even your home lab.
Got questions or tweaks to share? Drop them in the comments—I’d love to hear your workflow!