This content originally appeared on DEV Community and was authored by Athreya aka Maneshwar
Hello, I’m Maneshwar. I’m building LiveReview, a private AI code review tool that runs on your LLM key (OpenAI, Gemini, etc.) with highly competitive pricing — built for small teams. Do check it out and give it a try!
Today, my Ubuntu server started complaining:
Discord: “Low disk space, only 3 gigabytes left.”
Some of my data was filling up fast; for the sake of this article, let’s just assume it was GitLab backups. (Yes, I know, nobody keeps backups on the same server; this is just an example!)
Now, upgrading the server by adding more disk space was one option. But then I remembered something:
I had free AWS credits just sitting there.
So I thought: “Why not pull storage from S3 as needed instead of paying monthly for a bigger disk?”
That’s when I stumbled onto s3fs, a magical little tool that lets you mount an S3 bucket as if it were a local folder.
Suddenly, my tiny server had access to petabytes of storage, all for free (well, until the credits run out).
Here’s how I did it.
Step 1: Install s3fs
On Ubuntu:
sudo apt-get update
sudo apt-get install s3fs -y
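To confirm the install worked, check the version:
s3fs --version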
Step 2: Add AWS Credentials
s3fs needs your AWS keys. I stored them safely in ~/.passwd-s3fs:
echo "AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
Replace the placeholders with your actual access key ID and secret; this file acts like your magic key to S3.
Step 3: Create a Mount Point
This is where my S3 bucket would show up:
sudo mkdir -p /mnt/gitlab_backups
Step 4: Mount the Bucket
Then the fun part — connecting S3 like a hard drive:
s3fs your-aws-bucket /mnt/gitlab_backups \
-o passwd_file=/home/ubuntu/.passwd-s3fs \
-o url=https://s3.ap-south-1.amazonaws.com \
-o endpoint=ap-south-1 \
-o use_path_request_style \
-o allow_other
Boom! Suddenly /mnt/gitlab_backups had virtually unlimited space.
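If you want the mount to come back after a reboot, an /etc/fstab entry does the same job. Here’s a sketch, assuming the same bucket, mount point, and credentials file as above:
your-aws-bucket /mnt/gitlab_backups fuse.s3fs _netdev,allow_other,use_path_request_style,passwd_file=/home/ubuntu/.passwd-s3fs,url=https://s3.ap-south-1.amazonaws.com,endpoint=ap-south-1 0 0
After saving it, sudo mount -a (or a reboot) mounts the bucket automatically.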
Step 5: Verify
df -h | grep s3fs
Output looked roughly like this:
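s3fs             64P     0   64P   0% /mnt/gitlab_backups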
Yep, that’s 64 petabytes available (S3 fakes this number, but still — it felt like Christmas).
Step 6: Allow Other Users (Optional)
By default, only the user who mounted can access it. I edited /etc/fuse.conf and added:
user_allow_other
Then remounted with -o allow_other so my GitLab jobs could write backups there too.
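If you’d rather make that edit from the shell, something like this appends the flag only if it isn’t already there:
grep -q "^user_allow_other" /etc/fuse.conf || echo "user_allow_other" | sudo tee -a /etc/fuse.conf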
Step 7: Unmount & Cleanup
When done:
sudo umount /mnt/gitlab_backups
And if I ever ditch this setup:
sudo apt-get remove --purge s3fs -y
The Payoff
Instead of upgrading my VPS or deleting old backups, I redirected them all to S3.
My server now feels light and fast, while AWS handles the heavy lifting.
It’s like having a tiny flat with an infinite basement.
I just keep dumping boxes down there, and somehow there’s always more room.
Now whenever someone complains about “disk full” on their server, I can tell them:
“Don’t fight for space. Just borrow AWS’s basement.”
LiveReview helps you get great feedback on your PR/MR in a few minutes.
Saves hours on every PR by giving fast, automated first-pass reviews.
If you’re tired of waiting for peers to review your code, or you’re not confident they’ll give valid feedback, LiveReview is there for you.