After understanding how cloud computing works in my previous post, I was eager to move from concepts to creation. I wanted to actually build something in the cloud – and that’s when I discovered Amazon EC2 (Elastic Compute Cloud) – the beating heart of AWS infrastructure.
This was the first time I felt like I wasn’t just using the cloud… I was running it.
What is EC2?
EC2 is Amazon’s virtual server service – like renting a computer in the cloud that you can access anytime, configure however you want, and scale instantly.
Think of EC2 as your personal Linux or Windows machine – but hosted globally and available on demand. Each instance (server) you launch can host an application, a script, a database, or even an entire architecture.
When I launched my first EC2 instance, it wasn’t just a VM – it was a sandbox for experimentation.
Understanding AMI – The Blueprint of Your Cloud Machines
Every EC2 instance starts from an AMI (Amazon Machine Image).
An AMI is like a template that defines:
- The operating system (Ubuntu, Amazon Linux, Windows)
- Pre-installed packages
- System configuration
When I was testing my Log Analyzer project, I picked an Ubuntu 22.04 AMI, installed git, cron, gzip, and cloned my repository. I later created my own custom AMI – so I could launch pre-configured instances instantly, saving time in future experiments.
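Baking a custom AMI can be done from the console or with a single CLI call; a minimal sketch (the instance ID and image name here are placeholders):

```bash
# Bake the configured instance into a reusable AMI
# (instance ID and image name are placeholders)
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "log-analyzer-base-v1" \
  --description "Ubuntu 22.04 with git, cron, gzip and the analyzer pre-installed"
```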
Key Lesson: Building custom AMIs means infrastructure can be versioned just like code.
Instance Metadata and User Data – Automating Configuration
One of the most powerful features of EC2 is instance metadata and user data. They allow your instance to know about itself and even auto-configure during boot.
Instance Metadata
This is the instance’s “self-awareness.” It stores dynamic information like:
- Instance ID
- Public/Private IP
- Security group
- Region
Command to view metadata (from inside EC2):
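```bash
# Query the link-local metadata endpoint (IMDSv1)
curl http://169.254.169.254/latest/meta-data/

# Newer instances default to IMDSv2, which needs a session token first
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```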
User Data
This is a script that runs automatically when your EC2 instance starts for the first time. I used it to automate dependency installation and log setup:
```bash
#!/bin/bash
# User data runs as root on first boot, so sudo isn't needed
apt update -y
apt install -y git cron gzip
# Clone to an explicit path so the cron entry below matches it
git clone https://github.com/sheersh123/bash-log-analyzer.git /home/ubuntu/bash-log-analyzer
chmod +x /home/ubuntu/bash-log-analyzer/log_analyzer.sh
# Run the analyzer against syslog every day at midnight
(crontab -l 2>/dev/null; echo "0 0 * * * /home/ubuntu/bash-log-analyzer/log_analyzer.sh /var/log/syslog") | crontab -
```
When I launched the instance, it set up my environment automatically – no manual SSH, no copy-paste; everything was ready within minutes.
Key Lesson: Automation doesn’t start with tools like Ansible – it starts with User Data scripts.
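A user-data script like the one above can be attached at launch time instead of pasted into the console; a minimal CLI sketch (the AMI ID, key pair name, and the setup.sh filename are placeholders):

```bash
# Launch a Free Tier instance with the script above as user data
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --user-data file://setup.sh
```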
EC2 Instance Types and Pricing Models – The Cost Optimization Game
AWS gives you flexibility not just in size, but in how you pay.
Instance Types
Each type is designed for a specific workload:
Family | Example | Use Case |
---|---|---|
General Purpose | t3.micro | Lightweight apps, testing |
Compute Optimized | c5.large | High CPU workloads |
Memory Optimized | r5.large | Databases, analytics |
Storage Optimized | i3.large | High disk I/O operations |
GPU Instances | p3.2xlarge | ML, deep learning |
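Instance specs can also be compared straight from the CLI; a quick sketch using describe-instance-types:

```bash
# Compare vCPUs and memory across a few candidate types
aws ec2 describe-instance-types \
  --instance-types t3.micro c5.large r5.large \
  --query "InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, MemoryInfo.SizeInMiB]" \
  --output table
```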
Pricing Models
Model | Description | Use Case |
---|---|---|
On-Demand | Pay hourly – no commitment | Testing, short workloads |
Reserved | Commit for 1 or 3 years at a lower rate | Long-term stable apps |
Spot | Use spare capacity at huge discounts | Flexible, interruptible tasks |
Savings Plans | Flexible compute commitment | Mixed workloads |
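For Spot in particular, current prices can be checked before committing; a hedged sketch:

```bash
# Recent Spot prices for t3.micro Linux instances in the default region
aws ec2 describe-spot-price-history \
  --instance-types t3.micro \
  --product-descriptions "Linux/UNIX" \
  --max-items 5
```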
When I started, I used t2.micro (Free Tier) to experiment. It was enough for scripts, GitHub syncs, and learning automation.
Key Lesson: Cloud computing rewards those who understand efficiency – not just scalability.
AWS CLI – Controlling AWS from the Command Line
The AWS CLI became my favorite DevOps weapon. Instead of clicking through the console, I started managing everything through the terminal.
Installing AWS CLI
```bash
# Installs the AWS CLI from the Ubuntu repos (AWS also ships its own v2 installer)
sudo apt install awscli -y
aws configure   # prompts for credentials and defaults, saved under ~/.aws/
```
You’ll be prompted for:
- AWS Access Key ID
- AWS Secret Access Key
- Default region name
- Default output format (json, text, or table)
Common AWS CLI Commands
Command | Purpose |
---|---|
aws ec2 describe-instances | List all EC2 instances |
aws s3 ls | List S3 buckets |
aws ec2 stop-instances --instance-ids <id> | Stop a specific instance |
aws s3 cp ./reports s3://my-devops-logs/ --recursive | Upload the reports directory to S3 |
aws iam list-users | View IAM users |
Key Lesson: The CLI is where DevOps engineers truly control the cloud – it’s scriptable, repeatable, and fast.
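A small sketch of what that scriptability looks like in practice (the Project tag here is hypothetical – adjust to your own tagging scheme):

```bash
# Stop every running instance tagged Project=log-analyzer
ids=$(aws ec2 describe-instances \
  --filters "Name=tag:Project,Values=log-analyzer" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text)
[ -n "$ids" ] && aws ec2 stop-instances --instance-ids $ids
```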
My AWS Task – Launching My First Windows EC2 Instance
After experimenting with Linux instances for scripting and automation, I wanted to test how DevOps workflows translate into Windows environments.
As part of my AWS Task, I decided to:
- Create a Windows VM on AWS EC2
- Connect via RDP (Remote Desktop Protocol)
- Open CMD inside the instance
- Verify system details such as hostname, architecture, and OS build
This task helped me understand how cross-platform management works in the cloud.
Tech Stack Used:
- AWS EC2 (Windows)
- RDP client (pre-installed on Windows)
Commands Executed in CMD:
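```cmd
:: Representative checks – hostname, OS build, architecture, network
hostname
systeminfo | findstr /B /C:"OS Name" /C:"OS Version" /C:"System Type"
ipconfig
```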
Once connected, I could view the instance’s IP, processor, and memory details – confirming that my Windows EC2 instance was successfully deployed and live.
Key Lesson: Managing Windows VMs on AWS gives a new perspective – it’s not just about Linux automation; DevOps engineers often maintain hybrid environments where both OS types coexist.
This small exercise boosted my confidence in handling multi-OS infrastructure – a key skill when working in enterprise-scale DevOps setups.
My Turning Point – The “Vanishing Instance” Moment
During one of my test runs, I accidentally terminated an EC2 instance without creating an AMI backup. All my logs and configurations vanished.
It was frustrating – but it also taught me one of the most powerful lessons in cloud computing:
“In the cloud, if you didn’t back it up, it never existed.”
Since then, I’ve built the habit of creating snapshots and AMIs before every experiment.
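A snapshot is a single command; a sketch with a placeholder volume ID:

```bash
# Snapshot the root EBS volume before a risky experiment
# (look up your volume ID with describe-volumes)
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "pre-experiment backup"
```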
Key Takeaways
- EC2 is the core of AWS computing – your virtual data center
- AMIs are blueprints for consistent deployments
- Metadata & User Data enable automation from the first boot
- Understanding pricing models saves money and prevents costly mistakes
- The AWS CLI turns DevOps engineers into automation pros
- Hybrid environments (Linux + Windows) reflect real-world DevOps challenges
What’s Next – Load Balancing & Auto Scaling in AWS
Now that I’ve learned to launch and automate EC2 instances, the next step is understanding how to distribute traffic and maintain high availability.
In my next post, I’ll explore how AWS helps scale applications seamlessly through Load Balancers and Auto Scaling Groups.
Here’s what’s coming next:
- Load Balancer – The foundation of traffic management in AWS
- Application Load Balancer (ALB) – Handling HTTP/HTTPS traffic intelligently
- Network Load Balancer (NLB) – High-performance traffic routing at Layer 4
- Launch Templates – Predefined instance configurations for auto-scaling
- Types of Load Balancers – Understanding Classic vs Application vs Network
- Target Groups and Listeners – The logic behind routing and instance health checks
- Auto Scaling Group (ASG) – Automatically adjusting instance count based on demand
“Scaling isn’t about adding servers – it’s about maintaining stability while the world grows around your system.”
Stay tuned – the next post will be all about keeping your cloud architecture resilient, dynamic, and efficient.