This content originally appeared on DEV Community and was authored by bedil karimov
Our organization is embarking on the production deployment of a newly developed Blog web application. Built by our Fullstack development team, the project has now been handed to the DevOps team, which is tasked with architecting and implementing a robust, secure, and highly available infrastructure on the Amazon Web Services cloud. The core objective is to take the provided Django application, already coded and version-controlled, and deploy it into a production environment designed for performance, scalability, and resilience.
The application itself is a feature-rich blogging platform: end-users can register accounts and publish their own content, including text, images, and videos. Its data management strategy spans three AWS services. Sensitive user registration information must be stored securely in a dedicated MySQL relational database, provisioned through the AWS Relational Database Service (RDS). All user-uploaded media files, such as pictures and video assets, are to be stored as objects in a secure S3 bucket. And to enable efficient indexing and retrieval of those media files, a serverless mechanism will catalog and maintain a list of all S3 objects in a DynamoDB table.
The deployment of this Django-based web application requires a foundational network built from the ground up within a new, logically isolated Virtual Private Cloud (VPC). To ensure high availability and fault tolerance, this VPC will span two Availability Zones. Within each zone, the network will be divided into a public subnet for internet-facing resources and a private subnet for protected backend components. Connectivity to the internet will be provided by an Internet Gateway, while a dedicated NAT Instance in one of the public subnets will give services in the private subnets secure, outbound-only internet access. A Bastion host will be established for secure administrative access; it can be a new dedicated instance or co-located with the NAT instance. Traffic flow will be controlled through managed public and private route tables, with routing policies and subnet associations configured to enforce strict separation between the public and private tiers.
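As a concrete reference, here is a minimal boto3 sketch of that layout. The region, CIDR blocks, and the NAT instance ID are illustrative assumptions, not values from the project specification:

```python
# Minimal sketch of the two-AZ VPC layout; names, CIDRs, and IDs are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One public and one private subnet per Availability Zone.
subnets = {}
for name, cidr, az in [
    ("public-a",  "10.0.1.0/24", "us-east-1a"),
    ("public-b",  "10.0.2.0/24", "us-east-1b"),
    ("private-a", "10.0.3.0/24", "us-east-1a"),
    ("private-b", "10.0.4.0/24", "us-east-1b"),
]:
    subnets[name] = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az
    )["Subnet"]["SubnetId"]

# Internet Gateway for the public tier.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Public route table: default route to the IGW, associated with both public subnets.
public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=public_rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
for name in ("public-a", "public-b"):
    ec2.associate_route_table(RouteTableId=public_rt, SubnetId=subnets[name])

# Private route table: default route through the NAT instance in a public subnet.
private_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=private_rt, DestinationCidrBlock="0.0.0.0/0",
                 InstanceId="i-0123456789abcdef0")  # placeholder NAT instance ID
for name in ("private-a", "private-b"):
    ec2.associate_route_table(RouteTableId=private_rt, SubnetId=subnets[name])
```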
At the heart of the application tier, an Application Load Balancer (ALB) will serve as the primary entry point for all incoming user traffic. The ALB will listen for both HTTP and HTTPS connections, with a rule that automatically redirects all insecure HTTP traffic to HTTPS. Security will be enforced by an AWS Certificate Manager (ACM) certificate attached to the HTTPS listener. The load balancer will distribute requests across a fleet of Ubuntu 18.04 EC2 instances managed by an Auto Scaling Group (ASG). The ASG is designed for elasticity and resilience: a minimum and desired capacity of two instances, scaling out to a maximum of four. Scaling will be governed by a Target Tracking policy based on an average CPU utilization target of 70%, with a 200-second warm-up period to ensure instance readiness. Instance health will be monitored by ELB health checks, with a 90-second grace period after launch. Furthermore, email notifications will alert administrators of any instance launch, termination, or failure events.
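The same configuration can be expressed with boto3. The group, template, and topic names and the ARNs below are placeholder assumptions; the capacity bounds, health check settings, warm-up, and 70% CPU target come from the description above:

```python
# Sketch of the ASG, its target tracking policy, and SNS notifications.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="blog-asg",                      # assumed name
    LaunchTemplate={"LaunchTemplateName": "blog-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    HealthCheckType="ELB",
    HealthCheckGracePeriod=90,
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/blog-tg/..."],  # placeholder
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",            # the two private subnets
)

# Target Tracking policy: keep average CPU at 70%, with a 200s warm-up
# so freshly launched instances don't skew the metric while they boot.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="blog-asg",
    PolicyName="cpu-70-target-tracking",
    PolicyType="TargetTrackingScaling",
    EstimatedInstanceWarmup=200,
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)

# Notify an SNS topic (with an admin email subscription) on lifecycle events.
autoscaling.put_notification_configuration(
    AutoScalingGroupName="blog-asg",
    TopicARN="arn:aws:sns:...:blog-asg-events",           # placeholder
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
        "autoscaling:EC2_INSTANCE_TERMINATE",
        "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
    ],
)
```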
The blueprint for these instances will be defined in a Launch Template. The template automates the entire provisioning process: a startup script prepares the Django environment, clones the “clarusway_aws_capstone” application folder from its GitHub repository, installs the dependencies listed in requirements.txt, and deploys the application on port 80. The instances, specified as the t2.micro type and tagged for project identification, will reach S3 through an attached IAM role granting full S3 access. Their security group will accept HTTP and HTTPS traffic only from the ALB’s security group, plus SSH connections for administrative purposes.
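Here is a hedged sketch of what that Launch Template might look like via boto3, with the provisioning steps expressed as a user-data script. The AMI ID, repository URL, security group ID, instance profile name, and the use of Django’s built-in server on port 80 are all illustrative assumptions:

```python
# Sketch of the Launch Template; IDs, names, and the repo URL are placeholders.
import base64
import boto3

USER_DATA = """#!/bin/bash
apt-get update -y
apt-get install -y git python3-pip
# Clone the application (URL assumed; the spec names the clarusway_aws_capstone folder).
git clone https://github.com/<account>/clarusway_aws_capstone.git /home/ubuntu/app
cd /home/ubuntu/app
pip3 install -r requirements.txt
python3 manage.py migrate --noinput
# Serve the Django app on port 80 so the ALB target group can reach it.
python3 manage.py runserver 0.0.0.0:80
"""

ec2 = boto3.client("ec2")
ec2.create_launch_template(
    LaunchTemplateName="blog-lt",                              # assumed name
    LaunchTemplateData={
        "ImageId": "ami-xxxxxxxx",                             # an Ubuntu 18.04 AMI ID
        "InstanceType": "t2.micro",
        "IamInstanceProfile": {"Name": "blog-s3-full-access"}, # role with S3 full access
        "SecurityGroupIds": ["sg-xxxxxxxx"],                   # HTTP/HTTPS from ALB SG + SSH
        "UserData": base64.b64encode(USER_DATA.encode()).decode(),
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Project", "Value": "blog-capstone"}],
        }],
    },
)
```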
The data tier is designed for maximum security. The RDS for MySQL instance, a db.t2.micro running engine version 8.0.20, will be placed within one of the private subnets, with network access restricted so that only the application tier’s EC2 instances can reach it. The database endpoint and credentials will be configured within the Django application’s settings file, as detailed in the developer notes. To keep communication between the application servers and the media storage private, a VPC Endpoint for S3 will be established, ensuring that traffic between EC2 and S3 never traverses the public internet.
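For reference, the relevant Django settings might look like the sketch below, assuming the standard DATABASES block and the django-storages backend for S3 media. Every name here is a placeholder; in practice the endpoint and credentials would come from environment variables or a secrets store rather than being hard-coded:

```python
# settings.py fragment (sketch); names and region are assumptions.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "blog_db",                          # assumed database name
        "HOST": os.environ["RDS_ENDPOINT"],         # the RDS endpoint from the console
        "PORT": "3306",
        "USER": os.environ["RDS_USERNAME"],
        "PASSWORD": os.environ["RDS_PASSWORD"],
    }
}

# Media uploads go to the S3 bucket; with a gateway VPC Endpoint for S3 in the
# private route table, this traffic stays off the public internet.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"  # requires django-storages
AWS_STORAGE_BUCKET_NAME = "blog-media-bucket"                      # assumed bucket name
AWS_S3_REGION_NAME = "us-east-1"                                   # assumed region
```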
For performance and high availability at a global scale, AWS CloudFront will be implemented as a caching layer and content delivery network, positioned in front of the Application Load Balancer. The CloudFront distribution will be configured for end-to-end security, communicating with the ALB exclusively over HTTPS and redirecting all viewer HTTP requests to HTTPS. It will be set up to allow all necessary HTTP methods (GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE) and to forward all cookies to support the dynamic nature of the blog application. The same ACM certificate used for the ALB will secure the CloudFront distribution.
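The relevant slice of the DistributionConfig passed to cloudfront.create_distribution could look like this sketch (the full config also requires CallerReference, Comment, Enabled, and Quantity wrappers around the origin and behavior lists). The origin domain name and certificate ARN are placeholders:

```python
# Fragments of a CloudFront DistributionConfig implementing the behavior above.
default_cache_behavior = {
    "TargetOriginId": "blog-alb-origin",
    "ViewerProtocolPolicy": "redirect-to-https",   # viewer HTTP -> HTTPS
    "AllowedMethods": {
        "Quantity": 7,
        "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
        "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
    },
    # Forward every cookie so Django sessions and CSRF tokens survive the cache.
    "ForwardedValues": {
        "QueryString": True,
        "Cookies": {"Forward": "all"},
        "Headers": {"Quantity": 1, "Items": ["Host"]},
    },
    "MinTTL": 0,
}

origin = {
    "Id": "blog-alb-origin",
    "DomainName": "blog-alb-123456.us-east-1.elb.amazonaws.com",  # placeholder ALB DNS name
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",      # CloudFront -> ALB over HTTPS only
    },
}

viewer_certificate = {
    # CloudFront requires the ACM certificate to live in us-east-1.
    "ACMCertificateArn": "arn:aws:acm:us-east-1:...:certificate/...",  # placeholder
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1.2_2021",
}
```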
Finally, the entire service will be made accessible through a custom domain managed by Route 53. This DNS configuration will feature a Failover routing policy to ensure maximum uptime. The primary destination for traffic will be the CloudFront distribution. Route 53 will continuously perform health checks on this primary endpoint. In the event of an outage or if CloudFront is deemed unhealthy, Route 53 will automatically reroute all traffic to a secondary, emergency endpoint: a static website hosted in a separate S3 bucket. This failover site will display a simple “under construction” page, ensuring users receive a response even during a primary system failure.
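A sketch of the two failover records, assuming boto3 and placeholder zone, domain, and health-check IDs. Z2FDTNDATAQYW2 is the fixed hosted-zone ID used for all CloudFront aliases; the S3 website hosted-zone ID varies by region (the us-east-1 value is shown):

```python
# Failover routing sketch: PRIMARY -> CloudFront, SECONDARY -> S3 static site.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",                 # placeholder: the domain's hosted zone
    ChangeBatch={"Changes": [
        {   # PRIMARY: alias to the CloudFront distribution, gated by a health check.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",         # placeholder domain
                "Type": "A",
                "SetIdentifier": "primary-cloudfront",
                "Failover": "PRIMARY",
                "HealthCheckId": "placeholder-health-check-id",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "dxxxxxxxxxxxx.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        },
        {   # SECONDARY: alias to the "under construction" S3 static website.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "secondary-s3-static",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",  # S3 website endpoint zone, us-east-1
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]},
)
```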
To complete the data architecture, the process of indexing S3 objects in DynamoDB will be automated by a Python 3.8 Lambda function. This function, triggered by S3 object creation events in the primary media bucket, will require IAM permissions to access S3, write to DynamoDB, and operate within the created VPC. The Lambda will extract the necessary metadata from new uploads and populate the DynamoDB table, which is structured with a primary key of ‘id’, ensuring a reliable and scalable index of all user-generated media.
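A minimal sketch of that handler, assuming the table name arrives via an environment variable and the S3 object key doubles as the ‘id’ primary key; the extra attributes stored alongside it are assumptions:

```python
# Sketch of the indexing Lambda (Python 3.8); table name and attributes assumed.
import os
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["TABLE_NAME"])  # e.g. "blog-media-index"

def lambda_handler(event, context):
    # S3 "ObjectCreated" notifications deliver one or more records per event.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)
        table.put_item(Item={
            "id": key,                 # the table's primary key, per the spec
            "bucket": bucket,
            "size": size,
            "event_time": record["eventTime"],
        })
    return {"indexed": len(event["Records"])}
```

One design note: because the function runs inside the VPC, reaching DynamoDB requires either the NAT instance route in the private route table or a DynamoDB gateway endpoint added alongside the S3 one.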