Taking The Cloud Resume Challenge: GCP Style



This content originally appeared on DEV Community and was authored by Shawn Gestupa

What’s self-learning without some challenges to shape us? So, for the past few weeks, I’ve taken on The Cloud Resume Challenge by Forrest Brazeal and re-created my resume using Google Cloud services instead of my usual cloud platform (nothing wrong with exploring a little hehe).

Feel free to check out my recreated resume here: https://resume.smgestupa.dev (it won’t replace the resume on my portfolio website).

I followed most of the requirements/steps, except for steps 1. Certification (I got too excited and skipped ahead) and 12. Infrastructure as Code (I’m saving that for another blog post). Still, I was able to finish the challenge:

resume.smgestupa.dev Preview

Disclaimer: this isn’t a step-by-step guide; instead, it focuses on my process and the decisions I made while researching. I do give out some tips on certain things, but figuring things out yourself will always be better.

How did I start this?

This challenge tested me, especially when building the CI/CD pipelines for my repositories. Thankfully, I got to put my skills as a Full-Stack Developer to good use, thanks to my previous clients who trusted me to build their theses, capstones, etc.

I went with SvelteKit to make everything easier for me (feel free to use what works for you to achieve your goal). I also used Tailwind CSS’s Preflight to reset the default browser styles, which made styling super easy.

From there, I was able to recreate my original resume for steps 2. HTML and 3. CSS, though it’s not an exact 1-to-1 match. I did my best to keep the look somewhat similar, but things like the exact font and dimensions were hard to determine.

At first, everything was just running locally:

Local architecture where I can only access my website

Since no one else could visit what I’d made, my next step was to deploy it to Google Cloud as a static website.

Enter Google Cloud

Before deploying, I had to activate the free $300 credits, since some services require billing to be enabled beforehand, such as Cloud Storage, which hosts my recreated resume as a static website (as part of step 4. Static Website).

Note: You may have to enable the Cloud Storage API before creating a bucket.

Hosting a static website with Cloud Storage was simple (a scripted sketch follows the list):

  1. Create my bucket.
  2. Select Uniform Access (as recommended by Google Cloud).
  3. Upload my website files.
  4. Disable the Prevent public access setting.
  5. Add the allUsers principal with the Storage Legacy Object Reader role.
  6. Set the bucket’s website configuration to point to index.html, and boom! I have a static website with HTTPS, for free!
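
If you’d rather script steps 3, 5, and 6 instead of clicking through the console, a rough sketch with the google-cloud-storage Python client could look like this (the bucket and file names are placeholders, not my actual setup):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-resume-bucket")  # placeholder bucket name

# 3. Upload the website files.
bucket.blob("index.html").upload_from_filename("build/index.html")

# 5. Grant the allUsers principal read access to objects.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.legacyObjectReader",
    "members": {"allUsers"},
})
bucket.set_iam_policy(policy)

# 6. Point the bucket's website configuration at index.html.
bucket.configure_website(main_page_suffix="index.html")
bucket.patch()
```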

The architecture for this deployment was:

Architecture using GCS for static website deployment

My estimated cost for the GCS bucket is $0.00/month since my files total less than a gigabyte, and storage in Singapore (asia-southeast1) costs $0.020 per GB per month; rates vary depending on the bucket’s region.

But this was not enough: we can’t expect visitors to remember a long domain, which is why my website needed a user-friendly URL. This can be done with Cloud DNS.

DNS

I created a managed zone in Cloud DNS for my subdomain, resume.smgestupa.dev, as part of step 6. DNS.

Note: You may have to enable the Cloud DNS API before creating a managed zone.

But before that, I bought my own personal domain (smgestupa.dev), which is required for this step; I paid $12.62 for one year, upfront. If you plan on purchasing a domain, choose as if you’ll be using it for the rest of your life.

It was also pretty easy to migrate my domain to Cloud DNS, or in my case just a subdomain, since I didn’t want to migrate my whole domain.

If you don’t plan on moving your whole domain (like me!), the process is simple:

  1. Look up the nameservers generated by your Cloud DNS.
  2. Add those nameservers to your primary DNS management service (as NS records for the subdomain).

After importing the nameservers, you’ll have to create new DNS records in your managed zone moving forward.

Tip: you should consult your DNS management service’s documentation for delegating a subdomain to other nameservers.
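
Creating those records can also be scripted. Here’s a minimal sketch with the google-cloud-dns client, assuming the managed zone already exists (the project ID, zone name, and IP address below are placeholders):

```python
from google.cloud import dns

client = dns.Client(project="my-project")  # placeholder project ID
zone = client.zone("resume-zone", "resume.smgestupa.dev.")

# An A record pointing the subdomain at, say, a load balancer's static IP.
record = zone.resource_record_set(
    "resume.smgestupa.dev.", "A", 300, ["203.0.113.10"]  # placeholder IP
)

change = zone.changes()
change.add_record_set(record)
change.create()  # applies the change to the managed zone
```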

My estimated cost for a managed zone is $0.20/month.

Load Balancing

As part of step 5. HTTPS, I used a Cloud Load Balancer to give my static website a user-friendly domain with HTTPS by linking it with my managed zone. It also distributes user traffic.

Note: You may have to enable the Compute Engine API before creating a load balancer.

For the Frontend configuration, I deployed a public-facing, global load balancer with a rule on port 443 (for HTTPS) and a static IP address, which my subdomain will map to. Since I’m using a managed zone, I secured the rule with a Google-managed SSL certificate.

For the Backend configuration, I created a backend bucket that points to the bucket hosting the static website, with Cloud CDN enabled.

Routing rules were left as-is. After finalizing my changes, I created the load balancer and waited a few seconds for it to initialize.
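
As an illustration of the Backend configuration piece, a backend bucket with Cloud CDN enabled could be created with the compute_v1 client along these lines (the names below are placeholders):

```python
from google.cloud import compute_v1

client = compute_v1.BackendBucketsClient()

backend_bucket = compute_v1.BackendBucket(
    name="resume-backend-bucket",    # placeholder backend bucket name
    bucket_name="my-resume-bucket",  # the GCS bucket hosting the site
    enable_cdn=True,                 # serve cached content through Cloud CDN
)

# Create the backend bucket and wait for the operation to finish.
operation = client.insert(
    project="my-project",  # placeholder project ID
    backend_bucket_resource=backend_bucket,
)
operation.result()
```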

The architecture for this deployment was:

Architecture with Cloud Load Balancer (with Cloud CDN enabled)

My estimated cost is $20.44/month, since my rule incurs $0.028/hour (for the first 5 rules) in Singapore, which works out to about $20.44 over a 730-hour month. The rates will vary depending on the load balancer’s region.

Database

For step 8. Database, I used a regional Firestore database in Native mode to track the number of visits to my static website.

Note: You may have to enable the Cloud Firestore API before creating a database.

Firestore is a NoSQL document database that was easy to set up. In my case, I created two databases, one for each environment: production and staging.

The production database uses the (default) database, since Firestore has a free quota for it, which I believe helps prevent incurring any extra charges.

Additionally, a named database is used for staging; named databases have no free quota, but this one will only be touched by my pipelines.
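
In code, the split looks something like this ("staging" below is just an example database name):

```python
from google.cloud import firestore

# The (default) database, which comes with Firestore's free quota.
prod_db = firestore.Client()

# A named database has no free quota, so it's reserved for pipeline runs.
staging_db = firestore.Client(database="staging")
```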

My estimated total cost for my Firestore databases is $0.00/month, as I don’t expect my production database to exceed the free quota, and my pipelines make fewer than 10 operations per workflow run.

Of course, I can’t just give my static website direct permission to modify my databases, which is why I created a Cloud Function as a middle-man. We should always assume there will be malicious actors who could cause irreparable damage with direct access to a database (I don’t want to get charged by Google Cloud hehe).

Function

To give my static website a controlled way to increment the total visits, I provisioned a Python function (as part of steps 9. API and 10. Python).

Note: You may have to enable the Cloud Functions API before creating a function.

The process for my code was straightforward (a sketch follows the list):

  1. My static website sends an HTTP GET request to my function.
  2. The function increments the counter in Firestore by 1.
  3. Right before the function returns a response, it retrieves the counter’s value, which is added to the response body.
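
Here’s a minimal sketch of that flow; the entry point, document path, field names, and CORS values are illustrative rather than my exact code:

```python
import os

import functions_framework
from google.cloud import firestore

# Pick the target database from an environment variable so the same code
# can hit production ("(default)") or the staging database in pipelines.
db = firestore.Client(database=os.environ.get("FIRESTORE_DATABASE", "(default)"))

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://resume.smgestupa.dev",
    "Access-Control-Allow-Methods": "GET",
    "Access-Control-Allow-Headers": "Accept",
}

@functions_framework.http
def visits(request):
    # CORS preflight: reply with the allowed origin, methods, and headers.
    if request.method == "OPTIONS":
        return ("", 204, CORS_HEADERS)

    # Atomically increment the counter, then read the new value back.
    doc_ref = db.collection("counters").document("visits")
    doc_ref.set({"count": firestore.Increment(1)}, merge=True)
    count = doc_ref.get().to_dict()["count"]

    return ({"counter": count}, 200, CORS_HEADERS)
```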

I opted for a first-generation environment for faster cold starts, and configured the function with 128 MiB of RAM and 0.167 vCPU; I believe my function doesn’t need more for such simple operations.

An environment variable is also used to determine which Firestore database to use, depending on the environment for my pipeline’s testing job.

My estimated cost is $0.00/month, since billing is request-based and I don’t expect the function to be invoked anywhere near a million times per month.

For my static website, I created a new section to display the total number of visits. Additionally, JavaScript code was added that sends a request to the function to increment the total visits and displays the new value from the response body.

The architecture for this deployment was:

Architecture with Cloud Functions and Firestore

Source Control

Of course, I didn’t forget to use source control for all of my code. I’ve always had the habit of creating a repository before I start a new project.

For this challenge, I used GitHub (as part of step 13. Source Control) to store my front-end (static website) and back-end (Python function) in separate, private repositories.

Front-end

Manually building my website and then uploading it to my bucket was good and all, for about 5 seconds. I realized I’d be repeating this tedious process for weeks while doing the challenge, which is why I decided to set up a pipeline with GitHub Actions (as part of step 15. CI/CD (Front-end)).

For my pipeline, I created two workflows depending on the action:

Pull Request to main Branch

  1. Use actions/setup-node@v4 to install Node.js, which will be accessible for the whole runner.
  2. Check out the repo with actions/checkout@v4, then install pnpm, install dependencies, and build, all with pnpm.

Push to main Branch

  1. Use actions/setup-node@v4 to install Node.js, which will be accessible for the whole runner.
  2. Check out the repo with actions/checkout@v4, then install pnpm, install dependencies, and build, all with pnpm. The compiled code is saved for reuse as an artifact via actions/upload-artifact@v4.
  3. Authenticate to Google Cloud, set up the gcloud CLI, and download the compiled code artifact with actions/download-artifact@v4. The artifact is then uploaded to my bucket via gcloud storage cp.

Back-end

Cloud Functions has an option to link a function to a repo, but I opted to set up my automation process manually (as part of step 14. CI/CD (Back-end)) so that I could deepen my understanding of pipelines a little more.

I also had a similar realization as with my front-end repo: I would be repeatedly doing the tedious process of copy-pasting my code to update my function, which led me to build another pipeline.

Additionally, I implemented unit tests (as part of step 11. Tests) for my pipelines (a pytest sketch follows the list):

  • Should return an HTTP 200 status code.
  • Should return an HTTP 200 status code, a counter should be in the response, and the counter should be greater than or equal to 0.
  • Should return an HTTP 204 status code and include Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers in the headers with these values:
    • Access-Control-Allow-Origin: ['https://resume.smgestupa.dev']
    • Access-Control-Allow-Methods: ['GET']
    • Access-Control-Allow-Headers: ['Accept']
  • Should return an HTTP 200 status code, with Access-Control-Allow-Origin set to ['https://resume.smgestupa.dev'].
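
A pytest sketch for a couple of these cases, written against the function sketch from earlier (so the names are again illustrative), could look like this:

```python
from unittest.mock import Mock

import main  # the module containing the visits() function


def test_get_returns_200_and_counter():
    response, status, _ = main.visits(Mock(method="GET"))
    assert status == 200
    assert response["counter"] >= 0


def test_preflight_returns_204_with_cors_headers():
    _, status, headers = main.visits(Mock(method="OPTIONS"))
    assert status == 204
    assert headers["Access-Control-Allow-Origin"] == "https://resume.smgestupa.dev"
    assert headers["Access-Control-Allow-Methods"] == "GET"
    assert headers["Access-Control-Allow-Headers"] == "Accept"
```

Since these tests hit a live Firestore database, the FIRESTORE_DATABASE environment variable is what keeps them pointed at staging instead of production.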

Similar to the pipeline in my front-end repo, I created two workflows depending on the action:

Pull Request to main Branch

  1. Update the FIRESTORE_DATABASE environment variable to point to the staging database, check out the repo with actions/checkout@v4, and run the unit tests.

Push to main Branch

  1. Update the FIRESTORE_DATABASE environment variable to point to the staging database, check out the repo with actions/checkout@v4, and run the unit tests.
  2. Check out the repo with actions/checkout@v4, authenticate to Google Cloud, set up the gcloud CLI, create a build folder, copy main.py and requirements.txt into it, and update the function via gcloud run deploy.

CI/CD

I made heavy use of composite actions to reuse repetitive steps, like authenticating to Google Cloud, which reduced the time spent manually updating those steps across multiple jobs.

For authentication, I utilized OIDC via Workload Identity Federation, which lets me selectively choose which repos can deploy to my Google Cloud project without needing to download a service account’s credentials.

Fortunately, GitHub Actions offers a free quota for private repos, and I estimated the cost to be $0.00/month since I don’t expect my pipelines to run for more than 33 hours in total (roughly the free tier’s 2,000 minutes per month).

Final Architecture

I recreated my resume with SvelteKit, deployed it as a static website with Cloud Storage, connected it to a Cloud Load Balancer, then pointed my subdomain to it with Cloud DNS.

To streamline deployment of both my static website and my function, I automated the process with GitHub Actions.

With everything done, I’m proud to arrive at the best part: the final architecture that is now running my static website.

Final architecture with Google Cloud & CI/CD pipelines

With the help of the Google Cloud Pricing Calculator, I estimated my total monthly cost to be $21.69. I only included the cost of provisioning services and omitted usage-specific metrics such as data transfer:

| Item | Monthly | Yearly |
| --- | --- | --- |
| Personal Domain (1-year, upfront) | $1.05 | $12.62 |
| Cloud Storage | $0.00 | $0.00 |
| Cloud Load Balancer | $20.44 | $245.28 |
| Cloud DNS | $0.20 | $2.40 |
| Cloud Firestore | $0.00 | $0.00 |
| Cloud Run Functions | $0.00 | $0.00 |
| GitHub Actions | $0.00 | $0.00 |
| Total Cost | $21.69/month | $260.30/year |

I still have more than $200 in free credits that expire in less than 90 days, so at least for now, I can keep everything running.

If you’re reading this and want to deploy something similar, I do have an alternative in mind that should bring your monthly cost down to $0.00.

Alternative

Assuming you’ve already bought a personal domain, this setup should reduce your monthly cost to $0.00 by removing both the Cloud Load Balancer and Cloud DNS:

Alternative architecture without Cloud Load Balancer & Cloud CDN

How this works: you redirect/forward your domain/subdomain to the public URL of index.html using your DNS management service. The trade-off is that the public URL will always be displayed after redirection.

Ultimately, it’s about deciding what you’ll compromise on and the trade-offs you’re comfortable with, especially if you want to avoid paying every month just to host a resume.

What I’ve learned

This was an exciting journey. I had already planned on doing this a few months ago but couldn’t bring myself to proceed, because I felt the challenge was unnecessary at the time.

I was fortunate to see myself fail, learn, and grow in new ways, especially on a new cloud platform. I’ve always wanted to grow beyond my current responsibilities, starting with automating more of my deployments.

Failing at automation and monitoring (there were moments I couldn’t understand something) was essential preparation for becoming a DevOps or Site Reliability Engineer. I could clearly see my progress over time, one milestone being the successful deployment of my static website.

I also applied as much as I could of what I’ve learned from my usual cloud platform: designing an architecture that is operational, secure, reliable, and performant, while optimizing for cost.

I know this can be better and there’s always room for improvement, but I’m proud to say that what I’ve learned here will help me become better at whatever comes next.

Next Steps

I’ve actually already started preparing for one: the Google Cloud Associate Cloud Engineer certification, since I want to broaden my skills even more while pushing myself further.

Nevertheless, I’ll keep going on this self-learning journey, focusing more on automation, monitoring, and anything else related to DevOps or Site Reliability Engineering (hehe).

All in all, it was fun to fail and learn.

Thanks for making it this far, I truly appreciate it!

