This content originally appeared on DEV Community and was authored by Vivek
From Web to ML Research Engineer: Day 3 of 60
Hey everyone!
So… Day 3 is in the books, and I’m gonna be real with you – it was one of those days where you feel like you’re drinking from a fire hose while simultaneously trying to build the hose itself.
What I Tackled Today
Eigenvalues and Eigenvectors (The Fun Stuff)
Today was all about diving deep into eigenvalues and eigenvectors. For those not familiar, these are basically the “special directions” and “stretching factors” that matrices have. Think of it like this – when you apply a transformation to a vector, most vectors will change direction. But eigenvectors? They’re the rebels. They stay pointing in the same direction, just getting stretched or shrunk by their eigenvalue.
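If you want to see that concretely, here's roughly what I mean with a toy 2×2 symmetric matrix (just an illustration, not from my actual notes):

```python
import numpy as np

# A small symmetric matrix (toy example)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

v = eigenvectors[:, 0]   # first eigenvector (columns of the result)
lam = eigenvalues[0]     # its eigenvalue

# A @ v points in the same direction as v, just scaled by lam,
# so these two prints match (up to floating point)
print(A @ v)
print(lam * v)
```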
I spent the morning working through Gilbert Strang’s lectures, and honestly, the geometric intuition from 3Blue1Brown’s videos was a game-changer. Seeing those vectors stay in their lanes while everything else gets rotated and skewed around them… it just clicked.
The Implementation Challenge
Here’s where things got interesting (and by interesting, I mean humbling). Implementing eigenvalue decomposition from scratch is… well, let’s just say it’s not as straightforward as matrix multiplication.
I went down a rabbit hole trying to build the power iteration method, which finds the dominant eigenvalue of a matrix. The math is beautiful on paper, but getting it to converge properly? That’s where you really start appreciating the numerical wizardry that goes into libraries like NumPy.
```python
# What I thought would be simple:
import numpy as np

def find_eigenvalue(matrix, iters=1000):
    v = np.random.rand(matrix.shape[0])
    for _ in range(iters):       # Just iterate and it'll converge, right?
        v = matrix @ v
        v /= np.linalg.norm(v)   # Narrator: skip this normalization and it does not simply converge
    return v @ matrix @ v        # Rayleigh quotient of the converged vector ≈ dominant eigenvalue
```
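For what it's worth, the sanity check that finally convinced me the iteration itself was fine was just comparing against NumPy's eigensolver on a small symmetric matrix (hypothetical numbers, not my exact test case):

```python
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

print(find_eigenvalue(A))           # power iteration estimate of the largest eigenvalue
print(np.linalg.eigvals(A).max())   # NumPy's answer, ~4.618 for this matrix
```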
PCA and the “Aha!” Moment
But here’s the cool part – I finally understand Principal Component Analysis (PCA) at a fundamental level. It’s just finding the directions of maximum variance in your data, which are… the eigenvectors of the covariance matrix!
I built a simple PCA implementation from scratch and tested it on some toy data. Watching the algorithm automatically discover the main “direction” of the data felt like magic. This is the kind of stuff that makes all the mathematical heavy lifting worth it.
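My version was essentially this (a minimal sketch of the idea, not my exact code; the toy data here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D data, stretched mostly along one direction
data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                             [0.0, 0.5]])

# 1. Center the data
centered = data - data.mean(axis=0)

# 2. Covariance matrix of the features
cov = np.cov(centered, rowvar=False)

# 3. Eigenvectors of the covariance matrix = principal components
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh: covariance matrices are symmetric

# Sort from largest to smallest variance
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order]

print(components[:, 0])   # direction of maximum variance (roughly the x-axis here)
```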
The Reality Check Section
What Went Well
- Actually grasping the geometric intuition behind eigenvalues
- Successfully implementing power iteration (after debugging for 2 hours)
- Building a working PCA from first principles
- Starting to see connections between linear algebra and machine learning
What Was… Challenging
- The numerical stability issues are real (floating point precision, anyone?)
- Some of the MIT problem sets are genuinely tough
- My brain started feeling like mush around the 10-hour mark
- Realizing there’s still SO much I don’t know
The Honest Truth
This is hard. Like, really hard. I spent probably 3 hours just trying to understand why my eigenvalue calculation was giving me complex numbers when I expected real ones. (Spoiler: not all matrices have real eigenvalues, and that’s totally normal.)
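In case anyone hits the same wall: non-symmetric, rotation-like matrices are the classic culprit, while symmetric matrices, like the covariance matrices PCA cares about, always come back real. A quick illustration (my own matrices, not the ones from the problem set):

```python
import numpy as np

rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])   # 90-degree rotation: no real direction is preserved
symmetric = np.array([[2.0, 1.0],
                      [1.0, 2.0]])

print(np.linalg.eigvals(rotation))    # [0.+1.j, 0.-1.j]  -- complex, and that's fine
print(np.linalg.eigvals(symmetric))   # 3 and 1           -- symmetric matrices stay real
```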
But here’s the thing – I’m starting to see the bigger picture. These aren’t just abstract mathematical concepts. They’re the building blocks of machine learning algorithms I want to understand and improve.
Some Random Thoughts
On Learning in Public
Sharing this journey publicly has been both motivating and terrifying. Yesterday someone commented asking if I really think 60 days is enough, and honestly? I don’t know. But I do know that I’m learning faster than I ever have before, partly because I know people are watching.
On the Math
I used to think linear algebra was just about solving systems of equations. Now I’m starting to see it as the language of transformation and space. Every ML algorithm is essentially about finding the right transformations to apply to data. It’s beautiful and intimidating at the same time.
On the Journey
Three days in, and I’m already noticing that my tolerance for mathematical abstraction is improving. Concepts that seemed impossible on Day 1 are starting to feel… manageable. Not easy, but manageable.
Tomorrow: SVD and Matrix Decompositions
Day 4 is going to be all about Singular Value Decomposition (SVD). I’ve heard it called the “Swiss Army knife of linear algebra,” so I’m both excited and slightly terrified.
The plan is to:
- Understand SVD geometrically (not just algebraically)
- Implement it from scratch (wish me luck)
- Build an image compression demo using SVD (rough sketch after this list)
- Start connecting it to recommendation systems
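For the image compression bullet, the rough shape of what I'm planning looks something like this (a sketch assuming a grayscale image loaded as a 2D array; `keep_k` is just a placeholder name):

```python
import numpy as np

def compress(image, keep_k):
    """Rank-k approximation of a 2D grayscale image via truncated SVD."""
    U, S, Vt = np.linalg.svd(image, full_matrices=False)
    # Keep only the keep_k largest singular values and their vectors
    return U[:, :keep_k] @ np.diag(S[:keep_k]) @ Vt[:keep_k, :]

# e.g. image = some (H, W) float array
# approx = compress(image, keep_k=20)
```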
A Quick Thank You
Thanks to everyone who’s been following along and offering encouragement. Special shout-out to the folks who’ve been pointing out good resources and correcting my misconceptions in the comments.
This community aspect has been unexpected but incredibly valuable. Having people who’ve been through this journey before offer guidance makes this whole crazy endeavor feel less lonely.
The Vibe Check
Am I on track? Hard to say. Am I learning a ton? Absolutely. Am I occasionally questioning my sanity? Maybe a little. But I’m also starting to see glimpses of the bigger picture, and that’s keeping me motivated.
The goal isn’t perfection – it’s progress. And today, despite the struggles and the moments of confusion, I made progress.
See you tomorrow for Day 4: SVD and the Art of Matrix Decomposition.
How’s your own learning journey going? Any tips for staying motivated during the tough mathematical chapters? Drop a comment below – I’d love to hear from you!
Previous posts in this series:
- Day 1: From Web3 to ML Research Engineer – The Journey Begins
- Day 2: Matrix Operations and Building Intuition
Tags: #MachineLearning #LinearAlgebra #60DayChallenge #Eigenvalues #PCA #LearningInPublic #MLJourney