This content originally appeared on DEV Community and was authored by Sundance
Not because I wasn’t qualified, and not because my GPA was trash or my resume was full of red flags. It was the consistency of the feedback that broke me:
“We appreciate your interest, but due to location constraints…”
Basically, my resume wasn’t even seen. I was a sophomore at a state school in another country, applying to positions in New York, Chicago, and the EU. Automated filters killed my application before a human could glance at it.
That hurt differently than regular rejection.
By the time I got my fifth rejection, something clicked. Instead of optimizing my resume formatting or figuring out how to fake a local address, I decided to do something else. I’d build something that couldn’t be ignored. Something real. Something that solved an actual problem in high-frequency trading.
That’s how I ended up building a high-performance NASDAQ ITCH parser in Rust. And eventually, it hit 107 million operations per second.
The Problem Nobody Talks About
NASDAQ TotalView-ITCH is the protocol powering real-time market data. Every trade update, every order cancellation, every execution report flows through ITCH. It’s a binary protocol, meticulously structured, generating millions of messages per second. When you’re processing market data at scale, parsing these messages fast isn’t just a feature; it’s the entire game.
If your parser is slow, your trading logic is slow. If your parser blocks, your entire system blocks. If your parser allocates memory carelessly, you’re burning latency that your competitors already optimized away.
The commercial parsers hit 100M+ messages per second. They use SIMD, custom memory allocators, lock-free queues, careful CPU pinning, and a thousand other micro-optimizations. They cost six figures. They’re closed source. And they’re overkill for most people learning.
But there’s a gap in the market. There’s no good reference implementation. No open-source parser that’s both correct and fast, that you can read, understand, and build upon.
So I built one.
Starting Simple: The Baseline
My first version didn’t use SIMD. No lock-free data structures. No fancy tricks. Just clean, readable Rust code that did one thing well: parse ITCH messages and count them fast.
The idea was deliberate. Before you optimize, you need a baseline. Before you add complexity, you need to know what complexity buys you. So I built the boring version first.
The ITCH framing is straightforward: each message is length-prefixed with a big-endian u16, followed by a type byte and the payload. That’s the contract. Respect it, and you can parse any ITCH file. Building it drove home that every boundary check matters, every offset calculation has to be correct, and handling corruption or truncation isn’t optional.
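To make that concrete, here’s a minimal sketch of the framing logic described above. The names (`Message`, `parse_frame`) are mine for illustration, not the published API.

```rust
/// One parsed message, borrowing its payload from the input buffer
/// instead of copying it.
pub struct Message<'a> {
    pub kind: u8,
    pub payload: &'a [u8],
}

/// Parse one frame from `buf`, returning the message and the number of
/// bytes consumed. Returns None on truncation instead of panicking.
pub fn parse_frame(buf: &[u8]) -> Option<(Message<'_>, usize)> {
    // Need at least the 2-byte big-endian length prefix.
    let len = u16::from_be_bytes(buf.get(..2)?.try_into().ok()?) as usize;
    // The length covers the type byte plus payload; check the full frame fits.
    let frame = buf.get(2..2 + len)?;
    let (&kind, payload) = frame.split_first()?;
    Some((Message { kind, payload }, 2 + len))
}
```

Every access goes through `get`, so a corrupted length or a truncated file falls out as `None` rather than a panic or an out-of-bounds read.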
My initial benchmark showed roughly 10 million messages per second on commodity hardware. Not bad for clean, readable code that prioritizes correctness. The memory-mapped file approach meant I wasn’t burning cycles loading entire datasets into RAM.
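The baseline counting loop looked roughly like this. I’m assuming the memmap2 crate for the memory mapping, so treat it as a sketch of the approach rather than the repo’s exact code.

```rust
// Map the file, then walk the length-prefixed frames in place without
// copying payloads. Reuses `parse_frame` from the sketch above.
use memmap2::Mmap;
use std::fs::File;

fn count_messages(path: &str) -> std::io::Result<u64> {
    let file = File::open(path)?;
    // Safety: the file must not be truncated or modified while it is mapped.
    let mmap = unsafe { Mmap::map(&file)? };

    let mut offset = 0usize;
    let mut count = 0u64;
    // Stop cleanly at end of file or on a truncated final frame.
    while let Some((_msg, consumed)) = parse_frame(&mmap[offset..]) {
        offset += consumed;
        count += 1;
    }
    Ok(count)
}
```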
But 10 million messages per second was just the starting point.
The Version I Published
The version on GitHub is intentionally the simpler baseline. I called it parser-lite because that’s exactly what it is. It’s the thing you read to understand the concept. It’s not trying to win every benchmark; it’s trying to win understanding.
That decision was important. The people who need this parser aren’t firms with six-figure budgets for commercial solutions. They’re researchers trying to understand market microstructure. They’re grad students building backtesting engines. They’re other sophomores who got rejected and decided to build something anyway.
For them, clarity matters more than an extra 5 million messages per second. I documented the format assumptions, explained the validation logic, and built a benchmark harness that shows real throughput. I made it hackable.
The Unpublished Optimization Lesson
What I didn’t publish is the version I worked on after the baseline. That’s the one where I actually lost sleep.
I spent weeks pushing the parser to its theoretical limits. SIMD vectorization using Rust’s std::simd module to process multiple bytes in parallel. Careful data layout to maximize cache hit rates. Pre-allocated ring buffers cycling through a message pool instead of allocating fresh on each parse. Custom memory management tuned specifically for message-sized allocations.
That version eventually hit over 100 million messages per second. It was also exponentially harder to read and maintain.
The interesting part wasn’t the final performance number. It was what I learned while building it. Starting in February 2025, I spent months obsessing over every microsecond.
The biggest bottleneck wasn’t parsing logic. It was memory allocation. Every message parse allocated a new struct on the heap. Once I switched to pre-allocating a pool of message buffers and cycling through them, latency dropped by half. That forced me to think about memory ownership differently than typical Rust code encourages. Suddenly, object pooling and lifetime management became visceral rather than theoretical.
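Here’s a stripped-down sketch of that pooling idea, with invented names; the point is to pay for allocation once, up front, and never touch the allocator on the hot path.

```rust
/// A fixed ring of reusable byte buffers handed out in round-robin order.
pub struct BufferPool {
    buffers: Vec<Vec<u8>>,
    next: usize,
}

impl BufferPool {
    /// Pre-allocate `count` buffers of `capacity` bytes each, up front.
    pub fn new(count: usize, capacity: usize) -> Self {
        Self {
            buffers: (0..count).map(|_| Vec::with_capacity(capacity)).collect(),
            next: 0,
        }
    }

    /// Hand out the next buffer, cleared but with its capacity intact,
    /// so filling it never triggers a fresh heap allocation.
    pub fn acquire(&mut self) -> &mut Vec<u8> {
        let idx = self.next;
        self.next = (self.next + 1) % self.buffers.len();
        let buf = &mut self.buffers[idx];
        buf.clear();
        buf
    }
}
```

The round-robin scheme only works if the consumer is finished with a buffer before it comes around again, which holds for a single-threaded parse loop; anything concurrent needs explicit checkout and return.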
SIMD didn’t help as much as expected. For simple length checks and offset calculations, modern CPUs were already doing a better job than my vectorization attempts. SIMD helped when I was doing batch validation across multiple messages, but that’s a different use case entirely. The lesson: don’t optimize where the CPU’s already ahead of you. Sometimes the compiler is smarter than your assembly intuition.
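For what it’s worth, the batch-validation case looked roughly like this: comparing a run of message-type bytes against an expected value in one vector operation instead of a branch per byte. This is a hypothetical sketch; std::simd is still nightly-only behind the portable_simd feature, and its API has moved around between toolchain versions.

```rust
#![feature(portable_simd)] // nightly-only as of this writing
use std::simd::prelude::*;

/// Compare 16 message-type bytes against an expected type in a single
/// vector comparison instead of 16 scalar branches.
fn batch_is_type(types: &[u8; 16], wanted: u8) -> bool {
    let lanes = Simd::<u8, 16>::from_array(*types);
    let target = Simd::<u8, 16>::splat(wanted);
    lanes.simd_eq(target).all()
}
```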
Memory layout dwarfed everything else. When I restructured the parser to keep hot data in tight cache lines, performance roughly tripled. Not because I was doing anything algorithmically clever, but because the CPU wasn’t starving for data. Every cache miss was expensive, so I organized the code to make cache hits inevitable. That taught me more about CPU architecture than reading papers ever could. Performance engineering isn’t about being clever; it’s about respecting the machine.
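The gist, with made-up field names: group the state that’s touched on every single message so it fits in one cache line, and keep rarely-touched bookkeeping out of the way.

```rust
/// Hot per-message state, packed into a single 64-byte cache line.
#[repr(C, align(64))]
struct HotState {
    cursor: usize,    // current offset into the mapped file
    remaining: usize, // bytes left to parse
    msg_count: u64,   // running message counter
    last_type: u8,    // type byte of the most recent message
}

/// Cold bookkeeping, only touched on errors or at shutdown, so it never
/// evicts the hot state from cache during the parse loop.
struct ColdState {
    source_path: String,
    error_count: u64,
    started_at: std::time::Instant,
}
```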
These lessons mattered more than hitting 107 million operations per second.
Why This Actually Matters
Here’s the thing about systematic rejection.
When you get rejected consistently, there’s this narrative you tell yourself:
“I’m not good enough. I don’t have the pedigree. I’m at a state school, not an Ivy. I’m in the wrong place geographically.”
Some of that might be true. The location bias in finance is documented and real. Non-target schools do have to work harder. Applications from many locations do get filtered out automatically. Recruiters make mistakes. The system has structural problems.
But what I discovered while building this parser is that I didn’t need their acceptance to prove I was capable.
I built something real. Not a side project. Not portfolio filler. A legitimate tool that researchers, engineers, and traders can actually use. Code that’s correct, reasonably fast, and genuinely readable. Code that solves a real problem in the quant finance world.
The firms that rejected me were making an optimization error. They filtered based on geography, missing someone building exactly what their engineers would need. That’s their loss, not my failure.
The Actual Results
The parser is open source now. MIT licensed. Completely free. It’s on GitHub under Lunyn-HFT because that’s the fictional trading firm I imagined while getting rejected. The firm that doesn’t care where you’re from, just whether you can solve hard problems.
I’m not done optimizing. There are whole classes of improvements not yet touched: FPGA integration for truly low-latency scenarios, GPU batch processing for massive datasets, protocol extensions for OUCH and FIX variants. The optimization journey never really ends in this space.
But this time I’m doing something different. I’m documenting the journey. Every optimization I add, I’m writing about why it matters and what it costs in code complexity. I’m building a resource that other engineers can learn from. Because the next person who gets rejected from their dream quant firm shouldn’t have to figure this out alone.
The Brutal Truth
Location bias is real. The quant industry has concentration problems. If you’re not near a major financial hub, you’re at a systematic disadvantage. That sucks. It’s unfair. It’s also documented fact.
But here’s what’s also true: you can build something that changes the entire conversation. You can create value so obvious that location becomes irrelevant. You don’t win by having the perfect resume. You win by being too useful to filter.
I didn’t get accepted by the firms I applied to. But I got messages from engineers at proprietary trading firms asking to use this parser. Grad students building research projects reached out. People are actually using this code professionally. Some of them are people I’d actually want to work with.
That’s not because I’m special or because I got lucky. It’s because I did the work anyway, even after getting rejected.
For Other Rejected Sophomores
Getting rejected sucks. Especially when the feedback is basically “we liked your profile but geography.” That’s not about your ability.
Here’s the actual move: stop trying to get past filters that don’t want you. Build something that makes you impossible to ignore. Build something so obviously useful that geographic filters become pointless.
I got rejected by every quant firm I applied to as a sophomore. Today I get inbound interest from people I’d actually want to work with. Not because my resume improved, but because I did the work anyway.
Your resume won’t get past automated filters. That’s fine. Build something that speaks louder than your resume ever could. Build something real. Build something useful. Build something that solves a problem that actually matters in the industry you want to join. Make it open source. Make it correct. Make it fast.
Then when someone from a firm worth working for comes knocking, the conversation isn’t “We’re sorry, location issues.” The conversation is “We want you working on this problem full time.”
The path to breaking through location bias isn’t fixing your resume. It’s being too valuable to filter.
The Journey Continues
The Lunyn ITCH parser started as a middle finger to rejection. It became proof that I could do the work. Now it’s becoming something bigger — a resource that other engineers can learn from and build upon.
That’s not how I expected my sophomore year to go. I thought I’d land an internship, learn some trading logic, and feel validated by some firm’s name on my resume. Instead, I’m building something that validates itself through impact.
If you’re reading this and you’re getting rejected, just know this: the firms filtering you out are making mistakes. The system is broken in predictable ways. You don’t need to fix the system. You just need to build something the system can’t ignore.
Now it’s time to build.
Lunyn ITCH Lite is available on GitHub at github.com/Lunyn-HFT/parser-lite.
Check lunyn.com for more on the journey from rejection to implementation.