This content originally appeared on DEV Community and was authored by Oni Toluwalope
As developers and engineers, we all have those “quick tasks” we expect to knock out in half an hour, tops. A few weeks back, my goal was simple: set up SonarQube locally to get a better understanding of its configuration. While I’d used it before on some IBM cloud machines, I wanted hands-on experience with a local setup.
I’d made a few minor code changes and wanted to see how SonarQube would flag them—a process I anticipated would take around 30 minutes from start to finish. Little did I know, this seemingly straightforward task would plunge me into a rabbit hole of configuration errors and resource limitations, stretching my troubleshooting skills to their limit over a grueling nine hours.
The first sign of trouble appeared quickly. SonarQube simply refused to start. When I tried to access it through my browser, all I got was a `connection refused` error. My initial reaction was slight annoyance, but nothing alarming. “Probably just a small hiccup,” I thought, diving into what I assumed would be a quick configuration tweak.
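When a fresh SonarQube install refuses connections, a quick sanity check is whether anything is listening at all. A minimal sketch, assuming the default port (9000) and the standard status endpoint; adjust if you changed either:

```shell
# Is anything listening on SonarQube's default port?
ss -ltn 'sport = :9000'

# Probe the status endpoint; prints a fallback message instead of
# failing when the server is not up yet.
curl -sS --max-time 5 http://localhost:9000/api/system/status \
  || echo "connection refused -- server not up yet"
```

If the port shows no listener, the problem is on the server side, not in your browser or firewall.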
My First Mistake: I Didn’t Check the Logs Immediately
Instead of checking the logs, I started fiddling with network settings, assuming the issue was a port conflict or a firewall rule. I spent a good hour going down this path, tweaking configurations that ultimately had nothing to do with the actual problem. Had I taken a moment to peek at the SonarQube logs right away, I would have seen a clear error message screaming about insufficient memory map areas.
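For reference, SonarQube splits its logs by component under the install directory’s `logs/` folder. A defensive sketch for skimming them; the install path here is an assumption, so point `SONARQUBE_HOME` at yours:

```shell
# Default unzip-install location is an assumption; override via env var.
SQ_HOME=${SONARQUBE_HOME:-/opt/sonarqube}

# sonar.log: the launcher; es.log: Elasticsearch (where the
# max_map_count complaint shows up); web.log / ce.log: web server
# and compute engine.
for f in sonar.log es.log web.log ce.log; do
  echo "== $f =="
  tail -n 50 "$SQ_HOME/logs/$f" 2>/dev/null || echo "(not found)"
done
```

Thirty seconds here would have saved me an hour of network tweaking.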
Eventually, after my fruitless network explorations, I investigated the console output more closely. That’s where I was greeted by a cryptic error message mentioning `vm.max_map_count`. This was a parameter I’d vaguely encountered before, but its exact purpose eluded me.
My Second Mistake: I Misinterpreted What `vm.max_map_count` Meant
This leads to my second blunder: I initially thought `vm.max_map_count` was directly related to the total memory available to my Kali Linux virtual machine. I had misread the “vm” as “virtual machine” rather than “virtual memory.” Based on this flawed understanding, I started adjusting the RAM allocated to my virtual machine in Oracle VirtualBox, thinking I needed to give it more memory. This, of course, didn’t solve the issue, and I spent another couple of hours chasing this red herring.
It wasn’t until I decided to research the error message more thoroughly that the moment of discovery came. I learned that this kernel parameter defines the maximum number of memory map areas a process can have. SonarQube, specifically its Elasticsearch component, requires a large number of these maps, and the default setting on my system was simply too low.
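Before changing anything, you can inspect the current limit; both commands below read the same kernel parameter (the `2>/dev/null || true` guard is just there in case the `sysctl` tool isn’t installed):

```shell
# Two views of the same kernel parameter:
sysctl vm.max_map_count 2>/dev/null || true   # via the sysctl tool
cat /proc/sys/vm/max_map_count                # raw procfs read
```

On most distributions the default is 65530, well below what Elasticsearch wants.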
Even with this newfound understanding, I still hesitated. Modifying kernel parameters felt like venturing into the operating system’s core, and I wanted to be sure I was making the right change. So, I spent some more time double-checking the documentation and searching for similar issues online.
My Third Mistake: I Didn’t Increase `vm.max_map_count` Sooner
This was my third mistake: not increasing `vm.max_map_count` in a timely manner. While due diligence is important, my hesitation prolonged the outage unnecessarily.
Finally, I bit the bullet and increased the `vm.max_map_count` value with:

```shell
sudo sysctl -w vm.max_map_count=524288
```

(To make the change persistent, you can instead edit `/etc/sysctl.conf`, add the line `vm.max_map_count=524288`, and apply it with `sudo sysctl -p`.)
I felt a surge of hope as I restarted SonarQube.
My Fourth Mistake: I Overlooked Storage and Under-Allocated Resources
My elation was short-lived. SonarQube started, but it was incredibly slow and unstable, frequently crashing with out-of-memory errors. I wondered what was wrong, since 512 MB is listed as the minimum memory SonarQube needs to run, and I had set exactly that. Out of frustration, I ended up increasing `vm.max_map_count` to 2,097,152, even though the parameter is a count of memory map areas rather than an amount of memory, and the initial 524,288 was already sufficient. I was still faced with the same problems and, once again, did not read the logs in time.
I tried different things I saw online regarding the out-of-memory error, and nothing worked. Eventually, checking the logs led me to another realization: I didn’t have enough storage on my Kali Linux virtual machine. The virtual disk was almost full, leaving insufficient space for SonarQube to operate efficiently, especially with Elasticsearch indexing and storing data.
This was the final piece of the puzzle. I had to shut down the virtual machine, resize the virtual disk in my virtualization software, delete some unnecessary files, and then expand the filesystem within Kali Linux.
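The disk-pressure part of that diagnosis can be sketched as below. The device and partition names are assumptions for a typical single-disk VM, so confirm yours with `lsblk` first; the destructive steps are left as comments:

```shell
# How full is the root filesystem?
df -h /

# After enlarging the virtual disk in VirtualBox (VM powered off),
# grow the partition and filesystem from inside the guest:
#   lsblk                      # confirm the device layout first
#   sudo growpart /dev/sda 1   # extend partition 1 (cloud-guest-utils)
#   sudo resize2fs /dev/sda1   # grow an ext4 filesystem to fill it
```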
After this final step, I restarted the entire system and, with bated breath, started SonarQube again. This time, it worked! The web interface loaded quickly, the code analysis ran smoothly, and my initial 30-minute task was finally completed—nine hours later, a significant portion of which was spent waiting for my slow but patient testing computer to respond.
Key Takeaways from the Marathon Troubleshooting Session
This experience, though frustrating, was a valuable reminder of several key lessons:
- Check the Logs First: They are your best friends. Don’t overlook them in your initial attempts to diagnose a problem.
- Understand Error Messages: Don’t make assumptions. Research cryptic error messages to grasp the underlying issue accurately.
- Address System Requirements: Ensure your system meets the minimum and recommended requirements for the software you are running, including memory, storage, and kernel parameters. Don’t assume default settings are sufficient.
- Be Decisive (But Informed): While research is crucial, don’t hesitate to implement a likely solution once you have a good understanding of the problem.
- Consider the Environment: If you’re working in a virtualized environment, remember to check the resources allocated to the virtual machine itself.
What was supposed to be a quick 30-minute code quality check turned into a nine-hour deep dive into system configuration and resource management. While I wouldn’t wish this marathon troubleshooting session on anyone, the satisfaction of finally resolving the issue, coupled with the valuable lessons learned—like the pain of working under tight compute resources—made it (almost) worth the time.
Now, if you’ll excuse me, I need a hot bowl of chicken pepper soup!