# DAY 10: Retrospective & Tuning



This content originally appeared on DEV Community and was authored by Samuel Adeduntan

Consolidating Knowledge and Engineering for the Future

Introduction

Day 10 of the DFIR Lab Challenge concluded with an emphasis on introspection and progress. On that day, I put all of the knowledge and abilities I had acquired during the challenge into one cohesive package, went over the engineering choices made in various builds, and pinpointed areas that needed improvement. The main objectives of the session were to optimize data pipelines, fine-tune detection criteria, and ensure that the lab setup was scalable and robust. By taking a step back, assessing the progress, and planning for the future, the lab was transformed from a simple practice setting into a strong basis for continuing education in digital forensics and incident response.

Key terms:

Retrospective & Tuning

  • Retrospective: Reflecting on the progress made over the past days, identifying successes, challenges, and areas for improvement.
  • Tuning: Adjusting the configurations, workflows, or processes to optimize performance and align with objectives.

  • Consolidating knowledge and engineering for the future: Combining the expertise gained so far with deliberate engineering choices to build sustainable, flexible, and effective systems that support long-term growth.

Objective
Establishing a baseline for typical system behavior, formalizing key findings, and documenting the lab architecture to support future anomaly detection.

Documentation – The SOC Runbook
My Lab’s Instruction Manual
Why Document? So that I can rebuild the lab effortlessly, onboard a new team member, or recall my actions 6 months from now.

The “One-Page” Cheat Sheet Should Include:

  • Splunk Cloud URL: https://[your-instance].cloud.splunk.com
  • On-Prem Enterprise URL: http://localhost:8000
  • Universal Forwarder Install Command: msiexec /i splunkforwarder.msi RECEIVING_INDEXER="[IP]:9997" /quiet
  • UF CLI Configuration Command: .\splunk add forward-server [IP]:9997 -auth [user]:[password]
  • Critical Firewall Port: TCP 9997 (Inbound Rule on Enterprise Server)
  • Key Alert SPL: The brute force query from Day 7.
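A quick sanity check belongs in the runbook too: before digging into forwarder configs, confirm the critical TCP 9997 port is actually reachable from the forwarder host. A minimal Python sketch (the hostname `splunk-server` is a placeholder for your Enterprise server's IP from the cheat sheet):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace "splunk-server" with your Splunk Enterprise server's IP or hostname.
    if port_reachable("splunk-server", 9997):
        print("TCP 9997 reachable - forwarder traffic should flow")
    else:
        print("TCP 9997 blocked - check the inbound firewall rule on the Enterprise server")
```

If this reports the port as blocked, the inbound Windows Firewall rule from the cheat sheet is the first thing to re-check.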

Saving Your Work – Reports & Alerts
Packaging Intelligence for Reuse

Don’t Lose Your Effective Searches! Saving them turns one-time investigations into reusable tools.
Action: Save Effective Searches as Reports.
For each powerful search (e.g., Brute Force, Top EventCodes, FIM changes), click Save As > Report.
Give each one a clear name and description.

Result: These reports are now available in the Reports menu for any analyst to run with a single click, saving valuable time during an investigation.

Baselining “Normal”
The Foundation of Anomaly Detection

Proactive Task: Find the most common process events on your system.
The Search:
index=* sourcetype="WinEventLog:Security"
| stats count by EventCode
| sort - count

This isn’t just a search; it’s baselining. Knowing what “normal” EventCode traffic looks like (e.g., high volumes of 4624 logons and 4634 logoffs) allows you to spot “abnormal” activity later (e.g., a spike in 4625 failures or a rare event like 4672).
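The same baselining logic can be sketched outside Splunk. In this illustrative example the event lists are invented, but the two functions mirror the `stats count by EventCode | sort - count` search and the "spot the spike" comparison described above:

```python
from collections import Counter

# Hypothetical exported events; in the lab these would come from the
# WinEventLog:Security search (one EventCode string per logged event).
baseline_events = ["4624"] * 500 + ["4634"] * 480 + ["4625"] * 5
today_events    = ["4624"] * 510 + ["4634"] * 470 + ["4625"] * 200

def event_counts(events):
    """Equivalent of `stats count by EventCode | sort - count`."""
    return Counter(events).most_common()

def anomalies(baseline, today, ratio=3.0):
    """Flag EventCodes whose count grew by more than `ratio` vs. the baseline."""
    base = Counter(baseline)
    flagged = {}
    for code, count in Counter(today).items():
        if count > ratio * max(base.get(code, 0), 1):
            flagged[code] = (base.get(code, 0), count)
    return flagged

print(event_counts(today_events))                 # 4624 logons still dominate
print(anomalies(baseline_events, today_events))   # the 4625 failed-logon spike stands out
```

Against this made-up data, only EventCode 4625 is flagged: it jumped from 5 baseline events to 200, exactly the brute-force pattern the Day 7 alert targets.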

Lessons Learned

From Technical Steps to Strategic Understanding

My Top 3 Technical Lessons:

  • Infrastructure is Key: The most challenging part wasn’t the analysis; it was configuring the forwarders, firewalls, and ports correctly. The data pipeline must be rock-solid.
  • SPL is a Superpower: Mastering stats, where, and by transforms raw data into intelligence.
  • Context is Everything: Without understanding Event IDs like 4624/4625, the logs are just noise. Threat hunting requires context.

Mindset Shift

The Evolution from Student to Analyst

The Biggest Mindset Changes:

  • Proactive vs. Reactive: Shifting from “What happened?” to “What is happening?” and “What could happen?” through alerts and baselining.
  • Embrace the CLI: Real-world administration and troubleshooting often happen in a terminal, not a GUI.
  • The Lab is a Living System: It’s not a one-time setup. It requires continuous tuning, like adjusting alert thresholds and adding new data sources.
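"Adjusting alert thresholds" can be made concrete. A minimal sketch (the hourly failed-logon counts are invented for illustration) that derives the threshold from the baseline's mean and standard deviation instead of a guessed fixed number:

```python
import statistics

# Hypothetical hourly counts of EventCode 4625 (failed logons) from a quiet week.
hourly_failures = [2, 0, 1, 3, 2, 4, 1, 0, 2, 3, 1, 2]

mean = statistics.mean(hourly_failures)
stdev = statistics.stdev(hourly_failures)

# A common starting point: alert when a new hour exceeds mean + 3 standard deviations.
threshold = mean + 3 * stdev

def should_alert(count: int) -> bool:
    """True if this hour's failure count is abnormally high vs. the baseline."""
    return count > threshold

print(f"threshold = {threshold:.1f}")
print(should_alert(3))    # within normal range
print(should_alert(25))   # brute-force-sized spike
```

As new data sources are onboarded and the baseline shifts, re-running this calculation keeps the alert tuned instead of noisy.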

The 3-Point Improvement Plan

My Roadmap for Leveling Up

Based on this experience, the three things I would love to improve next are:

  • Ingest Network Data: Onboard firewall or Zeek (Bro) logs to add network context to host-based events.
  • Build a Custom Alert: Move beyond pre-built queries and create an alert for a specific threat relevant to my lab (e.g., detection of a specific Mimikatz command).
  • Practice Incident Response: Simulate a complete breach scenario and use my Splunk lab to go through the entire IR lifecycle from detection to containment.
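The custom-alert idea can be prototyped before it becomes SPL. A hedged sketch of command-line matching for Mimikatz-style activity: the signature list below is a small, hypothetical sample, and a real detection would need to account for renamed binaries and encoded arguments.

```python
import re

# Hypothetical signatures: command-line fragments commonly associated with
# Mimikatz credential dumping. Illustrates the shape of the rule, not a full ruleset.
SUSPICIOUS_PATTERNS = [
    re.compile(r"sekurlsa::logonpasswords", re.IGNORECASE),
    re.compile(r"lsadump::sam", re.IGNORECASE),
    re.compile(r"mimikatz", re.IGNORECASE),
]

def is_suspicious(command_line: str) -> bool:
    """Return True if any known-bad fragment appears in the command line."""
    return any(p.search(command_line) for p in SUSPICIOUS_PATTERNS)

print(is_suspicious('mimikatz.exe "sekurlsa::logonpasswords"'))  # True
print(is_suspicious("notepad.exe report.txt"))                   # False
```

Once the logic is validated against sample process-creation logs, the same fragments can be turned into the matching conditions of a scheduled Splunk alert.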

Day 10 & Overall Reflection

Success Goals Achieved:

  • I created documentation for the entire setup.
  • I saved key searches as reusable reports.
  • I ran a proactive search to baseline normal activity.
  • I produced a “lessons learned” document and an improvement plan.

This 10-day journey wasn’t just about learning Splunk. It was about building the methodology of a security analyst: preparation, tool mastery, investigation, documentation, and continuous improvement. I didn’t just install software; I built a professional-grade practice environment.

