Linux Structure
Linux, as you might already know, is far more than just another operating system. It is a cornerstone in the world of cybersecurity—valued for its robustness, flexibility, and open-source nature. From powering personal computers and servers to being the backbone of mobile operating systems like Android, Linux is everywhere.
For anyone pursuing a career in cybersecurity, understanding Linux is as essential as learning how to drive before hitting the road. Think of this as your first driving lesson: getting familiar with the structure of the car, the rules of the road, and the philosophy that makes the vehicle unique.
In this article, we’ll explore:
- The definition of Linux and its role as an operating system
- Its history and evolution
- The philosophy and culture behind it
- The architecture that keeps everything working together
- The file system hierarchy that organizes data
- Popular distributions and their uses in cybersecurity
What Is Linux?
At its core, Linux is an operating system (OS)—software that manages a computer’s hardware and acts as a bridge between applications and the underlying physical components. Just like Windows, macOS, or iOS, Linux ensures that software can run smoothly on hardware, but it does so with a unique twist: it is open source.
Unlike other operating systems, Linux comes in a wide variety of distributions (commonly called distros). Each distro is like a customized version of Linux tailored for specific needs. Some focus on user-friendliness (e.g., Ubuntu, Linux Mint), while others are designed for advanced users and professionals (e.g., Arch Linux, Gentoo). Still others—like Kali Linux or Parrot OS—are built with cybersecurity and penetration testing in mind.
A Brief History of Linux
Linux’s roots go deep into the history of computing. Here’s how it all came together:
- 1970 – Unix
Ken Thompson and Dennis Ritchie at AT&T released the Unix operating system. It became the foundation for modern OS design.
- 1977 – BSD (Berkeley Software Distribution)
BSD introduced new features but ran into legal issues because it contained AT&T-owned Unix code.
- 1983 – The GNU Project
Richard Stallman launched the GNU Project to create a free Unix-like OS, leading to the GNU General Public License (GPL). This was a game-changer, laying the groundwork for open-source development.
- 1991 – The Birth of Linux
Linus Torvalds, a Finnish student, began developing a free kernel as a personal project. This kernel became the missing piece needed to complete the GNU system.
Today, Linux has grown from a small student project into one of the largest collaborative software projects in the world, with the Linux kernel now containing over 23 million lines of code.
Why Linux Matters in Cybersecurity
Linux has earned a reputation for being:
- Secure – While no system is invulnerable, Linux’s design makes it less susceptible to malware than Windows. Its security model emphasizes user permissions and process isolation.
- Stable and reliable – Servers and critical infrastructure often run on Linux because it can operate for years without crashing.
- Flexible – The open-source nature allows anyone to customize Linux, whether for personal use or large-scale enterprise deployments.
- Widely used – From cloud servers (AWS, Google Cloud, Azure) to embedded devices (routers, smart TVs, IoT), Linux is everywhere. Even Android smartphones rely on the Linux kernel.
Distributions (Distros): The Many Flavors of Linux
There are more than 600 Linux distributions, each designed with specific goals in mind. Some of the most notable include:
- Ubuntu – Beginner-friendly, widely used on desktops and servers.
- Debian – A stable, community-driven base for many other distros.
- Fedora – Cutting-edge features, often used by developers.
- Red Hat Enterprise Linux (RHEL) – A commercial distro focused on enterprise environments.
- Kali Linux / Parrot OS – Security-focused distributions packed with penetration testing tools.
- Gentoo / Arch Linux – Advanced, customizable distros for power users.
For cybersecurity professionals, Parrot OS and Kali Linux are especially important. In fact, many training platforms provide access to Pwnbox, a customized version of Parrot OS optimized for hacking labs and learning environments.
The Philosophy of Linux
Linux is more than just software—it’s a philosophy. At its heart, Linux embodies the principles of:
- Freedom – The GPL license allows anyone to use, modify, and distribute the code.
- Transparency – The source code is available for anyone to inspect.
- Collaboration – Thousands of developers worldwide contribute to Linux’s continuous growth.
- Simplicity and modularity – Tools are designed to do one thing well and can be combined like Lego bricks to solve complex problems.
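The "do one thing well" principle is easiest to see in a pipeline, where small tools are chained with `|`. A minimal sketch (the log file and its contents are invented for illustration):

```shell
# Create a small sample log to work on (hypothetical data):
printf 'alice login ok\nbob login fail\nalice login fail\n' > /tmp/demo.log

# Each tool does one small job; the pipe chains them together:
# grep filters lines, awk extracts a field, sort orders, uniq -c counts.
grep 'fail' /tmp/demo.log | awk '{print $1}' | sort | uniq -c
```

Each stage neither knows nor cares what produced its input, which is exactly what makes the pieces combinable like Lego bricks.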
Think of Linux like a company with a unique culture:
- The employees are the components (kernel, shell, file system, etc.).
- The organizational structure is the architecture, defining how everything works together.
- The company values are the Linux philosophy: openness, cooperation, and efficiency.
System Information
Nice — time to get your hands dirty. Below is a focused, practical guide to using the terminal and shell, aimed at security work: gathering system information, inspecting processes, checking network configuration, and exploring devices. I’ll give command examples, what they do, common flags, security tips, and short practice exercises you can run in your lab (Pwnbox / Parrot / any Linux VM).
Pro tip: when in doubt, use -h, --help, or man <command> to read the built-in documentation.
Quick reference: essential commands (what they do)
A compact cheat-sheet you can copy into your notes.
whoami # show current username
id # show user identity (uid/gid/groups)
hostname # print/set hostname
uname # kernel & system info (use -a for all)
pwd # print working directory
ifconfig # deprecated on some systems; view/assign IPs (net-tools)
ip # modern tool for interfaces/routes: ip addr, ip route, ip link
netstat # network connections/statistics (deprecated on some systems)
ss # socket statistics (replacement for netstat)
ps # show processes (ps aux or ps -ef)
who # who is logged in
env # print environment variables
lsblk # list block devices (disks, partitions)
lsusb # list USB devices
lspci # list PCI devices (network cards, GPUs)
lsof # list open files / sockets
Useful command examples and short explanations
User & identity
whoami
id
whoami prints the current username. id prints UID, GID, and supplementary groups (useful to check sudo rights).
Host & kernel
hostname
uname -a
hostname shows the system name. uname -a shows kernel version, architecture, hostname, and build info.
Working directory & filesystems
pwd
lsblk
df -h
mount | column -t
lsblk lists disks and partitions (good for discovering attached storage). df -h shows mounted filesystems and disk usage in human-readable format.
Network interfaces & routing
ip addr show # list interfaces + IPs
ip link show # link status
ip route show # routing table
ifconfig -a # older systems
ip is preferred over ifconfig. Use sudo ip addr add 192.168.56.10/24 dev eth0 to assign an address (lab only).
Network sockets & connections
ss -tulpn # show listening sockets with processes
netstat -tunap # older equivalent (if installed)
lsof -i -n -P # open network files (resolve suppressed)
ss -tulpn quickly reveals listening ports and the process IDs bound to them — crucial during reconnaissance.
Processes & users
ps aux | less
ps -ef | grep sshd
top # interactive process monitor
htop # nicer top (if installed)
- Use ps aux to get a snapshot; top/htop to monitor live. Check for unexpected daemons.
Logged in users
who
w
last
who lists active sessions. last shows historical logins.
Environment variables
env | sort
printenv HOME PATH
- Inspect environment variables for secrets left in env (in a real assessment, check for credentials in scripts).
Hardware / buses
lspci -v
lsusb
dmidecode | less # requires sudo
lspci finds NIC models, GPUs, etc. dmidecode prints BIOS and hardware vendor info.
Files open by processes
sudo lsof -p <pid>
sudo lsof /var/log/syslog
lsof is excellent for seeing which files or sockets a process has open (useful when investigating file handles or suspicious connections).
Short security notes while you explore
- Never run unknown scripts as root. Inspect them first.
- Use snapshot/restore (VM snapshot) before risky actions in a lab.
- On shared lab environments, avoid revealing real credentials — use dedicated lab accounts.
- sudo usage: check /etc/sudoers or sudo -l for allowed commands for an account.
- Watch /var/log/ (e.g., sudo tail -f /var/log/auth.log or journalctl -f) for authentication events.
Step-by-step practice workflow (follow in your lab)
- Spawn the target VM (Pwnbox / Parrot / Kali) in your lab platform.
- SSH into the target:
ssh user@target-ip
- If SSH uses a non-standard port:
ssh -p 2222 user@target-ip
- Confirm your identity & environment:
whoami
id
uname -a
hostname
pwd
env | grep -i PATH
- Enumerate disks & mounts:
lsblk
df -h
mount | column -t
- Inspect network config & routes:
ip addr show
ip route show
ss -tulpn
- List processes & suspicious services:
ps aux --sort=-%mem | head
ss -tulpn | head
sudo lsof -i -n -P | head
- Check users & logins:
who
last -n 10
sudo cat /etc/passwd | tail -n +1
sudo grep -i bash /etc/shells
- Look for sensitive files:
sudo find / -type f \( -name "*pass*" -o -name "*.pem" \) 2>/dev/null | head
sudo grep -R --line-number "password" /home 2>/dev/null | head
In a real engagement, be careful with data privacy rules. Only search where authorized.
- Check logs:
sudo tail -n 100 /var/log/auth.log
sudo journalctl -u ssh -n 200
Example commands explained (short scenarios)
- Find which process is listening on port 22:
sudo ss -tulpn | grep :22
Output shows PID/program name — tells you which SSH server binary is running.
- See open files of a suspicious PID:
sudo lsof -p 4242
Reveals which files and sockets that process has open — helps discover config files or open network endpoints.
- Dump environment variables of a running service (if you have permission):
sudo cat /proc/<pid>/environ | tr '\0' '\n'
Often reveals PATH, and sometimes secrets (be careful & authorized).
Short practice exercises (10–30 minutes each)
- Basic system snapshot (10 min)
Run a sequence of commands and save outputs to a report file:
(uname -a; id; whoami; ip addr; ss -tulpn; lsblk) > /tmp/sys_snapshot.txt
- Find listening services (15 min)
Use ss -tulpn, then map PID → binary → package:
ss -tulpn
ps -p <pid> -o pid,cmd
dpkg -S $(readlink -f /proc/<pid>/exe) # Debian-based systems
- Disk & USB discovery (10–20 min)
Use lsblk, lsusb, and dmesg | tail after plugging in a USB device (lab only).
- Log analysis (20–30 min)
Tail auth logs while generating a login attempt to observe the entries:
sudo tail -f /var/log/auth.log
# from another session, attempt an ssh login (wrong password)
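The snapshot exercise can also be bundled into a single reusable sequence. A minimal sketch, assuming a POSIX shell; the report path and the exact command set are illustrative, and tools that are missing on a given distro are simply skipped:

```shell
# Collect a timestamped system snapshot into one report file:
report="/tmp/sys_snapshot_$(date +%Y%m%d_%H%M%S).txt"
{
  echo "== identity =="; whoami; id
  echo "== kernel ==";   uname -a
  echo "== network ==";  ip addr 2>/dev/null || ifconfig -a 2>/dev/null || true
  echo "== sockets ==";  ss -tulpn 2>/dev/null || netstat -tunap 2>/dev/null || true
  echo "== disks ==";    lsblk 2>/dev/null; df -h
} > "$report" 2>/dev/null
echo "snapshot written to $report"
```

Saving output this way gives you a baseline you can diff against later runs of the same machine.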
Common pitfalls & tips
- ifconfig and netstat may not be installed on modern minimal systems — prefer ip and ss.
- sudo vs su: prefer sudo in assessments because it leaves an audit trail.
- Commands return different output across distros — test on the specific distro you’ll use (Parrot vs Ubuntu vs CentOS).
- Use -n options (ss -n, lsof -n) to avoid DNS delays in listings.
Key Takeaways
- The terminal is your primary reconnaissance tool in Linux: commands like ip, ss, ps, lsof, and lsblk give a fast system snapshot.
- Prefer modern tools: ip (network) and ss (sockets) over ifconfig/netstat on modern distros.
- Always check identity (whoami, id) and sudo privileges (sudo -l) before performing privileged operations.
- Log files (/var/log/*, journalctl) are gold for detecting authentication and system events.
- Practice in a controlled lab: snapshot/restore your VMs, avoid running unknown code as root, and document every step.
Find out the machine hardware name and submit it as the answer.
Use the command below:
$ uname -m
[REDACTED]
What is the path to htd-student’s home directory?
$ pwd
[REDACTED]
What is the path to the htd-student’s mail?
$ echo $MAIL
[REDACTED]
Which shell is specified for the htd-student user?
$ echo $SHELL
Which kernel release is installed on the system? (Format: 1.22.3)
$ uname -r
[REDACTED]
What is the name of the network interface whose MTU is set to 1500?
$ ifconfig
[REDACTED]: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 127.0.0.1 netmask 255.255.0.0 broadcast 127.0.0.1
inet6 fe80::250:56ff:fe80:ba16 prefixlen 64 scopeid 0x20<link>
inet6 dead:beef:a250:56ff:fe80:ba16 prefixlen 64 scopeid 0x0<global>
ether 00:50:56:80:ba:16 txqueuelen 1000 (Ethernet)
RX packets 18869 bytes 1311176 (1.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2052 bytes 164660 (164.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 4785 bytes 376606 (376.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4785 bytes 376606 (376.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Navigation
Navigating a system is fundamental—similar to how a typical Windows user relies on a mouse to move around. In Linux, navigation allows us to explore directories, manage files, and access the resources we need. To achieve this, we rely on specific commands and tools that display directory and file information, often with options to customize the output to suit our tasks.
The most effective way to truly understand new concepts is by practicing them. In this section, we’ll focus on navigation in Linux: how to create, move, edit, and remove files and folders; how to locate them across the system; how to use redirections; and what file descriptors are. We’ll also look at shortcuts that make working in the shell smoother and more efficient. For hands-on practice, it’s best to experiment in a locally hosted virtual machine (VM). Make sure to take a snapshot of the VM beforehand, so you can easily restore it if something breaks unexpectedly.
We’ll begin with navigation. Before moving around the system, we need to know our current location. The command pwd (print working directory) tells us exactly where we are within the directory structure.
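A short session illustrating the basic movement commands (the paths are examples):

```shell
pwd          # where am I? e.g. /home/htd-student
cd /var/log  # move to an absolute path
pwd          # now prints /var/log
cd ..        # go up one level, to /var
cd -         # jump back to the previous directory
cd           # with no argument, return to your home directory
```

Absolute paths start with /; relative paths are resolved from the current directory, and .. always means "one level up".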
What is the name of the hidden “history” file in the htd-student’s home directory?
$ ls -la
total 32
drwxr-xr-x 4 htd-student htd-student 4096 Aug 3 2021 .
drwxr-xr-x 5 root root 4096 Aug 3 2021 ..
-rw------- 1 htd-student htd-student 5 Sep 23 2020 [REDACTED]
-rw-r--r-- 1 htd-student htd-student 220 Apr 4 2018 .bash_logout
-rw-r--r-- 1 htd-student htd-student 3771 Apr 4 2018 .bashrc
drwx------ 3 htd-student htd-student 4096 Aug 3 2021 .cache
drwx------ 3 htd-student htd-student 4096 Aug 3 2021 .gnupg
-rw-r--r-- 1 htd-student htd-student 807 Apr 4 2018 .profile
What is the index number of the “sudoers” file in the “/etc” directory?
Go to the /etc directory and list the index (inode) number of sudoers:
$ cd /etc
$ ls -i | grep sudoers
[REDACTED] sudoers
146948 sudoers.d
Working with Files and Directories
The key distinction between handling files in Linux and Windows lies in the approach to accessing and managing them. On Windows, most users rely on graphical tools such as File Explorer to browse, open, and edit files. In contrast, Linux provides the terminal as a powerful alternative, enabling direct file access and manipulation through commands. This method is not only faster but also more versatile, allowing files to be viewed and modified without the need for traditional text editors like vim or nano.
What makes the terminal so efficient is its flexibility. With only a handful of commands, you can locate files, adjust their contents, and even apply selective edits using regular expressions (regex). Beyond that, Linux allows chaining multiple commands together, redirecting outputs, and automating repetitive edits across many files at once. This capability greatly reduces the time required for tasks that would otherwise be cumbersome in a graphical environment.
In the following section, we’ll dive deeper into practical commands for working with files and directories, equipping you with the skills to better organize and manage your system’s content.
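The everyday file and directory operations can be sketched like this (everything under /tmp/demo is a throwaway example):

```shell
mkdir -p /tmp/demo/reports                 # create nested directories
touch /tmp/demo/notes.txt                  # create an empty file
echo "first line"  >  /tmp/demo/notes.txt  # write (overwrites) content
echo "second line" >> /tmp/demo/notes.txt  # append content
cp /tmp/demo/notes.txt /tmp/demo/reports/  # copy the file
mv /tmp/demo/notes.txt /tmp/demo/notes.bak # rename (or move) it
cat /tmp/demo/notes.bak                    # print the file's contents
rm -r /tmp/demo                            # remove recursively (careful!)
```

Note that > replaces a file's contents while >> appends; mixing the two up is a classic way to lose data.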
What is the name of the last modified file in the “/var/backups” directory?
$ ls -la -t
[REDACTED]
apt.extended_states.1.gz
dpkg.status.0
dpkg.status.1.gz
dpkg.status.2.gz
dpkg.status.3.gz
dpkg.status.4.gz
dpkg.status.5.gz
dpkg.status.6.gz
alternatives.tar.0
apt.extended_states.2.gz
apt.extended_states.3.gz
alternatives.tar.1.gz
alternatives.tar.2.gz
dpkg.statoverride.0
dpkg.statoverride.1.gz
dpkg.statoverride.2.gz
dpkg.statoverride.3.gz
dpkg.statoverride.4.gz
dpkg.statoverride.5.gz
dpkg.statoverride.6.gz
apt.extended_states.4.gz
passwd.bak
shadow.bak
gshadow.bak
group.bak
dpkg.diversions.0
dpkg.diversions.1.gz
dpkg.diversions.2.gz
dpkg.diversions.3.gz
dpkg.diversions.4.gz
dpkg.diversions.5.gz
dpkg.diversions.6.gz
dpkg.diversions.7.gz
What is the inode number of the “shadow.bak” file in the “/var/backups” directory?
$ ls -i | grep shadow.bak
265817 gshadow.bak
[REDACTED] shadow.bak
Find Files and Directories
Being able to quickly locate files and directories is a vital skill when working in Linux. After gaining access to a Linux-based system, one of the first tasks often involves identifying important files such as configuration files, administrator or user-created scripts, and other critical resources. Manually browsing through each folder and checking modification dates would be both inefficient and time-consuming.
Fortunately, Linux provides several built-in tools that simplify this process. These utilities allow us to search efficiently, filter results, and pinpoint exactly the files we need without exhaustive manual effort.
What is the name of the config file that has been created after 2020-03-03 and is smaller than 28k but larger than 25k?
find / -iname "*.conf" -size +25k -size -28k -newermt 2020-03-03 2>/dev/null
/usr/share/drirc.d/[REDACTED]
How many files exist on the system that have the “.bak” extension?
find / -type f -name "*.bak" 2>/dev/null | wc -l
[REDACTED]
Submit the full path of the “xxd” binary.
$ which xxd
[REDACTED]
File Descriptors and Redirections
In Unix and Linux systems, a file descriptor (FD) is a reference created and managed by the kernel that helps the operating system handle Input/Output (I/O) operations. Each file descriptor serves as a unique identifier for an open file, network socket, or other I/O resource. On Windows, this same concept is referred to as a file handle. In short, file descriptors are the operating system’s method of tracking active I/O connections—whether you’re reading from a file, writing to it, or interacting with another resource.
A simple analogy helps illustrate this: imagine a coatroom where you’re given a ticket number when you check in your coat. That ticket (the file descriptor) represents your specific coat (the resource). When you want your coat back (perform I/O), you hand the ticket to the attendant (the operating system), who instantly knows where it’s stored. Without the ticket, finding your coat among many others would be inefficient—just as the OS cannot manage resources effectively without file descriptors.
Understanding file descriptors is critical because they form the foundation of how Linux manages input and output. By default, three key file descriptors are always available:
- STDIN (0): Input stream (usually keyboard input)
- STDOUT (1): Output stream (normal program output)
- STDERR (2): Error output stream (error messages)
These three descriptors provide the basis for handling input and output in Linux, and you’ll see why they matter as we move into practical examples.
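A quick sketch of how the three descriptors behave in practice (the /nonexistent path is deliberately missing so that ls produces an error):

```shell
# FD 1 (stdout) and FD 2 (stderr) can be redirected independently:
ls /etc/passwd /nonexistent > /tmp/out.txt 2> /tmp/err.txt || true
cat /tmp/out.txt   # the listing that succeeded (stdout)
cat /tmp/err.txt   # the "No such file" message landed here (stderr)

# 2>&1 merges stderr into stdout; /dev/null discards everything:
ls /nonexistent > /dev/null 2>&1 || true

# FD 0 (stdin) can be fed from a file instead of the keyboard:
wc -l < /etc/passwd
```

Being able to separate normal output from errors is what makes long tool chains debuggable: you pipe stdout onward and keep stderr visible (or logged).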
How many files exist on the system that have the “.log” file extension?
$ find / -type f -name "*.log" 2>/dev/null | wc -l
[REDACTED]
How many total packages are installed on the target system?
$ apt list --installed | grep -c installed
[REDACTED]
Filter Contents
In the last section, we looked at how redirection can be used to pass the output of one program into another for additional processing. Now, we’ll shift focus to another essential skill: reading files directly from the command line—without having to launch a text editor.
For this, Linux provides two powerful utilities: more and less. These programs, called pagers, allow you to view file contents interactively, displaying the text one screen at a time. Although they share the same basic purpose, each has its own features and advantages, which we’ll highlight later.
With more and less, you can comfortably browse through large files, search within the text, and move forward or backward—all without altering the file itself. This makes them particularly valuable when examining lengthy logs or large text files that cannot be fully displayed in a single terminal screen.
The aim of this section is to prepare for filtering content and managing redirected output effectively. But before diving into filtering, it’s important to first understand the foundational tools that make this process both efficient and powerful. These commands are indispensable when dealing with big datasets or when automating tasks that require searching, sorting, or transforming information.
Next, we’ll explore some practical examples of these tools in action to see how they can help streamline your workflow.
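As a starting point, here is a pager plus a typical filter chain (the less line is shown as a comment because pagers are interactive):

```shell
# Pagers display a file one screen at a time (q quits, / searches):
#   less /var/log/syslog

# A typical filter chain: usernames and shells from /etc/passwd,
# dropping service accounts that have no real login shell:
cut -d: -f1,7 /etc/passwd | grep -vE 'nologin|false' | sort | head
```

cut extracts fields, grep filters, sort orders, head limits; the same pattern scales from a ten-line file to gigabytes of logs.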
How many services are listening on the target system on all interfaces? (Not on localhost and IPv4 only)
$ netstat -tlpn | grep -v tcp6 | grep -v "127.0.0." | grep -c LISTEN
[REDACTED]
Determine what user the ProFTPd server is running under. Submit the username as the answer.
$ ps aux | grep "proftpd"
[REDACTED]
Use cURL from your Pwnbox (not the target machine) to obtain the source code of the “https://www.inlanefreight.com” website and filter all unique paths (e.g., “https://www.inlanefreight.com/directory” or “/another/directory”) of that domain. Submit the number of these paths as the answer.
curl https://www.inlanefreight.com/ | tr -s ' ' '\n' | grep -oE 'https://www.inlanefreight.com/([^"]+)' | sort -u | wc -l
[REDACTED]
Regular Expressions
Regular expressions (RegEx) can be thought of as detailed blueprints for identifying patterns within text. They give you the ability to search, match, replace, and manipulate data with remarkable accuracy. Imagine RegEx as a customizable filter that allows you to scan through text strings and extract exactly what you’re looking for—whether it’s analyzing logs, validating user input, or running complex search operations.
At their core, regular expressions are sequences of characters combined with special symbols that define a search pattern. These symbols, known as metacharacters, don’t represent literal text but instead provide rules for matching. For instance, with metacharacters you can specify whether you want to detect numbers, letters, whitespace, or virtually any character type that fits a defined structure.
Because of their flexibility, RegEx is supported across a wide range of programming languages and command-line tools—such as grep and sed—making it one of the most powerful and universal tools in a Linux user’s toolkit.
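A brief sketch with grep and sed (the sample log is invented for illustration):

```shell
# Sample data to match against:
printf 'error: disk full\nwarning: low memory\nerror: net down\n' > /tmp/app.log

# -E enables extended regex: ^ anchors at line start,
# a|b matches alternatives, [0-9]+ means one or more digits, etc.
grep -E '^error:' /tmp/app.log           # lines beginning with "error:"
grep -E 'disk|net' /tmp/app.log          # lines containing either word
sed -E 's/^error:/ALERT:/' /tmp/app.log  # rewrite the matched prefix
```

The same patterns work unchanged in most languages and tools, which is why regex pays off across the whole toolkit.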
User Management
User management is one of the core responsibilities in Linux system administration. System administrators often need to create new accounts, assign users to groups, and apply access controls that ensure only the right people can reach specific resources. In many cases, it’s also necessary to run commands as another user, especially when tasks require elevated or different permissions. Properly managing these aspects is crucial for maintaining both system security and operational integrity.
For instance, some groups may have exclusive permissions to read or modify certain files and directories. This not only safeguards sensitive information but also ensures that only authorized users can carry out critical actions. From an administrative perspective, this level of control also helps with troubleshooting and auditing, since it allows detailed tracking of who can do what on the system.
Consider a practical example: a new employee, Alex, joins your organization and is given a Linux workstation. As the administrator, your job is to create Alex’s user account and place it in the correct groups, giving access to project files, development tools, and other necessary resources. Later, there may be situations where Alex needs to run commands with higher privileges or even act as another user to complete certain tasks—something that user and group management makes both secure and efficient.
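A few read-only commands for inspecting existing users and groups are sketched below; the account-creation commands for the Alex example are shown as comments because they require root, and the group name "developers" is an assumption:

```shell
# Read-only inspection of accounts and groups (safe anywhere):
tail -n 3 /etc/passwd                  # account entries (name:...:shell)
getent group sudo 2>/dev/null || true  # members of the sudo group, if any
id root                                # uid/gid/groups of a given user

# Creating Alex's account would look roughly like this (root required):
#   sudo useradd -m -s /bin/bash -G developers alex
#   sudo passwd alex
```

Checking /etc/passwd and group membership first avoids creating duplicate accounts or granting a group that does not exist yet.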
Which option needs to be set to create a home directory for a new user using “useradd” command?
The -m flag tells useradd to create the user’s home directory (usually under /home/username) if it doesn’t already exist.
Which option needs to be set to lock a user account using the “usermod” command? (long version of the option)
When we want to prevent a user from logging in, we modify the account settings so that its password entry becomes inaccessible. This is done by using the --lock argument, which effectively disables authentication without deleting the user. Administrators often use this when temporarily suspending accounts during audits or investigations.
Which option needs to be set to execute a command as a different user using the “su” command? (long version of the option)
When we want to run just a single instruction as another user instead of opening their shell, we provide an argument that specifies which instruction should be executed. This is done by including --command, which lets us pass the exact command we want to run. It ensures we switch context only for that action, then return to our original session.
Service and Process Management
Services, often referred to as daemons, are essential components of any Linux system. They run quietly in the background, without direct user interaction, and handle critical operations that keep the system functioning smoothly. Services not only ensure core system stability but also provide extended features that enhance the overall user experience.
Types of Services
- System Services
These are the built-in services that launch during system startup. They are responsible for hardware initialization, preparing system components, and enabling the operating system to function properly. Think of them like a car’s engine and transmission: they are fundamental for the system to “run.” Without them, nothing else works.
- User-Installed Services
These are services added later by the user, such as server applications or custom background processes. They aren’t strictly required for the operating system to operate, but they provide extra capabilities—like a car’s air conditioning or GPS. While optional, they significantly improve functionality and adaptability to user needs.
Daemons are typically recognizable by a “d” at the end of their names, such as sshd (the SSH daemon) or systemd. Just as a car relies on both essential parts and optional add-ons, a Linux system combines system services and user-installed services to deliver a complete and efficient experience.
Common Goals When Managing Services and Processes
When working with services, administrators usually focus on a few core tasks:
- Start/Restart a service or process
- Stop a service or process
- Check status to see what is happening (or has happened)
- Enable/Disable a service to control whether it starts at boot
- Locate a specific service or process
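On systemd-based distros, the goals above map onto systemctl subcommands as sketched here; the unit name "ssh" is an example (on some distros it is "sshd"), and the state-changing commands need root, so they are shown as comments:

```shell
# State-changing operations (root required):
#   sudo systemctl start ssh     # start now
#   sudo systemctl restart ssh   # stop + start
#   sudo systemctl stop ssh      # stop now
#   sudo systemctl enable ssh    # start at every boot
#   sudo systemctl disable ssh   # do not start at boot

# Read-only queries are safe to run as a normal user:
systemctl status ssh --no-pager 2>/dev/null || true
systemctl list-units --type=service 2>/dev/null | head -n 5
```

status also prints the unit's recent journal lines, which makes it the quickest first stop when a service misbehaves.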
The Role of systemd
Most modern Linux distributions use systemd as their initialization (init) system. It is the first process that runs during boot and is assigned the Process ID (PID) 1. Every other process in the system is either directly or indirectly spawned by systemd.
Each process in Linux has:
- A PID (Process ID) – its unique identifier, viewable under the /proc/ directory.
- A PPID (Parent Process ID) – showing which process started it. Processes created by another are called child processes.
This structure makes process management both organized and traceable, ensuring administrators can monitor and control what’s running on their systems at all times.
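A quick way to see the PID/PPID relationship on a live system (assumes a Linux /proc):

```shell
# Show this shell's PID and its parent (PPID); $$ expands to the
# current shell's own process ID:
ps -o pid,ppid,comm -p $$

# The same information lives in the /proc pseudo-filesystem:
grep -E '^(Name|Pid|PPid):' /proc/$$/status
```

Following PPIDs upward always ends at PID 1 (systemd on most modern distros), which is exactly the tree structure described above.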
Use the “systemctl” command to list all units of services and submit the unit name with the description “Load AppArmor profiles managed internally by snapd” as the answer.
htd-student@nixfund:~$ systemctl list-units --type=service | grep snapd.apparmor
[REDACTED] loaded active exited Load AppArmor profiles managed internally by snapd
Task Scheduling
Task scheduling is an essential capability in Linux that lets administrators and users automate commands or scripts to run at set times or on a recurring schedule—removing the need to start them by hand. Common on distributions such as Ubuntu, Red Hat, and Solaris, scheduled tasks cover activities like automatic updates, running maintenance scripts, database upkeep, and regular backups. By automating routine jobs, scheduling guarantees they run reliably and on time. Administrators can also configure notifications to alert relevant people when particular events occur.
Think of scheduling as programming a coffee maker to brew every morning: once it’s set, the machine prepares the drink at the chosen time without any further intervention.
For security professionals and penetration testers, understanding scheduling is especially important. While schedulers are legitimate administrative tools, they can also be abused—unauthorized cron jobs or scheduled scripts may hide persistence mechanisms, execute malicious payloads, or periodically exfiltrate data. Knowing how scheduled tasks are created and where they live helps you spot suspicious entries, audit systems more effectively, and simulate realistic attack scenarios during assessments.
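For reference, here is a sketch of the cron schedule syntax and of where scheduled jobs live during an audit (the script paths are invented):

```shell
# Cron schedule fields: minute hour day-of-month month day-of-week
# Illustrative crontab entries:
#
#   0 2 * * *    /usr/local/bin/backup.sh     # daily at 02:00
#   */15 * * * * /opt/scripts/healthcheck.sh  # every 15 minutes
#
# Where to look for scheduled jobs on a system you are auditing:
crontab -l 2>/dev/null || true                      # current user's jobs
ls /etc/cron.d /etc/cron.daily 2>/dev/null || true  # system-wide drop-ins
cat /etc/crontab 2>/dev/null || true                # classic system table
```

Unexpected entries in any of these locations are a common persistence indicator, so they belong on every enumeration checklist.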
What is the Type of the service of the “dconf.service”?
The dconf.service interacts with the system through the session message bus rather than starting like a normal background process. Because of this, its service definition specifies dbus so systemd knows it should be activated through inter-process communication. This allows configuration changes to be transmitted efficiently across applications.
Working with Web Services
Another key aspect of web development is the interaction between browsers and web servers. On Linux systems, setting up a web server can be done in multiple ways, with common choices including Nginx and Apache (IIS fills the same role on Windows). Of these, Apache remains one of the most widely adopted solutions. You can think of Apache as the engine powering your website, managing communication between the site and its visitors so everything runs smoothly.
Apache can also be compared to the foundation of a house. Just as a house can be expanded with new rooms or customized features, Apache can be enhanced through modules. Each module has a unique purpose—some secure traffic, others handle redirections, while others reshape content dynamically, much like an interior designer rearranging a space to meet specific needs.
The real advantage of Apache lies in this modular design. You can tailor it with modules to serve precise functions:
- mod_ssl works like a lockbox, protecting communications between browser and server through encryption.
- mod_proxy acts like a traffic director, forwarding requests to the correct destination—especially useful for proxy setups.
- mod_headers and mod_rewrite give fine-grained control over HTTP headers and URLs, letting you rewrite or adjust them on the fly, similar to guiding the flow of a river.
Beyond delivering static files, Apache also enables the creation of dynamic web pages using server-side scripting languages. While PHP, Perl, and Ruby are common choices, it also supports Python, JavaScript, Lua, .NET, and others. These languages function as creative tools behind the scenes, generating content dynamically to make websites interactive, responsive, and engaging.
Find a way to start a simple HTTP server inside Pwnbox or your local VM using “npm”. Submit the command that starts the web server on port 8080 (use the short argument to specify the port number).
Explanation for the http-server -p 8080 answer:
To serve the current directory over HTTP using the npm http-server package, run http-server -p 8080.
The -p short option specifies the port (8080 here), so the server listens on that port for incoming requests.
This provides a fast, single-command static web server useful in labs or quick file hosting.
Find a way to start a simple HTTP server inside Pwnbox or your local VM using “php”. Submit the command that starts the web server on the localhost (127.0.0.1) on port 8080.
Run PHP’s built-in web server with the short -S flag; it binds to an address:port and serves the current directory:
php -S 127.0.0.1:8080
After running that in the folder with your files, open http://127.0.0.1:8080 in a browser to access the site.
File System Management
Managing file systems in Linux is a fundamental responsibility that involves structuring, storing, and maintaining data across disks or other storage devices. One of Linux’s strengths is its support for a wide range of file systems—such as ext2, ext3, ext4, XFS, Btrfs, and NTFS—each designed for different needs and scenarios. Choosing the right file system depends on the application requirements and user priorities, including performance, reliability, and compatibility.
- ext2: An older, lightweight file system without journaling. While not ideal for modern systems, it can still be useful in low-resource environments like USB drives.
- ext3 / ext4: Both support journaling, which improves recovery after crashes. Ext4, the default in most modern Linux distributions, offers a solid balance of speed, dependability, and support for large files.
- Btrfs: Provides advanced capabilities such as snapshots and built-in data integrity verification, making it a strong choice for complex storage infrastructures.
- XFS: Optimized for handling very large files and high-performance workloads, especially in systems with heavy I/O demands.
- NTFS: A Windows-native file system, but still valuable in dual-boot environments or when external drives must work across Linux and Windows.
When evaluating which file system to use, it’s important to weigh factors like performance needs, data integrity requirements, compatibility, and overall storage goals.
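To see which file system a mounted volume actually uses, standard tools can report the type directly (the exact output varies per system, so only the column headers are predictable):

```shell
# Print the file system type (-T adds a "Type" column) for the root mount.
df -T /

# Alternatively, list block devices along with their file systems.
lsblk -f
```

This is a quick first step when deciding whether a disk already matches the requirements discussed above or needs reformatting.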
The Linux File System Structure
Linux follows the Unix model, which organizes everything into a hierarchical directory structure. At the heart of this architecture are inodes—special data structures that contain metadata about each file or directory. Inodes track details such as permissions, ownership, size, and timestamps, along with pointers to the physical data blocks on disk. They don’t store file names or contents directly but instead act like reference points.
The inode table is essentially a large database of all inodes, enabling the Linux kernel to keep track of every file and directory. This system makes file access efficient but also introduces a unique limitation: a disk can run out of available inodes before it runs out of actual storage space. Knowing how to monitor and manage inodes is therefore a key part of Linux system administration.
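A quick way to monitor inode usage, and to spot the inode-exhaustion problem described above before it bites, is df with the -i flag:

```shell
# Show inode counts instead of block usage: total inodes, used,
# free, and the percentage consumed for the given mount.
df -i /
```

If the IUse% column approaches 100% while regular df still shows free space, the disk is running out of inodes rather than storage.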
Analogy: The Library Catalog
You can think of the Linux file system as a library. Each inode is like an index card in the library catalog. The card doesn’t contain the book itself (the file’s data) but provides essential details—title, author, and where the book can be found. The inode table is the entire card catalog that helps the library (the operating system) locate and organize every book (file) efficiently.
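The catalog-card analogy maps directly onto the stat and ls -i commands, which read a file's inode metadata (the file name below is created just for the example):

```shell
# Create a throwaway file, then inspect its "index card".
touch example.txt

# ls -i prints the inode number next to the file name.
ls -i example.txt

# stat prints the metadata stored in the inode itself:
# size, permissions, ownership, timestamps, and the inode number.
stat example.txt
```

Note that neither command needs to read the file's contents; everything shown comes from the inode, just as the catalog card describes the book without containing it.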
File Types in Linux
Files in Linux fall into three primary categories:
- Regular files – standard documents, programs, or data.
- Directories – special files that act as containers for other files.
- Symbolic links – pointers or shortcuts that reference other files.
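The first character of each ls -l entry reveals which of these categories a file belongs to: - for regular files, d for directories, and l for symbolic links (the file names below are illustrative):

```shell
# Create one example of each file type in a scratch directory.
mkdir -p filetype-demo && cd filetype-demo
touch regular.txt             # regular file
mkdir -p subdir               # directory
ln -sf regular.txt link.txt   # symbolic link to the regular file

# The type appears as the first character of each listing line.
ls -l
```

Running ls -l in the scratch directory shows lines beginning with -, d, and l respectively, making the three categories easy to tell apart at a glance.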
How many partitions exist in our Pwnbox? (Format: 0)
htd-student@suricato:/$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55.4M 1 loop /snap/core18/1932
loop1 7:1 0 59.6M 1 loop /snap/powershell/137
loop2 7:2 0 65.3M 1 loop /snap/powershell/149
loop3 7:3 0 55.3M 1 loop /snap/core18/1885
loop4 7:4 0 97.7M 1 loop /snap/core/10126
loop5 7:5 0 97.8M 1 loop /snap/core/10185
[REDACTED]
├─[REDACTED]
└─[REDACTED]
htd-student@suricato:/$
