Setting Up

🎯 Introduction

Before starting any penetration-testing engagement, it’s crucial to build a dependable, efficient workspace. That means organizing your tools, configuring systems, and preparing all required resources in advance. A well-planned testing environment reduces downtime, cuts mistakes, and speeds up the assessment. This module covers the core technologies and configurations you should establish up front, with an emphasis on virtualization and creating the right environment for our testing tasks.

Assume our company has been hired by a new client (Inlanefreight) to perform both external and internal penetration tests. As noted, proper operating-system preparation is necessary prior to any testing. The client will hand us internal machines that must be readied before the engagement so the testing can start without delay. We therefore need to prepare the required operating systems in a practical and efficient way.


πŸ›‘οΈ Penetration-testing stages & scenarios

Every engagement differs in scope, objectives, and infrastructure depending on the customer’s services and environment. Besides the standard phases of a pentest, our workflow can change based on the test type — which may expand or restrict what we can do and which resources we can use.

For instance, in an internal test we’re commonly given a host inside the client environment to operate from. If that host has internet access (which is often the case), we should also have a VPS with our toolset so we can quickly fetch tools and resources when needed.

Testing may occur remotely or on-site, depending on the client’s preference. For remote work we typically either ship a device with our preferred pentesting distribution installed, or provide a VM image that connects back to our infrastructure via OpenVPN. The client might host that image (which we’ll access, log into, and minimally customize on day one) and allow SSH access via IP whitelisting, or they may give us VPN credentials into their network. Some clients won’t host an image and will only provide VPN access — in those cases we can test from our own Linux and Windows VMs.

When working on-site, bring both a tailored, up-to-date Linux VM and a Windows VM. Many tools run best (or only) on Linux, while certain tasks — especially Active Directory enumeration — are much faster and easier from a Windows environment. Whatever option the client chooses, we must explain the trade-offs and recommend the best approach for their network and requirements.

This is one area where versatility and adaptability are essential. We must be ready on day one with the right toolset to deliver deep, high-value testing. Each environment is unique and unpredictable; during enumeration we often discover needs for additional tools, scripts, or packages. If our attack VMs are well preconfigured, we avoid wasting the early days of an assessment installing and configuring basic tooling. Ideally, only minor, scenario-specific changes should be necessary as the engagement progresses.


βš™οΈ Setup & efficiency

Over time each tester accumulates preferred tools and workflows. Being organized is critical because it dramatically improves efficiency. Rather than hunting for utilities and their dependencies at the start of an engagement, you should rely on a prebuilt, well-organized environment. Achieving this takes planning and familiarity with multiple operating systems, which grows with experience.

Everyone wants to be efficient, but it’s easy to overburden a system with too many tools — which can slow things down or cause instability. This abundance of opinions and options is especially overwhelming for newcomers: every source seems to recommend a different toolkit and approach, and while they may all be valid, that variety creates friction.

When the scope of your role or your tooling needs change, migrating from an old setup to a new one can be time-consuming and costly — and it may not always pay off. That’s why this module focuses on building an essential, well-understood environment: a workspace you know thoroughly, can customize yourself, and can adapt quickly to new situations.


πŸ—‚οΈ Organization

As discussed in the Learning Process module, being organized is crucial for successful penetration tests — no matter the engagement type. When your working environment is structured and familiar, you save a huge amount of time that would otherwise be spent hunting for resources. Locating required tools or reference material should take minutes, not hours; without a consistent structure, preparing for each assessment can easily eat up several hours.

Corporate networks are usually heterogeneous — hosts and servers run different operating systems — so it’s sensible to sort your files and tools by OS. If you also align your folder organization to the common pentest phases, a practical directory layout might look like this:

Code snippet

Cry0l1t3@mysite[/mysite]$ tree ..

└── Penetration-Testing
    │
    ├── Pre-Engagement
    │   └── ...
    ├── Linux
    │   ├── Information-Gathering
    │   │   └── ...
    │   ├── Vulnerability-Assessment
    │   │   └── ...
    │   ├── Exploitation
    │   │   └── ...
    │   ├── Post-Exploitation
    │   │   └── ...
    │   └── Lateral-Movement
    │       └── ...
    ├── Windows
    │   ├── Information-Gathering
    │   │   └── ...
    │   ├── Vulnerability-Assessment
    │   │   └── ...
    │   ├── Exploitation
    │   │   └── ...
    │   ├── Post-Exploitation
    │   │   └── ...
    │   └── Lateral-Movement
    │       └── ...
    ├── Reporting
    │   └── ...
    └── Results
        └── ...
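A layout like this can be bootstrapped in seconds with a short shell script. The base path below is an assumption; point it wherever you keep engagement data:

```shell
#!/usr/bin/env bash
# Create the OS-first directory layout shown above.
# BASE is illustrative -- adjust it to your own conventions.
BASE="${BASE:-$HOME/Penetration-Testing}"

mkdir -p "$BASE/Pre-Engagement"
for os in Linux Windows; do
    for phase in Information-Gathering Vulnerability-Assessment \
                 Exploitation Post-Exploitation Lateral-Movement; do
        mkdir -p "$BASE/$os/$phase"
    done
done
mkdir -p "$BASE/Reporting" "$BASE/Results"
```

Running tree against the base path afterwards should reproduce the listing.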

If you specialize in particular pentest domains, feel free to adapt the structure to suit those areas. Teams should agree on one layout so everyone knows where to place and find artifacts. Here’s an alternative that emphasizes test type first, then OS:

Code snippet

Cry0l1t3@mysite[/mysite]$ tree ..

└── Penetration-Testing
    │
    ├── Pre-Engagement
    │   └── ...
    ├── Network-Pentesting
    │   ├── Linux
    │   │   ├── Information-Gathering
    │   │   │   └── ...
    │   │   ├── Vulnerability-Assessment
    │   │   │   └── ...
    │   │   └── ...
    │   ├── Windows
    │   │   ├── Information-Gathering
    │   │   │   └── ...
    │   │   └── ...
    │   └── ...
    ├── WebApp-Pentesting
    │   └── ...
    ├── Social-Engineering
    │   └── ...
    ├── .......
    │   └── ...
    ├── Reporting
    │   └── ...
    └── Results
        └── ...

Proper structure helps you track work and spot process gaps. As you progress through training and other courses, save your cheat sheets, scripts, and notes into these folders so nothing critical is missed in future engagements. For newcomers, organizing by operating system is a sensible starting point.

When working as a team, document roles and expected activities so items don’t end up in the wrong folders and evidence doesn’t get misplaced or corrupted.


🔖 Bookmarks

Browser extensions and bookmarks can drastically boost efficiency — but reinstalling them repeatedly wastes time. Firefox allows syncing of add-ons and bookmarks via a Firefox account; logging into that account on a new machine will restore your curated environment automatically.

Be careful never to save sensitive or client-specific resources to a synced account. Assume the bookmark list will eventually be seen by others. Create a dedicated penetration-testing account for syncing, and if you must import client-related links, keep them locally and import them only into the pentest account. After that, change or remove them from your private account.


πŸ” Password Manager

Password managers are indispensable for both normal users and pentesters. A common attack vector is password reuse — credentials harvested from one system often work across multiple services. Password managers address the three main password problems:

  • Complexity — Secure passwords are hard for humans to invent and remember.
  • Reuse — Reusing the same password on many services creates broad exposure.
  • Remembering — Users forget many different credentials without a vault.

Popular options include 1Password, LastPass, Keeper, Bitwarden, and Proton Pass. These tools let you store many strong, unique credentials while remembering only one master password. Proton Pass is a solid choice thanks to its free plan, paid tiers, built-in 2FA support, and dark-web monitoring.
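For throwaway lab credentials (not a replacement for a manager’s built-in generator), a strong random password can also be produced straight from the shell. The character set and length below are arbitrary choices:

```shell
# Generate a 20-character random password from the kernel's CSPRNG.
# LC_ALL=C keeps tr operating on raw bytes regardless of locale.
PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9!@#%+=' < /dev/urandom | head -c 20)"
echo "$PASSWORD"
```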


🔄 Updates & Automation

Keep your OS images, toolsets, and GitHub collections updated before each engagement. Record the resources you rely on and their locations so you can automate retrieval later. Store automation scripts in a versioned place — Proton, GitHub, or a self-hosted server — to fetch them quickly when needed.

Automation scripts are OS dependent: you’ll likely write Bash for Linux, PowerShell for Windows, and Python for cross-platform tasks. Building automation is a practical way to learn scripting and makes reinstallations or provisioning much faster. Keep your cheat sheets and notes current so automation can evolve alongside your toolkit.
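As a sketch of such automation, the function below pulls every Git checkout found under a tools directory; the directory name and layout are assumptions:

```shell
#!/usr/bin/env bash
# update_tools: run `git pull` in every repository under the given directory.
# Directories without a .git folder are skipped silently.
update_tools() {
    local dir="$1" repo
    for repo in "$dir"/*/; do
        [ -d "$repo/.git" ] || continue
        echo "[*] Updating ${repo%/}"
        git -C "$repo" pull --ff-only
    done
}

# Demo against a scratch directory; in practice point it at e.g. ~/tools.
update_tools "$(mktemp -d)"
```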


πŸ“ Note Taking

Notes capture the many details that are impossible to retain mentally during an engagement. There are five key information types you should log:

  1. Newly discovered information (IPs, usernames, passwords, source code snippets)
  2. Ideas for further tests and follow-up tasks
  3. Scan results
  4. General logs of activity/commands
  5. Screenshots and visual evidence

1. Discovered Information

Record any useful findings from OSINT, scanning, or manual review — IPs, creds, endpoints, etc. These are the items you’ll reuse later in pivoting, exploitation, or reporting.

2. Processing / Follow-up Ideas

Capture hypotheses and tasks that emerge while testing. The volume of data can be overwhelming, so jot down everything that looks worth investigating to avoid forgetting potential leads. Tools like Notion, XMind, Obsidian, and VS Code with the Foam extension work well:

  • Notion — web-based, flexible markdown editor for structured notes and collaboration.
  • XMind — great for mind maps and visual process flows.
  • Obsidian — local markdown vault with powerful linking for knowledge bases.
  • VS Code + Foam — a lightweight personal wiki in your editor, useful if you prefer local files.

3. Results

Store all scan outputs and intermediate results. Over time you’ll learn to spot what’s important — practice trains this skill — but preserve everything, because an apparently insignificant item can become valuable later. Tools such as GhostWriter or Pwndoc can help turn notes and findings into structured reports.

4. Logging

Logs protect both you and the client. If something goes wrong during a test, your logs prove the actions you performed and their timing. For precise timestamps and command capture:

  • Use date and customize your shell prompt (PS1) to include timestamps.
  • Use script on Linux to capture an entire terminal session to a file. Example:

Code snippet

Cry0l1t3@mysite[/mysite]$ script 03-21-2021-0200pm-exploitation.log
...perform commands...
Cry0l1t3@mysite[/mysite]$ exit

  • On Windows, use PowerShell’s Start-Transcript / Stop-Transcript:

PowerShell

C:\> Start-Transcript -Path "C:\Pentesting\03-21-2020-0200pm-exploitation.log"
C:\> ...commands...
C:\> Stop-Transcript

Define and follow a logging filename convention such as <date>-<start time>-<activity>.log so files sort chronologically. Terminal multiplexers such as tmux and many terminal emulators can record sessions automatically. If a tool won’t write its own log, redirect its output or pipe it through tee:

Linux

Bash

./custom-tool.py 10.129.28.119 >> logs.custom-tool
./custom-tool.py 10.129.28.119 | tee -a logs.custom-tool

Windows (PowerShell)

PowerShell

.\custom-tool.ps1 10.129.28.119 >> logs.custom-tool
.\custom-tool.ps1 10.129.28.119 | Out-File -Append logs.custom-tool

Good logging makes it easier for teammates to follow what was done and lets you analyze and optimize your workflow later. If you repeat sequences of commands often, consider scripting them.
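Filenames following the convention above can be generated rather than typed by hand. The helper below is one possible interpretation of the <date>-<start time>-<activity>.log format:

```shell
# log_name: build a filename like 03-21-2021-0200pm-exploitation.log
# from the current date and time plus an activity label.
log_name() {
    printf '%s-%s-%s.log' "$(date +%m-%d-%Y)" \
        "$(date +%I%M%p | tr 'A-Z' 'a-z')" "$1"
}

log_name exploitation
```

Combined with script, a fully named session capture becomes: script "$(log_name exploitation)".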

5. Screenshots

Screenshots are quick, indisputable proof for PoC and reporting. Flameshot is an excellent screenshot tool that includes simple annotation features and can be installed via apt or downloaded from GitHub. When you need to show a sequence of steps, use a screen-recording GIF tool like Peek to capture animated evidence.


💻 Virtualization

Virtualization means creating software-based versions of computing resources. Both hardware and software can be abstracted into “virtual” or “logical” components that behave like their physical equivalents. The big benefit is the layer of abstraction between real hardware and the virtual instance. This concept underpins many cloud services that are now standard in business environments. Keep in mind: virtualization is different from simulation and emulation.

By representing hardware, software, storage, and networking as virtual resources, we can allocate them flexibly and on demand to multiple users, improving overall utilization. A key objective is running applications on platforms that wouldn’t normally support them. Common categories include:

  • Hardware virtualization
  • Application virtualization
  • Storage virtualization
  • Data virtualization
  • Network virtualization

Hardware virtualization uses a hypervisor to expose hardware independently of its physical form. The most familiar example is the virtual machine (VM) — a full computer system (hardware + OS) that runs as a guest on a physical host. With VirtualBox, you can also install Guest Additions, a driver/tool bundle that improves performance and usability inside guest OSes.

Virtual Machines

A virtual machine is an operating system instance running on top of a real computer (the host). Multiple, isolated VMs can run at the same time. The hypervisor parcels out CPU, RAM, disk, and network to each VM, which acts like an independent machine without affecting the others.

From inside the VM, applications and the OS operate as if installed on bare metal; they’re unaware of the virtualization layer. There is usually some performance overhead because the hypervisor itself consumes resources. Even so, VMs provide strong advantages over installing directly on physical hardware:

  • Services inside one VM don’t interfere with others
  • Guest systems are decoupled from the host OS and physical hardware
  • VMs can be migrated or cloned simply by copying files
  • Hypervisors can adjust hardware allocations dynamically
  • Hardware gets used more effectively overall
  • Systems/apps can be provisioned much faster
  • Centralized, simplified management
  • Higher availability thanks to reduced dependence on a single physical box

Introduction to VirtualBox

VirtualBox is a free alternative to VMware Workstation Pro. It stores virtual disks in VDI files and also supports VMDK (VMware), VHD, and other formats. You can convert disks with VBoxManage, VirtualBox’s CLI. Installation can be done from your package manager or by downloading the installer from the official site.
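Conversion with VBoxManage looks like the sketch below. The disk names are placeholders, and the --dry-run switch is an invention for this example so the command can be inspected on machines without VirtualBox installed:

```shell
# vmdk_to_vdi: convert a VMware VMDK disk to VirtualBox's native VDI format.
# With --dry-run, only print the VBoxManage command instead of running it.
vmdk_to_vdi() {
    local src="$1" dst="${1%.vmdk}.vdi"
    if [ "$2" = "--dry-run" ]; then
        echo "VBoxManage clonemedium disk $src $dst --format VDI"
    else
        VBoxManage clonemedium disk "$src" "$dst" --format VDI
    fi
}

# Show the command for a placeholder disk without touching VBoxManage.
vmdk_to_vdi win10-disk.vmdk --dry-run
```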

Download VirtualBox (open-source virtualization for personal and enterprise use) from the official site: https://www.virtualbox.org/

On Ubuntu, you can install VirtualBox and its Extension Pack together:

Code snippet

cry0l1t3@mysite[/mysite]$ sudo apt install virtualbox virtualbox-ext-pack -y

The Extension Pack adds features like:

  • USB 2.0/3.0 passthrough
  • VirtualBox RDP
  • Disk encryption
  • PXE boot support
  • NVMe support

Proxmox

Proxmox is an open-source, enterprise virtualization platform used widely in business and data centers. It combines KVM (full virtualization) and LXC (containers) with robust management.


Its three main products (downloadable):

  • Proxmox Virtual Environment (VE)
  • Proxmox Backup Server
  • Proxmox Mail Gateway

Proxmox lets you create full virtualized labs and complex networks with VMs and containers. You can even try Proxmox VE inside VirtualBox to experiment without extra hardware.

Quick start inside VirtualBox:

  1. Create a new VM and attach the Proxmox VE ISO.
  2. Allocate resources — at least 4 GB RAM and 2 CPUs recommended.
  3. Review settings and start the VM.
  4. When it boots, choose the graphical installer.
  5. Follow the prompts carefully. After installation, you’ll see the login prompt and the web management URL.
  6. Sign in to the web dashboard with the credentials set during install:
    • Username: root
    • Password: your chosen password
  7. Once inside the dashboard, you’ll manage your virtualized Datacenter β€” upload VM images, create containers, configure networks, and more. These resources live entirely within Proxmox; you don’t need to add each one separately to VirtualBox.