Network Address Translation (NAT)
NAT was designed to conserve IPv4 addresses by letting many internal devices use private address ranges. A NAT router rewrites outbound packets and keeps a table so replies can be sent back to the correct internal system.
NAT operates at Layer 3 but must also inspect and modify Layer 4 headers (TCP/UDP ports) so it can track connections in its translation table.
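To make the translation step concrete, here is a minimal sketch in Python. The helper names and the single public address are assumptions for illustration; a real NAT also handles timeouts, ICMP, port collisions, and much more.

```python
# Minimal NAT sketch: rewrite the outbound source (IP, port) to the router's
# public address and remember the mapping so replies can be translated back.
PUBLIC_IP = "203.0.113.1"   # example address from the RFC 5737 documentation range

nat_table = {}    # public port -> (internal IP, internal port)
reverse = {}      # (internal IP, internal port) -> public port
next_port = 40000

def translate_outbound(src_ip, src_port, dst_ip, dst_port):
    """Rewrite an outbound packet's source, creating a table entry if needed."""
    global next_port
    key = (src_ip, src_port)
    if key not in reverse:
        reverse[key] = next_port
        nat_table[next_port] = key
        next_port += 1
    return (PUBLIC_IP, reverse[key], dst_ip, dst_port)

def translate_inbound(src_ip, src_port, dst_ip, dst_port):
    """Map a reply back to the internal host, or drop unsolicited traffic."""
    internal = nat_table.get(dst_port)
    if internal is None:
        return None   # no table entry: unsolicited inbound packet, dropped
    return (src_ip, src_port, *internal)

translate_outbound("192.168.1.10", 51000, "198.51.100.7", 443)
print(translate_inbound("198.51.100.7", 443, PUBLIC_IP, 40000))  # reply delivered
print(translate_inbound("198.51.100.7", 443, PUBLIC_IP, 49999))  # None: dropped
```

The drop in translate_inbound is exactly the behavior described next: without a matching table entry, inbound traffic has nowhere to go.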
NAT provides an important security benefit: external hosts cannot initiate connections to internal systems. The NAT router only creates translation table entries when internal hosts start connections, so it blocks all unsolicited inbound traffic by default. An external attacker can't send packets to 192.168.1.10 because that address isn't routable on the Internet, and even if they somehow could, the router has no translation entry for it.
This isn't perfect security since internal hosts can still make outbound connections that could be exploited, but it's a significant improvement over every internal device having a public IP address directly accessible from the Internet.
First-Generation Firewalls: Packet Filters
A packet filter (also called a screening router) sits at a network boundary and makes an independent decision for each packet. It examines packet headers and allows or drops each packet based on:
- Source and destination IP addresses
- Source and destination ports (for TCP/UDP)
- Protocol type (TCP, UDP, ICMP, etc.)
- Network interface (which physical port on the router)
Rules are evaluated in order, and processing stops at the first match. This means rule ordering is critical: a broad rule high in the list can shadow more specific rules below it.
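A first-match evaluator is only a few lines, which makes the shadowing problem easy to demonstrate. The rule-tuple format below is an assumption for illustration, not any particular vendor's syntax.

```python
from ipaddress import ip_address, ip_network

# Each rule: (action, source network, destination port or None for "any").
rules = [
    ("deny",  ip_network("10.0.0.0/8"),  None),  # broad rule high in the list...
    ("allow", ip_network("10.1.2.0/24"), 443),   # ...completely shadows this one
    ("allow", ip_network("0.0.0.0/0"),   80),
]

def evaluate(src_ip, dst_port, default="deny"):
    for action, net, port in rules:
        if ip_address(src_ip) in net and (port is None or port == dst_port):
            return action   # processing stops at the first match
    return default

# The /24 allow rule never fires: 10.1.2.5 already matched the /8 deny.
print(evaluate("10.1.2.5", 443))   # -> "deny"
```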
Ingress filtering
Ingress filtering applies to inbound traffic and typically follows a “default deny” model. Common ingress rules:
- Block traffic that should never appear from the Internet (such as packets with private source addresses).
- Block packets that claim to come from your own internal network (spoofed traffic).
Egress filtering
Egress filtering controls outbound traffic from internal networks to external ones. Even though internal hosts are generally trusted, egress rules limit what a compromised host can do. Useful egress filters (see the sketch after this list) can:
- Limit which protocols internal hosts can use to leave the network.
- Prevent compromised hosts from freely downloading malware or talking to command-and-control servers.
- Log unusual outbound traffic patterns.
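Continuing the rule-tuple format from the evaluator sketch above, minimal ingress and egress rule lists might look like the following. All addresses and ports here are illustrative assumptions; a real policy is site-specific.

```python
from ipaddress import ip_network

PRIVATE = [ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
INTERNAL = ip_network("198.51.100.0/24")   # pretend our site owns this public block

ingress_rules = (
    # Drop anything arriving from the Internet with a private source address,
    [("deny", net, None) for net in PRIVATE]
    # and anything claiming to come from our own addresses (spoofing).
    + [("deny", INTERNAL, None)]
)

egress_rules = [
    ("allow", INTERNAL, 443),    # outbound HTTPS
    ("allow", INTERNAL, 53),     # outbound DNS
    ("deny",  INTERNAL, None),   # everything else is dropped (and logged)
]
```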
Second-Generation Firewalls: Stateful Packet Inspection (SPI)
First-generation packet filters examine each packet independently without remembering past packets. But network protocols like TCP create ongoing conversations between hosts. Stateful packet inspection firewalls track the state of these conversations.
Stateful firewalls track:
- TCP connection state, including the SYN, SYN-ACK, and ACK handshake
- Return traffic, allowing it only if the internal host initiated the connection
- Related connections, where a protocol negotiates additional connections after an initial control session
Stateful inspection prevents packets from being injected into existing connections, blocks invalid protocol sequences, and supports protocols that rely on multiple coordinated flows.
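The core of stateful inspection is a connection table. The sketch below, with hypothetical helper names, admits an inbound packet only as the return half of a connection an internal host already opened; a real SPI engine also tracks TCP flags, sequence numbers, and timeouts.

```python
# Entries: (internal IP, internal port, remote IP, remote port)
connections = set()

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    """An internal host initiates a connection: remember its state."""
    connections.add((src_ip, src_port, dst_ip, dst_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    """Permit inbound traffic only if it reverses a known connection."""
    return (dst_ip, dst_port, src_ip, src_port) in connections

record_outbound("192.168.1.10", 51000, "198.51.100.7", 443)
print(inbound_allowed("198.51.100.7", 443, "192.168.1.10", 51000))  # True
print(inbound_allowed("198.51.100.7", 443, "192.168.1.10", 51001))  # False: no state
```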
Security Zones and the DMZ
Organizations rarely have a single “internal” network. Instead, they divide networks into zones with different trust levels and use firewalls to control traffic between zones.
The DMZ (demilitarized zone) is a network segment that hosts Internet-facing services like web servers, mail servers, or DNS servers. These systems must be accessible from the Internet, making them prime targets for attack. The DMZ isolates them from internal networks so that if they're compromised, attackers don't gain direct access to internal systems.
Typical firewall policies between zones (expressed as a small policy matrix in the sketch after this list) are:
- Internet → DMZ: allow only specific services (like HTTPS) to specific servers
- Internet → Internal: block entirely (no direct inbound connections)
- Internal → DMZ: allow only what applications and administrators need
- DMZ → Internal: allow only essential connections
- DMZ → Internet: allow only what the DMZ services require
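One way to express this policy is a small matrix keyed by (source zone, destination zone). The service names below are assumptions for illustration; an empty set means the path is blocked entirely.

```python
ZONE_POLICY = {
    ("internet", "dmz"):      {"https"},          # only specific services
    ("internet", "internal"): set(),              # blocked entirely
    ("internal", "dmz"):      {"https", "ssh"},   # what apps and admins need
    ("dmz", "internal"):      {"postgres"},       # only essential connections
    ("dmz", "internet"):      {"dns", "smtp"},    # what DMZ services require
}

def zone_allowed(src_zone, dst_zone, service):
    # Unknown zone pairs fall through to an empty set: default deny.
    return service in ZONE_POLICY.get((src_zone, dst_zone), set())

print(zone_allowed("internet", "dmz", "https"))       # True
print(zone_allowed("internet", "internal", "https"))  # False
```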
Network segmentation
Network segmentation extends this concept inside the organization. Instead of one big internal network, you create separate segments for different functions or sensitivity levels. Examples include separate segments for web servers, application servers, database servers, HR systems, development environments, and guest WiFi.
Segmentation provides several benefits:
- Limits lateral movement after a compromise (an attacker who breaches one segment can't freely access others)
- Reduces the blast radius of an attack
- Helps enforce least privilege (systems only have access to what they actually need)
- Makes policies simpler within each segment
Third-Generation Firewalls: Deep Packet Inspection (DPI)
Deep Packet Inspection (DPI) examines application-layer data, not just IP and transport headers. This lets firewalls understand what applications are doing and make more intelligent decisions.
DPI capabilities include:
- Filtering based on destination hostname: Even for encrypted connections, the TLS ClientHello typically carries the requested server name in cleartext (the SNI extension), so DPI can block connections to specific websites
- Validating protocols: Checking that HTTP requests follow the expected structure, that DNS responses match query IDs, and that SMTP uses valid commands
- Filtering content: Detecting unwanted file types or active content in traffic
- Detecting some types of malware: Matching patterns inside packets against known malware signatures
DPI must keep up with network speeds and can only buffer and inspect a limited portion of the traffic. Encrypted traffic cannot be inspected deeply unless the firewall performs TLS interception, which replaces server certificates and breaks true end-to-end encryption.
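As a concrete example of what DPI can read without decrypting anything, the sketch below pulls the SNI hostname out of a raw TLS ClientHello. It is a deliberately simplified parser (no bounds checking, and it assumes the whole ClientHello arrives in a single record), not production DPI code.

```python
import struct

def extract_sni(client_hello: bytes):
    """Return the server name from a TLS ClientHello record, or None."""
    if len(client_hello) < 43 or client_hello[0] != 0x16:  # 0x16 = handshake record
        return None
    pos = 5                       # skip record header: type, version, length
    if client_hello[pos] != 0x01:                          # 0x01 = ClientHello
        return None
    pos += 4                      # handshake type + 3-byte length
    pos += 2 + 32                 # client version + random
    pos += 1 + client_hello[pos]  # session ID (1-byte length prefix)
    (cs_len,) = struct.unpack_from("!H", client_hello, pos)
    pos += 2 + cs_len             # cipher suites
    pos += 1 + client_hello[pos]  # compression methods
    (ext_total,) = struct.unpack_from("!H", client_hello, pos)
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:         # walk the extensions
        ext_type, ext_len = struct.unpack_from("!HH", client_hello, pos)
        pos += 4
        if ext_type == 0x0000:    # server_name extension
            # Skip the 2-byte list length and 1-byte name type, then read
            # the 2-byte name length and the name itself.
            (name_len,) = struct.unpack_from("!H", client_hello, pos + 3)
            return client_hello[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

A firewall that extracts the SNI this way can match it against a blocklist and drop the connection before the handshake even completes.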
Deep Content Inspection (DCI)
Deep Content Inspection (DCI) goes beyond simple DPI by:
- Reassembling flows that span multiple packets
- Decoding encoded content, such as email attachments
- Examining patterns across multiple connections
Because this is computationally expensive, DCI is usually applied only to traffic that has already been flagged or that matches specific criteria.
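A miniature version of the decode-then-match step might look like this. It uses Python's standard email module to decode attachments; the signature here is just the opening bytes of the EICAR test string, standing in for a real signature database.

```python
import email
from email import policy

EICAR_PREFIX = b"X5O!P%@AP"   # start of the standard antivirus test string

def message_has_flagged_attachment(raw_message: bytes) -> bool:
    """Parse a reassembled mail message and scan decoded attachment bytes."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        payload = part.get_payload(decode=True)  # undoes base64 / quoted-printable
        if payload and EICAR_PREFIX in payload:
            return True   # the pattern never appears in the base64 wire bytes
    return False
```

The key point is that matching happens on decoded content: a scanner looking only at raw packets would miss a base64-encoded attachment entirely.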
Intrusion Detection and Prevention Systems
An IDS (Intrusion Detection System) monitors traffic and raises alerts when it sees suspicious behavior. An IPS (Intrusion Prevention System) sits inline and blocks traffic it identifies as malicious before it reaches its destination. IDS is passive (monitor and alert), while IPS is active (monitor and block).
Detection techniques used by IDS/IPS systems:
Protocol-based detection checks that traffic strictly follows protocol specifications. This includes validating HTTP headers and message structure, ensuring DNS responses match outstanding queries, restricting SMTP to valid commands, and verifying SIP signaling messages. This helps block attacks that rely on malformed or unexpected input.
Signature-based detection compares traffic patterns against a database of known attack signatures. Each signature describes byte sequences or packet patterns that correspond to a specific attack. This is effective for known threats but must be updated frequently and cannot detect new, unknown (zero-day) attacks.
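At its simplest, signature matching is substring search over traffic, as in the toy engine below. The patterns are illustrative; real systems compile thousands of signatures into multi-pattern matchers (such as Aho-Corasick) and qualify them with offsets, protocol context, and flow state.

```python
# Toy signature database: byte pattern -> alert name.
SIGNATURES = {
    b"X5O!P%@AP": "EICAR test file",
    b"/etc/passwd": "possible path traversal attempt",
}

def match_signatures(payload: bytes):
    """Return the alert names for every signature found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"GET /../../etc/passwd HTTP/1.1"))
# -> ['possible path traversal attempt']
```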
Anomaly-based detection learns what "normal" traffic looks like and flags deviations. Examples include port scanning activity, unusual protocol mixes, or abnormally high traffic volumes. The main challenge is avoiding false positives, since legitimate changes in behavior can look like anomalies.
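Port-scan detection is a classic anomaly rule: one source touching many distinct ports on a host rarely looks like normal traffic. The threshold below is an arbitrary assumption; real systems learn baselines per network and adjust for time of day.

```python
from collections import defaultdict

SCAN_THRESHOLD = 20             # arbitrary illustrative cutoff
ports_seen = defaultdict(set)   # (source IP, destination IP) -> ports contacted

def observe(src_ip, dst_ip, dst_port):
    """Track distinct destination ports per source/destination pair."""
    ports_seen[(src_ip, dst_ip)].add(dst_port)
    if len(ports_seen[(src_ip, dst_ip)]) == SCAN_THRESHOLD:
        print(f"ALERT: possible port scan from {src_ip} against {dst_ip}")

for port in range(1, 30):   # a single source sweeping low ports
    observe("203.0.113.9", "192.168.1.10", port)
```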
Challenges for IDS/IPS
Deploying IDS and IPS systems at scale introduces several practical challenges:
- Volume: High traffic rates can overwhelm inspection capacity, and high false-positive rates lead to alert fatigue.
- Encryption: Encrypted traffic cannot be inspected deeply without TLS interception.
- Performance: Deep inspection and pattern matching are computationally expensive.
- Evasion: Attackers use fragmentation, encoding tricks, and timing variations to avoid detection.
- Evolving threats: New attacks require constant updates to signatures and detection models.
Next-Generation Firewalls (NGFW)
NGFWs combine multiple capabilities into one platform: stateful inspection, deep packet inspection, intrusion prevention, TLS inspection, and application and user awareness.
They identify applications by analyzing:
- TLS metadata, such as the advertised server name
- Distinctive protocol message patterns
- Traffic characteristics such as packet timing and direction
- Vendor-maintained DPI signatures for well-known applications
The key capability is distinguishing applications that all use the same port. Traditional firewalls see only "HTTPS traffic on port 443"; an NGFW can tell Zoom from Dropbox from Netflix, even though all three use HTTPS on port 443, and apply a different policy to each.
This identification is not foolproof: it can be evaded by traffic obfuscation (tunnels inside TLS, domain fronting, protocol mimicry). And NGFWs still cannot see which local process on a host created the traffic; that level of visibility requires host-based firewalls.
Application Proxies
An application proxy sits between clients and servers as an intermediary. The client connects to the proxy, which then opens a separate connection to the real server. This means the proxy terminates one connection and initiates another.
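The mechanics are easiest to see in a minimal TCP relay. This sketch just copies bytes in both directions; a real application proxy would parse and police the protocol at the marked point. The host and port in the commented call are placeholders.

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Relay bytes one way until EOF. A real proxy would validate the
    application protocol here instead of copying blindly."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def run_proxy(listen_port: int, server_host: str, server_port: int) -> None:
    with socket.create_server(("", listen_port)) as listener:
        while True:
            client, _ = listener.accept()
            # Terminate the client's TCP connection and open a separate one
            # to the real server: two independent connections, not one.
            upstream = socket.create_connection((server_host, server_port))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# run_proxy(8080, "example.org", 80)   # placeholder destination
```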
A proxy can terminate TLS and inspect plaintext, but only if it is configured to do so and clients trust the proxy's root certificate.
Application proxies can:
- Enforce protocol correctness at the application layer
- Filter or rewrite content before it reaches the client or server
- Hide internal server addresses and structure
- Provide a single point for logging and monitoring
The drawbacks are that proxies must understand each protocol in detail and can become performance bottlenecks if they handle large amounts of traffic.
Host-Based Firewalls
Host-based (or personal) firewalls run on individual systems instead of at the network perimeter. They integrate with the operating system so they can associate each network connection with a specific executable.
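To see what that per-process association looks like, the short sketch below lists established connections together with the owning executable. It assumes the third-party psutil package is installed, and on most operating systems it needs elevated privileges to see other users' connections.

```python
import psutil   # third-party: pip install psutil (an assumption here)

# The association a network firewall cannot make: connection -> local process.
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.pid:
        name = psutil.Process(conn.pid).name()
        print(f"{name} (pid {conn.pid}): {conn.laddr} -> {conn.raddr}")
```

A host-based firewall bases its allow/deny decision on exactly this kind of record, so it can permit one executable to reach the network while blocking another.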
Host-based firewalls can:
- Block unauthorized inbound connections
- Restrict which applications may send or receive network traffic
- Adjust rules depending on the current network (for example, home vs. public Wi-Fi)
- Limit what malware can do if it runs on the system
Their limitations include:
- If malware gains administrative privileges, it can disable or reconfigure the firewall.
- Users may approve prompts from malicious applications just to make them go away.
- Deep inspection on the host can introduce performance overhead.
- They only see traffic to and from that one system.
Host-based firewalls work best as part of defense in depth, combined with network firewalls and other controls.
Zero Trust Architecture (ZTA)
The Problem: Deperimeterization
The traditional perimeter model assumed a clear boundary between trusted internal networks and untrusted external networks.
This model is breaking down for several reasons:
- Mobile devices move constantly between trusted and untrusted networks
- Cloud computing means applications and data run outside the datacenter, beyond the traditional perimeter
- Insider threats demonstrate that being inside the network doesn't guarantee trustworthiness
- Compromised systems allow lateral movement once an attacker breaches one internal system
- Web applications mean browsers interact with countless external services, blurring the perimeter
The assumption that "inside equals safe" is no longer valid.
Zero Trust Principles
Zero trust abandons the idea that being "inside" a network is enough to be trusted. Instead, each access request is evaluated independently using identity, device state, and context, regardless of the source network.
Core principles of zero trust:
- Never trust, always verify: Don't assume anything is safe just because it's inside the network
- Least privilege access: Users and systems get only the specific access they need, not broad network access
- Assume breach: Design systems assuming attackers are already inside, so limit what they can do
- Verify explicitly: Use multiple signals (user identity, device health, location, behavior) to make access decisions
- Microsegmentation: Divide the network into very small segments so even a successful compromise is contained
In practice, implementing zero trust is challenging. Ideally, security would be built into applications themselves, with applications authenticating users, verifying authorization, and encrypting data end-to-end. But most existing applications weren't designed this way.
As a practical approach, organizations often implement these ideas through Zero Trust Network Access (ZTNA) systems. These create VPN-like connections between authenticated devices, enforce strict access control policies, monitor and log all access, and use multi-factor authentication. Unlike traditional VPNs that often grant broad network access, ZTNA restricts users and devices to only the specific applications they're authorized to reach.
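A per-request decision combining identity, device posture, and authorization might be sketched like this. The field names, posture checks, and rules are illustrative assumptions, not any particular vendor's policy model.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool
    device_compliant: bool   # e.g., disk encrypted, patched, EDR running
    application: str

# Least privilege: (user, application) pairs, not broad network access.
AUTHORIZED = {("alice", "payroll"), ("alice", "wiki"), ("bob", "wiki")}

def decide(req: AccessRequest) -> str:
    """Evaluate one request on its own merits, regardless of source network."""
    if not req.mfa_passed:
        return "deny: MFA required"
    if not req.device_compliant:
        return "deny: device fails posture check"
    if (req.user, req.application) not in AUTHORIZED:
        return "deny: not authorized for this application"
    return "allow"   # access to this one application only

print(decide(AccessRequest("alice", True, True, "payroll")))  # allow
print(decide(AccessRequest("bob", True, True, "payroll")))    # deny
```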
Microsegmentation in zero trust
Traditional segmentation divides the network into a handful of zones (DMZ, internal, guest). Microsegmentation divides it into very small segments, potentially one per application or per individual virtual machine.
In a microsegmented environment, a compromised web server can't reach database servers for other applications, a compromised workstation can't scan the network or move laterally, and each workload has precisely defined communication policies. This is often enabled by software-defined networking and virtualization technologies. In many deployments, microsegmentation is enforced by distributed firewalls inside the hypervisor or container runtime rather than by perimeter firewalls.
Microsegmentation supports zero trust by ensuring that even if an attacker gains initial access, they're contained within a very limited environment.
Defense in Depth
Modern network security relies on multiple layers of protection so that one failure does not compromise the entire environment. Key layers include:
- Perimeter firewalls: Stateful inspection and DPI
- DMZ: Public-facing services isolated from internal networks
- Network segmentation: Internal networks divided by function and sensitivity
- VPN: Remote access with multi-factor authentication
- TLS: Encryption for all application traffic
- Zero trust: Per-request authorization and device posture checks
- Microsegmentation: Containment of critical workloads in tightly limited environments
- Host-based firewalls: Endpoint-level control over application network access
- IDS/IPS: Monitoring and blocking of known or suspicious activity
These layers work together to create resilience rather than relying on any single security mechanism.