
Network Protection -- Firewalls and Zero Trust

Study Guide

Paul Krzyzanowski – 2025-11-16

Network Address Translation (NAT)

NAT was designed to conserve IPv4 addresses by letting many internal devices use private address ranges. A NAT router rewrites outbound packets and keeps a table so replies can be sent back to the correct internal system.

NAT operates at Layer 3 but must also inspect and modify Layer 4 headers (TCP/UDP ports) so it can track connections in its translation table.
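As a rough illustration, here is a minimal sketch of a port-translating NAT table in Python. The addresses, the port range, and the two helper functions are invented for the example; a real NAT also handles timeouts, ICMP, and protocol quirks.

    import itertools

    PUBLIC_IP = "203.0.113.5"            # router's public address (example)
    _next_port = itertools.count(40000)  # next available public-side port

    nat_table = {}   # (internal_ip, internal_port) -> public_port
    reverse = {}     # public_port -> (internal_ip, internal_port)

    def translate_outbound(src_ip, src_port):
        """Rewrite an outbound packet's source; add a table entry if needed."""
        key = (src_ip, src_port)
        if key not in nat_table:
            public_port = next(_next_port)
            nat_table[key] = public_port
            reverse[public_port] = key
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(dst_port):
        """Map a reply back to the internal host, or drop unsolicited traffic."""
        return reverse.get(dst_port)  # None -> no entry -> drop

    # An internal host opens a connection; a reply to the mapped port returns.
    print(translate_outbound("192.168.1.10", 51515))  # ('203.0.113.5', 40000)
    print(translate_inbound(40000))                   # ('192.168.1.10', 51515)
    print(translate_inbound(40001))                   # None: unsolicited, drop

Note that the last call returns None: with no translation entry, the router has nowhere to forward the packet, which is exactly the "blocks unsolicited inbound traffic" behavior described below.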

NAT provides an important security benefit: external hosts cannot initiate connections to internal systems. The NAT router only creates translation table entries when internal hosts start connections, so it blocks all unsolicited inbound traffic by default. An external attacker can't send packets to 192.168.1.10 because that address isn't routable on the Internet, and even if they somehow could, the router has no translation entry for it.

This isn't perfect security since internal hosts can still make outbound connections that could be exploited, but it's a significant improvement over every internal device having a public IP address directly accessible from the Internet.

First-Generation Firewalls: Packet Filters

A packet filter (also called a screening router) sits at a network boundary and makes an independent decision for each packet. It examines packet headers and allows or drops each packet based on:

- Source and destination IP addresses
- Transport protocol (TCP, UDP, ICMP)
- Source and destination port numbers
- The interface the packet arrives on or leaves from (direction)

Rules are evaluated in order, and processing stops at the first match. This means rule ordering is critical: a broad rule high in the list can shadow more specific rules below it.
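A minimal sketch of first-match evaluation in Python (the rule fields and addresses are invented for the example). It shows how a broad rule placed first shadows a more specific rule below it:

    # Each rule: (predicate, action). Rules are checked in order; the first
    # predicate that matches the packet decides its fate.
    rules = [
        (lambda p: p["proto"] == "tcp" and p["dst_port"] == 80, "allow"),
        # Shadowed: never reached, because the rule above already matches
        # all TCP/80 traffic, including traffic from this "bad" network.
        (lambda p: p["src_ip"].startswith("10.66.") and p["dst_port"] == 80,
         "deny"),
    ]

    DEFAULT = "deny"  # default-deny if no rule matches

    def filter_packet(packet):
        for predicate, action in rules:
            if predicate(packet):
                return action
        return DEFAULT

    pkt = {"proto": "tcp", "src_ip": "10.66.0.9", "dst_port": 80}
    print(filter_packet(pkt))  # "allow" -- the broad rule shadowed the deny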

Ingress and egress filtering

Ingress filtering applies to inbound traffic and typically follows a “default deny” model:

- Block all inbound traffic by default
- Explicitly allow only traffic to published services (e.g., TCP port 443 to the public web server)
- Drop inbound packets whose source address claims to be from the internal network (address spoofing)

Egress filtering

Egress filtering controls outbound traffic from internal networks to external ones. While we generally trust internal hosts, it is useful to restrict how a compromised internal host can communicate with the outside. Useful filters can:

- Drop outbound packets with spoofed (non-internal) source addresses, as sketched below
- Restrict which internal hosts or services may connect to the outside
- Block ports and protocols associated with malware command-and-control
- Log outbound connections to help detect data exfiltration
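The anti-spoofing rule is easy to sketch in Python: drop any outbound packet whose source address is not in the organization's own prefix. The prefix here is an example value.

    import ipaddress

    INTERNAL_NET = ipaddress.ip_network("192.168.1.0/24")  # example prefix

    def egress_ok(src_ip):
        """Permit outbound packets only if they claim an internal source."""
        return ipaddress.ip_address(src_ip) in INTERNAL_NET

    print(egress_ok("192.168.1.10"))  # True: legitimate internal source
    print(egress_ok("8.8.8.8"))       # False: spoofed source, drop it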

Second-Generation Firewalls: Stateful Packet Inspection (SPI)

First-generation packet filters examine each packet independently without remembering past packets. But network protocols like TCP create ongoing conversations between hosts. Stateful packet inspection firewalls track the state of these conversations.

Stateful firewalls track:

- The TCP connection life cycle (SYN, SYN-ACK, established, FIN/RST teardown)
- Sequence numbers, so injected out-of-window packets can be rejected
- UDP “pseudo-connections,” using timeouts since UDP carries no explicit state
- Related flows, such as an FTP data channel tied to its control channel

Stateful inspection prevents packets from being injected into existing connections, blocks invalid protocol sequences, and supports protocols that rely on multiple coordinated flows.
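A rough sketch of the core idea in Python (the state machine is drastically simplified and ignores sequence numbers and timeouts): inbound packets are accepted only if they belong to a connection an internal host already opened.

    # Connection table keyed by the flow 5-tuple.
    connections = {}  # (proto, src, sport, dst, dport) -> state

    def outbound(proto, src, sport, dst, dport, syn=False):
        key = (proto, src, sport, dst, dport)
        if syn:
            connections[key] = "SYN_SENT"
        return True  # outbound traffic from inside is allowed

    def inbound(proto, src, sport, dst, dport):
        # A reply reverses the source and destination of the outbound flow.
        key = (proto, dst, dport, src, sport)
        if key in connections:
            connections[key] = "ESTABLISHED"
            return True
        return False  # no matching state: drop

    outbound("tcp", "192.168.1.10", 51515, "198.51.100.7", 443, syn=True)
    print(inbound("tcp", "198.51.100.7", 443, "192.168.1.10", 51515))  # True
    print(inbound("tcp", "198.51.100.9", 443, "192.168.1.10", 51515))  # False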

Security Zones and the DMZ

Organizations rarely have a single “internal” network. Instead, they divide networks into zones with different trust levels and use firewalls to control traffic between zones.

The DMZ (demilitarized zone) is a network segment that hosts Internet-facing services like web servers, mail servers, or DNS servers. These systems must be accessible from the Internet, making them prime targets for attack. The DMZ isolates them from internal networks so that if they're compromised, attackers don't gain direct access to internal systems.

Typical firewall policies between zones are:

- Internet → DMZ: allow only traffic to the specific published services
- Internet → internal: deny
- DMZ → internal: deny or tightly restrict, so a compromised DMZ host cannot reach inward
- Internal → DMZ: allow the traffic needed to use and manage the servers
- Internal → Internet: allow, often with egress filtering and logging
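These policies amount to a lookup on (source zone, destination zone) with a default of deny. A minimal sketch in Python, with an invented policy set:

    # Default-deny policy matrix between zones.
    zone_policy = {
        ("internet", "dmz"):      "allow-specific",  # only published services
        ("internal", "dmz"):      "allow",
        ("internal", "internet"): "allow",           # often filtered/logged
        ("dmz", "internal"):      "deny",            # hacked DMZ host stays out
        ("internet", "internal"): "deny",
    }

    def check(src_zone, dst_zone):
        return zone_policy.get((src_zone, dst_zone), "deny")

    print(check("internet", "dmz"))    # allow-specific
    print(check("dmz", "internal"))    # deny
    print(check("guest", "internal"))  # deny (default)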

Network segmentation

Network segmentation extends this concept inside the organization. Instead of one big internal network, you create separate segments for different functions or sensitivity levels. Examples include separate segments for web servers, application servers, database servers, HR systems, development environments, and guest WiFi.

Segmentation provides several benefits:

- Limits lateral movement: a compromise in one segment does not automatically expose the others
- Enforces least privilege at the network level: segments communicate only as needed
- Contains the blast radius of malware and misconfigurations
- Simplifies monitoring and compliance, since sensitive systems sit in well-defined segments

Third-Generation Firewalls: Deep Packet Inspection (DPI)

Deep Packet Inspection (DPI) examines application-layer data, not just IP and transport headers. This lets firewalls understand what applications are doing and make more intelligent decisions.

DPI capabilities include:

- Identifying applications and protocols regardless of the port they use
- Detecting known attack patterns and malware signatures inside payloads
- Blocking specific content, file types, or URLs
- Enforcing application-level policy (e.g., allow web browsing but block file uploads)

DPI must keep up with network speeds and can only buffer and inspect a limited portion of the traffic. Encrypted traffic cannot be inspected deeply unless the firewall performs TLS interception, which replaces server certificates and breaks true end-to-end encryption.
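The sketch below (Python; the signature and packet contents are invented) shows why buffering matters: a signature can straddle a packet boundary, so scanning each packet in isolation misses it, while scanning a bounded reassembled buffer finds it.

    SIGNATURE = b"cmd.exe"       # example byte pattern to block
    MAX_BUFFER = 4096            # DPI can only afford a bounded buffer

    def scan_stream(segments):
        """Scan a TCP stream for the signature across segment boundaries."""
        buffer = b""
        for seg in segments:
            buffer = (buffer + seg)[-MAX_BUFFER:]   # keep a sliding window
            if SIGNATURE in buffer:
                return True
        return False

    # The signature is split across two packets.
    segments = [b"GET /download?file=cmd", b".exe HTTP/1.1\r\n"]
    print(any(SIGNATURE in s for s in segments))  # False: per-packet scan
    print(scan_stream(segments))                  # True: buffered scan

The bounded window is also the weakness: anything pushed outside the buffer escapes inspection, which is the "limited portion of the traffic" constraint noted above.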

Deep Content Inspection (DCI)

Deep Content Inspection (DCI) goes beyond simple DPI by:

- Reassembling entire objects (files, email messages, web pages) from many packets
- Decoding and unpacking formats such as base64 attachments and compressed archives
- Analyzing the complete decoded content rather than raw packet bytes

Because this is computationally expensive, DCI is usually applied only to traffic that has already been flagged or that matches specific criteria.
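A minimal sketch of the decode-then-scan idea (Python, standard library only; the "malicious" marker string is an invented stand-in): the pattern is invisible to a raw byte scan until the content is base64-decoded and decompressed.

    import base64, gzip

    BAD = b"EICAR-TEST"  # stand-in for a real malware signature

    def deep_scan(raw):
        """Scan raw bytes, then try to decode/decompress and scan again."""
        if BAD in raw:
            return True
        try:
            decoded = base64.b64decode(raw, validate=True)
            return BAD in decoded or BAD in gzip.decompress(decoded)
        except Exception:
            return False  # not base64+gzip; a real DCI tries many formats

    # Build an "email attachment": gzip-compressed, base64-encoded payload.
    attachment = base64.b64encode(gzip.compress(b"prefix " + BAD + b" suffix"))
    print(BAD in attachment)      # False: a simple byte scan sees nothing
    print(deep_scan(attachment))  # True: DCI decodes, then finds the signature

The try-many-decoders structure is what makes DCI expensive, which is why it is reserved for already-flagged traffic.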

Intrusion Detection and Prevention Systems

An IDS (Intrusion Detection System) monitors traffic and raises alerts when it sees suspicious behavior. An IPS (Intrusion Prevention System) sits inline and blocks traffic it identifies as malicious before it reaches its destination. IDS is passive (monitor and alert), while IPS is active (monitor and block).

IDS/IPS systems rely on three main detection techniques:

Protocol-based detection checks that traffic strictly follows protocol specifications. This includes validating HTTP headers and message structure, ensuring DNS responses match outstanding queries, restricting SMTP to valid commands, and verifying SIP signaling messages. This helps block attacks that rely on malformed or unexpected input.
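As a toy example (Python; the allowed-method set and length limit are illustrative choices), a protocol-aware check might validate an HTTP request line before letting it through:

    import re

    ALLOWED_METHODS = {"GET", "HEAD", "POST"}
    REQUEST_LINE = re.compile(r"^([A-Z]+) (\S{1,2048}) HTTP/1\.[01]$")

    def valid_request_line(line):
        """Accept only well-formed request lines with an expected method."""
        m = REQUEST_LINE.match(line)
        return bool(m) and m.group(1) in ALLOWED_METHODS

    print(valid_request_line("GET /index.html HTTP/1.1"))  # True
    print(valid_request_line("GET /a b HTTP/1.1"))         # False: malformed
    print(valid_request_line("TRACE / HTTP/1.1"))          # False: not allowed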

Signature-based detection compares traffic patterns against a database of known attack signatures. Each signature describes byte sequences or packet patterns that correspond to a specific attack. This is effective for known threats but must be updated frequently and cannot detect new, unknown (zero-day) attacks.
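A minimal sketch of signature matching in Python. The two signatures are invented stand-ins; real systems hold thousands of signatures and use efficient multi-pattern matchers (e.g., Aho-Corasick) rather than a linear scan.

    SIGNATURES = {
        b"\x90\x90\x90\x90\x90\x90": "x86 NOP sled (possible shellcode)",
        b"' OR '1'='1":              "classic SQL injection probe",
    }

    def match_signatures(payload):
        """Return the names of all known signatures found in a payload."""
        return [name for sig, name in SIGNATURES.items() if sig in payload]

    print(match_signatures(b"username=' OR '1'='1&password=x"))
    # ['classic SQL injection probe']
    print(match_signatures(b"hello world"))  # []: a zero-day goes unseen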

Anomaly-based detection learns what "normal" traffic looks like and flags deviations. Examples include port scanning activity, unusual protocol mixes, or abnormally high traffic volumes. The main challenge is avoiding false positives, since legitimate changes in behavior can look like anomalies.
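A toy anomaly detector (Python; the baseline data and the three-standard-deviation threshold are arbitrary example choices) that flags abnormal traffic volume:

    import statistics

    def is_anomalous(history, current, threshold=3.0):
        """Flag values more than `threshold` std deviations from baseline."""
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
        return abs(current - mean) / stdev > threshold

    # Baseline: bytes/minute observed during normal operation.
    baseline = [980, 1010, 1005, 995, 1020, 990, 1000, 1015]
    print(is_anomalous(baseline, 1030))   # False: within normal variation
    print(is_anomalous(baseline, 25000))  # True: possible exfiltration or scan

The false-positive problem is visible even here: any legitimate but unusual burst of traffic would trip the same threshold.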

Challenges for IDS/IPS

Deploying IDS and IPS systems at scale introduces several practical challenges:

- Keeping up with high traffic volumes; an inline IPS that falls behind adds latency or drops traffic
- Encrypted traffic, which hides payloads unless TLS interception is used
- False positives that flood analysts with alerts, and false negatives that miss real attacks
- Evasion techniques such as fragmentation, traffic splitting, and timing tricks
- Keeping signature databases current as new attacks appear

Next-Generation Firewalls (NGFW)

NGFWs combine multiple capabilities into one platform: stateful inspection, deep packet inspection, intrusion prevention, TLS inspection, and application and user awareness.

They identify applications by analyzing:

- Protocol behavior and message formats rather than just port numbers
- TLS metadata such as the SNI hostname and server certificates
- Known service addresses and traffic signatures
- Traffic patterns such as packet sizes and timing

However, NGFW application identification can be evaded by traffic obfuscation (tunnels inside TLS, domain fronting, protocol mimicking).

The key capability is distinguishing applications that all use the same port. Traditional firewalls see "HTTPS traffic on port 443." NGFWs can distinguish Zoom from Dropbox from Netflix, even though all use HTTPS on port 443, and apply different policies to each.
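A highly simplified sketch of this in Python. It assumes the TLS SNI hostname has already been extracted from the ClientHello, and the hostname-to-application map and policies are invented; real NGFWs combine many signals, since SNI can be absent, encrypted (ECH), or forged, as the evasion note above warns.

    APP_SIGNATURES = {
        "zoom.us":     "Zoom",
        "dropbox.com": "Dropbox",
        "netflix.com": "Netflix",
    }

    APP_POLICY = {"Zoom": "allow", "Dropbox": "block", "Netflix": "rate-limit"}

    def identify_app(sni_hostname):
        for suffix, app in APP_SIGNATURES.items():
            if sni_hostname == suffix or sni_hostname.endswith("." + suffix):
                return app
        return "unknown"

    # Three flows, all TLS on port 443, but each gets a different policy.
    for host in ["us05web.zoom.us", "content.dropbox.com", "api.github.com"]:
        app = identify_app(host)
        print(host, "->", app, "->", APP_POLICY.get(app, "default-allow"))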

NGFWs also cannot see which local process on a host created the traffic. That level of visibility requires host-based firewalls.

Application Proxies

An application proxy sits between clients and servers as an intermediary. The client connects to the proxy, which then opens a separate connection to the real server. This means the proxy terminates one connection and initiates another.
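A bare-bones TCP relay illustrating the two-connection structure (Python, standard-library sockets; the listen and server addresses are placeholders). A real application proxy would parse and validate the protocol rather than blindly copying bytes.

    import socket, threading

    LISTEN_ADDR = ("127.0.0.1", 8080)   # where clients connect
    SERVER_ADDR = ("192.0.2.80", 80)    # the real server (example address)

    def pipe(src, dst):
        """Copy bytes one way until either side closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def serve_one(client):
        upstream = socket.create_connection(SERVER_ADDR)  # second connection
        threading.Thread(target=pipe, args=(client, upstream)).start()
        pipe(upstream, client)  # relay replies back to the client

    with socket.create_server(LISTEN_ADDR) as listener:
        while True:
            conn, _ = listener.accept()  # proxy terminates this connection...
            threading.Thread(target=serve_one, args=(conn,)).start()

Because all bytes pass through `pipe()`, this is the natural place to add protocol validation, content filtering, and logging, which is exactly where real proxies hook in.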

A proxy can terminate TLS and inspect plaintext, but only if configured (and clients trust the proxy’s root certificate).

Application proxies can:

- Validate that traffic conforms to the expected protocol
- Authenticate users before forwarding any traffic
- Filter or rewrite content (e.g., strip dangerous attachments)
- Hide internal network structure, since servers only ever see the proxy
- Log complete application-level transactions

The drawbacks are that proxies must understand each protocol in detail and can become performance bottlenecks if they handle large amounts of traffic.

Host-Based Firewalls

Host-based (or personal) firewalls run on individual systems instead of at the network perimeter. They integrate with the operating system so they can associate each network connection with a specific executable.

Host-based firewalls can:

- Apply per-application rules, allowing or denying traffic based on which program generated it (sketched below)
- See traffic after it is decrypted (or before it is encrypted) on the host
- Protect machines that leave the corporate network, such as laptops
- Block unauthorized programs from communicating at all
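The per-application idea can be sketched as rules keyed by executable path (Python; the paths and rules are invented examples, and a real host firewall hooks the operating system's network stack to learn which process owns each connection):

    # executable path -> predicate over (dst_host, dst_port)
    app_rules = {
        "/usr/bin/ssh":     lambda host, port: port == 22,
        "/usr/bin/firefox": lambda host, port: port in (80, 443),
    }

    def allow_connection(process_path, dst_host, dst_port):
        rule = app_rules.get(process_path)
        return bool(rule) and rule(dst_host, dst_port)

    print(allow_connection("/usr/bin/ssh", "github.com", 22))       # True
    print(allow_connection("/usr/bin/firefox", "example.com", 25))  # False
    print(allow_connection("/tmp/malware", "evil.example", 443))    # False:
    # unknown executables have no rule, so they are denied by default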

Their limitations include:

- Malware running with administrative privileges can disable or reconfigure them
- Each host must be configured and kept up to date, which is hard at scale
- Users may click through prompts and approve rules they don't understand
- They protect only the host they run on, not the rest of the network

Host-based firewalls work best as part of defense in depth, combined with network firewalls and other controls.

Zero Trust Architecture (ZTA)

The Problem: Deperimeterization

The traditional perimeter model assumed a clear boundary between trusted internal networks and untrusted external networks.

This model is breaking down for several reasons:

- Cloud services host data and applications outside the corporate perimeter
- Remote and mobile work means users routinely connect from outside networks
- Contractors, partners, and personal (BYOD) devices blur who is “inside”
- Attackers who breach the perimeter once can then move laterally with little resistance
- Insider threats originate on the “trusted” side to begin with

The assumption that "inside equals safe" is no longer valid.

Zero Trust Principles

Zero trust abandons the idea that being "inside" a network is enough to be trusted. Instead, each access request is evaluated independently using identity, device state, and context, regardless of the source network.

Core principles of zero trust:

- Never trust, always verify: authenticate and authorize every request, regardless of where it comes from
- Least privilege: grant only the minimum access each identity needs
- Assume breach: design so that a compromised host or account does limited damage
- Evaluate context: consider user identity, device health, location, and behavior for each decision
- Continuously monitor and log all access

In practice, implementing zero trust is challenging. Ideally, security would be built into applications themselves, with applications authenticating users, verifying authorization, and encrypting data end-to-end. But most existing applications weren't designed this way.

As a practical approach, organizations often implement these ideas through Zero Trust Network Access (ZTNA) systems. These create VPN-like connections between authenticated devices, enforce strict access control policies, monitor and log all access, and use multi-factor authentication. Unlike traditional VPNs that often grant broad network access, ZTNA restricts users and devices to only the specific applications they're authorized to reach.
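A toy per-request policy check in the spirit of zero trust (Python; the applications, roles, and requirements are all invented). Every request is evaluated on identity, authentication strength, and device state, with no shortcut for being "inside the network":

    POLICY = {
        # application -> access requirements (example values)
        "payroll": {"roles": {"hr"},        "mfa": True,  "managed": True},
        "wiki":    {"roles": {"hr", "eng"}, "mfa": False, "managed": False},
    }

    def authorize(user_role, did_mfa, device_managed, app):
        """Evaluate one request on identity, auth strength, device state."""
        req = POLICY.get(app)
        if req is None:
            return False                        # unknown application: deny
        return (user_role in req["roles"]
                and (did_mfa or not req["mfa"])
                and (device_managed or not req["managed"]))

    print(authorize("hr",  True,  True,  "payroll"))  # True
    print(authorize("hr",  False, True,  "payroll"))  # False: MFA required
    print(authorize("eng", True,  True,  "payroll"))  # False: wrong role
    print(authorize("eng", False, False, "wiki"))     # True

Note the ZTNA contrast with a VPN: passing the check grants access to one application, not to a network.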

Microsegmentation in zero trust

Traditional segmentation divides the network into a handful of zones (DMZ, internal, guest). Microsegmentation divides it into very small segments, potentially one per application or per individual virtual machine.

In a microsegmented environment, a compromised web server can't reach database servers for other applications, a compromised workstation can't scan the network or move laterally, and each workload has precisely defined communication policies. This is often enabled by software-defined networking and virtualization technologies. In many deployments, microsegmentation is enforced by distributed firewalls inside the hypervisor or container runtime rather than by perimeter firewalls.
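Microsegmentation policy can be pictured as an explicit allowlist of workload-to-workload flows with everything else denied. A minimal Python sketch, with invented workload names and ports:

    # Only these exact flows are permitted; everything else is dropped.
    ALLOWED_FLOWS = {
        ("web-frontend", "app-server", 8443),
        ("app-server",   "orders-db",  5432),
    }

    def flow_allowed(src_workload, dst_workload, dst_port):
        return (src_workload, dst_workload, dst_port) in ALLOWED_FLOWS

    print(flow_allowed("web-frontend", "app-server", 8443))  # True
    print(flow_allowed("web-frontend", "orders-db", 5432))   # False: the web
    # tier may never talk to the database directly, even after a compromise

Compared with the zone matrix shown earlier, the unit of policy here is an individual workload rather than a whole network segment.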

Microsegmentation supports zero trust by ensuring that even if an attacker gains initial access, they're contained within a very limited environment.

Defense in Depth

Modern network security relies on multiple layers of protection so that one failure does not compromise the entire environment. Key layers include:

- Perimeter defenses: firewalls, NAT, and ingress/egress filtering
- Network segmentation and microsegmentation to contain breaches
- IDS/IPS to detect and block attacks in transit
- Host-based firewalls and endpoint protection on individual systems
- Application proxies and application-level authentication
- Encryption of data in transit, plus monitoring and logging throughout

These layers work together to create resilience rather than relying on any single security mechanism.


Terms you should know

NAT, translation table, packet filter, screening router, ingress filtering, egress filtering, stateful packet inspection (SPI), security zone, DMZ, network segmentation, deep packet inspection (DPI), deep content inspection (DCI), TLS interception, intrusion detection system (IDS), intrusion prevention system (IPS), protocol-based detection, signature-based detection, anomaly-based detection, next-generation firewall (NGFW), application proxy, host-based firewall, deperimeterization, zero trust, Zero Trust Network Access (ZTNA), microsegmentation, defense in depth