pk.org: CS 419/Lecture Notes

Firewalls

Protecting the network

Paul Krzyzanowski – November 13, 2024

A firewall protects the junction between different network segments, most typically between an untrusted network (e.g., external Internet) and a trusted network (e.g., internal network). Two approaches to firewalling are packet filtering and proxies.

A normal router has the task of determining how to route a packet. That is, a router is connected to two or more networks, each connected to a different port on the router. An IP packet is received on one port and the router needs to determine which port to send it to.

A packet filter, also known as a screening router, is a router that not only selects the route for a packet but also determines whether the packet should be routed or dropped based on specific rules. These rules are applied to the packet's IP header, TCP/UDP header, and the network interface (port) where the packet was received. Packet filtering is typically performed by a border router (also called a gateway router), which controls the flow of traffic between an internal network and an external network, such as the Internet. The border router determines whether a packet should be forwarded to its destination or rejected, helping protect the internal network from unauthorized access.

The basic principle of firewalls is to never allow a direct inbound connection from an originating host on the Internet to an internal host; all traffic must flow through a firewall and be inspected.

The packet filter evaluates a set of rules to determine whether to drop or accept a packet. This set of rules forms an access control list, often called a chain. Strong security follows a default deny model, where packets are dropped unless some rule in the chain specifically permits them.
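As a concrete sketch, a default-deny chain can be modeled as an ordered list of rules where the first matching rule decides the packet's fate; the rule fields and names below are invented for illustration, not any particular firewall's syntax:

```python
# Hypothetical sketch of default-deny rule-chain evaluation.
# The first rule that matches a packet decides its fate; if no
# rule matches, the packet is dropped (default deny).
from dataclasses import dataclass

@dataclass
class Rule:
    action: str          # "accept" or "drop"
    src: str = "*"       # source IP ("*" = any)
    dst_port: int = 0    # destination port (0 = any)
    proto: str = "*"     # "tcp", "udp", or "*"

def matches(rule, pkt):
    return ((rule.src == "*" or rule.src == pkt["src"]) and
            (rule.dst_port == 0 or rule.dst_port == pkt["dst_port"]) and
            (rule.proto == "*" or rule.proto == pkt["proto"]))

def evaluate(chain, pkt):
    for rule in chain:
        if matches(rule, pkt):
            return rule.action
    return "drop"        # default deny: nothing matched

chain = [
    Rule("accept", dst_port=25, proto="tcp"),   # allow inbound SMTP
    Rule("drop", src="203.0.113.7"),            # block a known-bad host
]
```

Note that rule order matters: a packet from the blocked host to port 25 would still be accepted by the first rule, which is why real chains put the most specific deny rules first.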

First-generation packet filters implemented stateless inspection. A packet is examined on its own, with no context from previously seen packets.

Second-generation packet filters

Second-generation packet filters, also known as stateful packet inspection (SPI) firewalls, improve upon first-generation firewalls by tracking the state of active connections and making decisions based on the context of previously seen packets. These firewalls can monitor sessions, understand the relationship between packets, and apply rules based on the state of a connection. By maintaining a state table of active sessions and their statuses, SPI firewalls enhance security and functionality, ensuring that traffic is allowed only when it corresponds to a legitimate, established connection.
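A minimal sketch of the state-table idea, assuming a simplified connection 4-tuple and invented function names: outbound connections are recorded, and inbound packets are accepted only if they belong to an established session.

```python
# Toy stateful packet filter: a state table of active sessions.
# Entries are (local_ip, local_port, remote_ip, remote_port).

state_table = set()

def outbound(local_ip, local_port, remote_ip, remote_port):
    """Record a connection initiated from the inside and let it out."""
    state_table.add((local_ip, local_port, remote_ip, remote_port))
    return "accept"

def inbound(remote_ip, remote_port, local_ip, local_port):
    """Accept inbound traffic only if it matches an established session."""
    if (local_ip, local_port, remote_ip, remote_port) in state_table:
        return "accept"
    return "drop"     # unsolicited inbound traffic
```

A real SPI firewall also tracks TCP state transitions and expires idle entries; this sketch keeps only the lookup that distinguishes return traffic from unsolicited traffic.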

Key features of SPI firewalls include:

- Maintaining a state table of active sessions and their statuses.
- Allowing inbound traffic only when it belongs to an established connection.
- Tracking related connections, such as the additional ports some protocols negotiate.

Third-Generation Packet Filters

Traditional packet filters primarily inspect packet headers up to the transport layer (e.g., examining TCP/UDP protocols and port numbers) to make routing or filtering decisions. Third-generation packet filters add deep packet inspection (DPI), enabling firewalls to go beyond examining network and transport-layer headers and analyze the actual application data within the packets. This capability allows these firewalls to make decisions based on the specific contents of network traffic.

Deep Packet Inspection (DPI)

Deep packet inspection examines the application-layer data within packets to validate protocols, enforce policies, and detect malicious content. For instance, DPI firewalls can:

- Identify application-layer protocols (e.g., HTTP, FTP) regardless of the port they are running on.
- Apply application-specific rules, such as checking for malformed URLs or blocking certain types of content, like suspicious Java applets or ActiveX controls.
- Detect security threats, such as malicious payloads or protocol anomalies, making DPI a core feature of modern Intrusion Prevention Systems (IPS).

DPI focuses on analyzing individual packets in real-time, providing protocol validation and some content filtering without reassembling large chunks of data.
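For illustration, a toy DPI classifier might identify the application protocol from the payload itself rather than the port number; the heuristics below are simplified and invented:

```python
# Toy protocol identification from payload bytes, ignoring port numbers.
# Real DPI engines use far richer parsers and state machines.

def identify_protocol(payload: bytes) -> str:
    if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"PUT ")):
        return "http"      # looks like an HTTP request line
    if payload.startswith(b"SSH-"):
        return "ssh"       # SSH version banner
    if payload.startswith(b"220 ") and b"FTP" in payload[:64]:
        return "ftp"       # FTP server greeting banner
    return "unknown"
```

This is why DPI can flag, say, an SSH session tunneled over port 80: the classification comes from content, not from the port.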

Deep Content Inspection (DCI)

Deep content inspection (DCI) builds on the principles of DPI but goes further by buffering and analyzing large chunks of data from multiple packets. This allows the firewall to handle complete objects (like files or encoded messages) rather than individual packets. For example, DCI can:

- Reassemble file downloads or email attachments and scan them for malware.
- Decode base64-encoded content (commonly used in web and email traffic) to reveal its actual payload for analysis.
- Perform signature-based or heuristic analysis on the full object to detect advanced threats, such as embedded malware or hidden exploits.
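The reassemble-then-scan idea can be sketched as follows; the signature and helper names are invented for illustration:

```python
import base64

# Toy DCI: buffer the chunks of a base64-encoded attachment that arrived
# in separate packets, reassemble them, decode, and scan the decoded
# object for a known-bad byte signature.

BAD_SIGNATURE = b"MZ\x90\x00"   # example: a Windows executable header prefix

def scan_attachment(chunks):
    encoded = b"".join(chunks)           # reassemble across packets
    decoded = base64.b64decode(encoded)  # reveal the actual payload
    return "block" if BAD_SIGNATURE in decoded else "allow"
```

The key point is that no single packet contains the signature in scannable form; only after reassembly and decoding does the malicious content become visible.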

Key Distinction Between DPI and DCI

DPI inspects individual packets in real time, while DCI buffers and reassembles data across multiple packets so that it can analyze complete objects, such as files or decoded attachments.

Application proxies

An application proxy is software that presents the same protocol to the outside network as the application for which it is a proxy. For example, a mail server proxy will listen on port 25 and understand SMTP, the Simple Mail Transfer Protocol. The primary job of the proxy is to validate the application protocol and thus guard against protocol attacks (extra commands, bad arguments) that may exploit bugs in the service. Valid requests are then regenerated by the proxy to the real application that is running on another server and is not accessible from the outside network.

Application proxies are usually installed on dual-homed hosts. This is a term for a system that has two "homes," or network interfaces: one for the external network and another for the internal network. Traffic never passes directly between the two networks; only the proxy can communicate with the internal network. Unlike DPI, a proxy may modify the data stream, such as stripping headers, modifying machine names, or even restructuring the commands in the protocol used to communicate with the actual servers (that is, it does not have to relay everything that it receives).
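A hypothetical sketch of the validation step in an SMTP proxy, assuming a small command allowlist (function names invented; the 512-byte command-line limit comes from RFC 5321):

```python
# Toy SMTP proxy validation: only well-formed commands from an allowlist
# are regenerated toward the real mail server; anything else is rejected
# before it can reach (and possibly exploit) the internal service.

ALLOWED = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT"}
MAX_LINE = 512   # SMTP command-line length limit (RFC 5321)

def validate_command(line: str):
    if len(line) > MAX_LINE:
        return None                      # possible buffer-overflow attempt
    verb = line.split(" ", 1)[0].upper()
    if verb not in ALLOWED:
        return None                      # unknown or extra command
    return f"{verb}{line[len(verb):]}"   # regenerate a normalized command
```

Because the proxy regenerates the command rather than relaying the raw bytes, malformed or oversized input never reaches the real server.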

Attacks

Like routers, firewalls are just special-purpose computers (and almost always integrated into routers). As with any software, they can have vulnerabilities that may be exploited.

For example, between November and December 2024, Palo Alto Networks disclosed a few high-severity vulnerabilities in PAN-OS, the software that runs across their family of firewalls.

CVE-2024-0012: Allows an attacker to bypass authentication by setting an X-PAN-AUTHCHECK header to the value "off" in a web request.

CVE-2024-9474 is a command injection vulnerability via manipulation of a username field that allows an attacker to gain administrative privileges. The culprit is this line of PHP code:

return $p->pexecute("/usr/local/bin/pan_elog -u audit -m $msg -o $username");

Chaining this together with CVE-2024-0012 enables an unauthenticated attacker to run commands on the firewall with root privileges.
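To illustrate why that line is dangerous, here is a Python analogue (not Palo Alto's actual code): interpolating an attacker-controlled username into a shell command string lets shell metacharacters introduce extra commands, while passing an argument vector keeps the input inert.

```python
# Illustrative analogue of the command-injection flaw. Neither function
# executes anything here; they just build the command a shell would see.

def vulnerable(username):
    # DANGEROUS: a username like "admin; touch /tmp/pwned" makes the
    # shell run a second command after the semicolon.
    return f"/usr/local/bin/pan_elog -u audit -o {username}"

def safe_argv(username):
    # The username is a single argv element; metacharacters such as
    # ';' or '|' are passed literally and never interpreted by a shell.
    return ["/usr/local/bin/pan_elog", "-u", "audit", "-o", username]
```

The general lesson: never build shell command strings from untrusted input; pass arguments as a vector (or escape them rigorously).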

CVE-2024-3393: Allows an attacker to send a malicious packet that will reboot the firewall, resulting in a denial of service attack on the networks behind the firewall. Doing this repeatedly will cause the firewall to enter maintenance mode.

DMZs and Micro-Segmentation

Each network connected to a router can be thought of as a security zone, representing a group of systems with a similar level of trust. Two basic zones are the internal zone and the external zone. The internal zone includes the organization's systems, which are generally trusted, while the external zone represents untrusted systems on the Internet. A gateway router with a packet filter is used to control the flow of traffic between these zones, screening and enforcing security rules.

The primary danger in this design is that all internal systems are on the same local area network and share the same range of addresses. If attackers successfully compromise one system, they could gain full access to the rest of the internal network. Additionally, improperly configured firewall rules may allow unauthorized external requests to reach internal systems.

To address these risks, many organizations implement a screened subnet architecture, which introduces a separate network segment known as the DMZ (demilitarized zone). The DMZ contains systems that offer externally accessible services, such as web servers, email servers, or DNS servers, isolating them from the internal network. This separation ensures that systems in the internal network, which do not provide external-facing services, are not directly accessible from the Internet. In this architecture:

- The DMZ is protected by a screening router, which controls packet flow based on the interface where packets arrive and their header values.
- Internal systems are placed in a separate, more secure subnet that is shielded from direct Internet access. A packet filter controls access between these systems and those in the DMZ as well as the external network.
- A single firewall can protect both networks, since the packet's origin can be identified by its incoming port and filtering decisions can be made based on that.

Router Access Control Policies

The router enforces strict access control policies to manage traffic between the external network, the DMZ, and the internal network:

  1. From the Internet (External Network):
     - No direct traffic is allowed into the internal network unless it is a response to a request initiated internally (e.g., a DNS query or TCP return traffic).
     - Only packets destined for valid services in the DMZ (specific IP addresses, protocols, and ports) are allowed.
     - Packets masquerading as internal traffic are rejected.

  2. From the DMZ:
     - Only designated systems in the DMZ that require access to internal network services are allowed through, and their access is limited to specific services.
     - Outbound traffic from the DMZ to the Internet may be restricted to prevent attackers from downloading tools or malware if a DMZ system is compromised.
     - This segmentation limits the ability of an intruder who compromises a DMZ system to further attack internal systems.

  3. From the Internal Network:
     - Internal traffic is typically allowed to flow to the DMZ or the Internet, though restrictions may be applied to block specific external services or prevent certain activities (e.g., torrenting or accessing prohibited websites).
     - Internal users generally have unrestricted access to DMZ systems, including public-facing services and potentially additional internal-facing services, such as login portals.
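The zone-to-zone policies above can be sketched as a lookup table; the policy labels are invented, and real rules would also match addresses, ports, and connection state:

```python
# Toy zone-to-zone policy table for a screened subnet architecture.
# Any zone pair not listed is denied by default.

POLICY = {
    ("internet", "internal"): "deny",            # only replies to internal requests
    ("internet", "dmz"):      "allow-services",  # only valid DMZ services
    ("dmz", "internal"):      "restricted",      # designated systems/services only
    ("dmz", "internet"):      "restricted",      # limit tool/malware downloads
    ("internal", "dmz"):      "allow",
    ("internal", "internet"): "allow",           # possibly content-filtered
}

def zone_policy(src_zone, dst_zone):
    return POLICY.get((src_zone, dst_zone), "deny")
```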

Micro-Segmentation

The separation of internal and DMZ networks can be further enhanced with micro-segmentation, where additional subnets or VLANs (Virtual Local Area Networks) are created to isolate different groups or functions within an organization. Examples include separate subnets for departments like Development, Human Resources, Legal, and Marketing. Traffic between these subnets passes through firewalls, allowing organizations to enforce strict access controls and limit exposure. For instance:

- Developers' systems may only access specific development servers or repositories.
- Marketing systems may be isolated from legal systems, minimizing the risk of lateral movement during an attack.

Micro-segmentation can also be applied within the DMZ itself. For example, each system or service in the DMZ (e.g., a web server, a mail server) can be placed in its own subnet, restricting the ability of a compromised DMZ system to attack others. This additional layer of defense reduces the attack surface and provides granular control over inter-system communication.

With micro-segmentation, organizations can implement a layered security model, ensuring that even if one segment of the network is compromised, the damage is contained, and the risk of further breaches is minimized.

Deperimeterization and Zero Trust

Deperimeterization refers to the diminishing effectiveness of traditional network boundaries (perimeters) in protecting systems and their resources. Traditionally, networks relied on perimeter security models that assumed systems inside the network could be trusted and focused on securing the boundary between the internal network and the external world (e.g., the Internet). This model became less effective due to several factors:

- The rise of mobile devices, where users move a device (such as a laptop or phone) between the home, office, and possibly various Wi-Fi hotspots (e.g., Starbucks, airport lounge). The same device connects to trusted as well as untrusted networks at different times.
- The rise of remote work, which requires users to access internal resources from outside the corporate network.
- The widespread use of cloud services, which store sensitive data outside traditional perimeters.
- The increased use of unvetted software and deceptive downloads, where trusted users may inadvertently download malware.
- Cyber threats that exploit compromised internal devices or privileged users to move laterally within a network.

As a result, the concept of a "secure perimeter" began to break down, leading to the development of the Zero Trust Security Model.

The Zero Trust Security Model

The Zero Trust model is a modern approach to security that eliminates the assumption that systems, users, or devices inside a network are inherently trustworthy. Instead, Zero Trust operates on the principle of "never trust, always verify." Under this model:

  1. Verification: Every connection to a service, whether from inside or outside the network, is thoroughly authenticated, authorized, and encrypted.

  2. Least Privilege Access: Users and devices are granted the minimum level of access necessary to perform their functions, reducing the attack surface.

Zero Trust assumes that breaches are inevitable or may have already occurred and focuses on minimizing damage and maintaining security through strict access controls.

Zero Trust Network Access (ZTNA) is a key component of the Zero Trust model, specifically focused on securing network access. It replaces the role of traditional Virtual Private Networks (VPNs) by providing more granular and secure access controls that focus on connecting to specific systems hosting services rather than providing access to an entire subnet. Key characteristics of ZTNA include:

  1. Granular Access Control: Users and devices are granted access only to specific applications, services, or resources they are authorized to use, rather than broad network access.

  2. Dynamic Authentication and Authorization: Access requests are verified through mechanisms like multi-factor authentication (MFA), device security checks, and behavioral analysis.

  3. No Implicit Trust: ZTNA treats all access requests as originating from an untrusted network, whether the user is inside or outside the organization’s physical network.

  4. Application-Centric Security: Instead of granting access to an entire network, ZTNA connects users directly to specific applications or services, hiding the rest of the network from view. This reduces the risk of lateral movement by attackers. Think of it as a host-to-host VPN with a firewall that restricts what a host can access on the other host.
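As a sketch, a ZTNA-style broker's per-request decision might look like the following; the fields and entitlements are invented for illustration:

```python
# Toy ZTNA access decision: every request is evaluated on identity,
# device posture, and per-application authorization. Network location
# confers no trust, and users only ever reach entitled applications.

ENTITLEMENTS = {"alice": {"payroll-app"}, "bob": {"wiki"}}

def authorize(user, mfa_passed, device_compliant, app):
    if not (mfa_passed and device_compliant):
        return "deny"                   # dynamic authentication checks failed
    if app not in ENTITLEMENTS.get(user, set()):
        return "deny"                   # least privilege: only entitled apps
    return "allow"                      # broker connects user to this app only
```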

Host-based firewalls

Firewalls intercept all packets entering or leaving a local area network. A host-based firewall, on the other hand, runs on a user's computer. Unlike network-based firewalls, a host-based firewall can associate network traffic with individual applications. Its goal is to prevent malware from accessing the network. Only approved applications will be allowed to send or receive network data. Host-based firewalls are particularly useful in light of deperimeterization: the boundaries of external and internal networks have become fuzzy as people connect their mobile devices to different networks and import data on flash drives. A concern with host-based firewalls is that if malware manages to get elevated privileges, it may be able to shut off the firewall or change its rules.

Intrusion detection/prevention systems

An enhancement to screening routers is the use of intrusion detection systems (IDS). Intrusion detection systems are often part of DPI firewalls and try to identify malicious behavior. There are three forms of IDS:

  1. A protocol-based IDS validates specific network protocols for conformance. For example, it can implement a state machine to ensure that messages are sent in the proper sequence, that only valid commands are sent, and that replies match requests.

  2. A signature-based IDS is similar to a PC-based virus checker. It scans the bits of application data in incoming packets to try to discern if there is evidence of "bad data", which may include malformed URLs, extra-long strings that may trigger buffer overflows, or bit patterns that match known viruses.

  3. An anomaly-based IDS looks for statistical aberrations in network activity. Instead of having predefined patterns, normal behavior is first measured and used as a baseline. An unexpected use of certain protocols, ports, or even amount of data sent to a specific service may trigger a warning.

Anomaly-based detection implies that we know normal behavior and flag any unusual activity as bad. This is difficult since it is hard to characterize what normal behavior is, particularly since normal behavior can change over time and may exhibit random network accesses (e.g., people web surfing to different places). Too many false positives will annoy administrators and lead them to disregard alarms.
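A minimal sketch of the baseline idea: learn the normal range of some metric (say, bytes per minute to a service) and flag values far outside it. The three-standard-deviation threshold is an invented example, and picking it poorly is exactly what produces the false positives described above.

```python
from statistics import mean, stdev

# Toy anomaly detector: learn a baseline from observed samples, then
# flag any new value more than k standard deviations from the mean.

def build_baseline(samples):
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    mu, sigma = baseline
    return abs(value - mu) > k * sigma
```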

A signature-based system employs misuse-based detection. It knows bad behavior: the rules that define invalid packets or invalid application layer data (e.g., ssh root login attempts). Anything else is considered good.
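A toy misuse-detection scan over payload bytes, with invented signatures, shows the pattern-matching flavor of this approach:

```python
import re

# Toy signature-based (misuse) detection: scan a payload against a
# list of known-bad patterns. The signatures are illustrative only.

SIGNATURES = [
    re.compile(rb"\.\./\.\./"),          # directory traversal in a URL
    re.compile(rb"ssh.*root", re.I),     # e.g., flag ssh root login attempts
    re.compile(rb"A{256,}"),             # very long run: buffer-overflow probe
]

def inspect(payload: bytes) -> bool:
    return any(sig.search(payload) for sig in SIGNATURES)
```

Anything that matches no signature is considered good, which is the mirror image of anomaly detection's "anything unusual is bad."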

Intrusion Detection Systems (IDS) monitor traffic entering and leaving the network and report any discovered problems. Intrusion Prevention Systems (IPS) serve the same function but are positioned to sit between two networks like a firewall and can actively block traffic that is considered to be a threat or policy violation.

| Type | Description |
|------|-------------|
| Firewall (screening router) | 1st-generation packet filter that filters packets between networks. Blocks/accepts traffic based on IP addresses, ports, and protocols. |
| Stateful inspection firewall | 2nd-generation packet filter. Like a screening router but also takes into account TCP connection state and information from previous connections (e.g., related ports for TCP). |
| Deep packet inspection firewall | 3rd-generation packet filter. Examines application-layer protocols. |
| Application proxy | Gateway between two networks for a specific application. Prevents direct connections to the application from outside the network. Responsible for validating the protocol. |
| IDS/IPS | Can usually do what a stateful inspection firewall does, plus examine application-layer data for protocol attacks or malicious content. |
| Host-based firewall | Typically a screening router with per-application awareness. Sometimes includes anti-virus software for application-layer signature checking. |
| Host-based IPS | Typically allows real-time blocking of remote hosts performing suspicious operations (port scanning, ssh logins). |
