Disclaimer: This study guide attempts to touch upon the most important topics that may be covered on the exam but does not claim to necessarily cover everything that one needs to know for the exam. Finally, don't take the one hour time window in the title literally.
Last update: Wed Nov 26 14:28:20 2025
Week 9-10: Malware
Malware is software intentionally designed to perform unwanted, unexpected, or harmful actions on a target system. Three requirements: intentional (bugs don't count), unwanted by the legitimate owner, and causes harm or performs unauthorized actions.
Zero-Day and N-Day Exploits
Malware often relies on software vulnerabilities to gain access or escalate privileges. Two common terms describe how attackers exploit flaws based on when they become known.
-
Zero-day exploit: Targets a previously unknown vulnerability for which no patch or mitigation exists. Defenders have “zero days” to prepare, making such attacks difficult to block.
-
N-day exploit: Uses a publicly known vulnerability that already has a patch or workaround available. Attacks succeed because organizations fail to update or secure affected systems.
Zero-day attacks reflect gaps in vendor and researcher awareness; N-day attacks expose weaknesses in operational security and patch management. Both remain central to modern malware campaigns.
Malware Classification by Function
Self-Propagating Malware
The critical distinction is agency -- whether human action is required.
Virus: Attaches to host files (executables or documents with macros). Requires user action to spread—running infected programs or opening infected documents. When activated, seeks other files to infect.
Worm: A self-contained program that spreads autonomously across networks without user intervention. It scans for vulnerable systems and automatically attempts to infect them.
Key difference: Viruses need users to help them spread; worms spread on their own.
Stealth and Unauthorized Access
Trojan Horse: Appears to be legitimate software that users willingly install. It combines an overt purpose (cache cleaning, system optimization) with covert malicious actions (installing backdoors, spyware, ransomware).
Backdoor: Provides remote access bypassing normal authentication. It allows attackers to return to compromised systems at will.
Rootkit: Operates at the kernel or system level to evade detection. It intercepts system calls and lies to security tools, concealing files, processes, network connections, and registry entries.
Financial Malware
Ransomware: Encrypts files or locks systems, demanding payment for restoration. Some variants use double extortion—encrypting data while also exfiltrating it and threatening to publish stolen data.
Cryptojacking: Secretly uses the victim's computing resources to mine cryptocurrency. This causes degraded performance and increased power consumption.
Adware: Displays unwanted advertisements for revenue, often bundled with free software.
Data Theft
Spyware: Monitors user activity without consent. This includes operations like keylogging, screen capture, and browser monitoring.
Keylogger: Records every keystroke to capture passwords, credit card numbers, and private messages.
Information Stealer: Targets stored credentials, browser data, cryptocurrency wallets, and other valuable information.
Remote Control
Bot/Botnet: Infected computers (bots, also known as zombies) controlled remotely and organized into networks (botnets) for coordinated attacks. Used for DDoS, spam distribution, and credential stuffing.
Remote Access Trojan (RAT): Provides the attacker with comprehensive remote control: file access, screen viewing, webcam/microphone activation, and command execution.
Destructive Malware
Logic Bomb: Remains dormant until specific conditions trigger it (date, event, command).
Wiper: Destroys data and systems without a financial motive, often used in geopolitical conflicts.
Nuisance Malware
Scareware: Falsely claims the system is infected to push fake security software purchases.
Browser Hijacker: Modifies browser settings to redirect users and track browsing activity.
How Malware Spreads
Malware spreads through three broad categories of methods:
-
Network-based attacks exploiting technical vulnerabilities
-
User-assisted methods relying on human interaction
-
Supply chain compromises poisoning trusted distribution channels.
1. Network-Based Propagation
-
Exploit-based: Targets software vulnerabilities to gain access without user interaction
-
Password-based: Uses dictionary attacks, brute force, and credential stuffing
-
Zero-click attacks: Compromise a device without any user interaction. They exploit flaws in message parsing, image rendering, network protocol handling, or background services (e.g., iMessage, MMS, Wi-Fi stack, or baseband firmware). The victim receives malformed data and is infected without opening attachments or clicking links.
2. User-Assisted Propagation
-
Email attachments: Malicious files distributed via email with social engineering
-
Drive-by downloads: Automatic downloads when visiting compromised websites, exploiting browser vulnerabilities
-
USB/removable media: Malware spreads via infected devices using autorun or disguised files
Attackers also use domain deception to mislead users into visiting malicious sites.
-
Typosquatting: Registering domains with slight misspellings of legitimate names (e.g., gooogle.com).
-
Combosquatting: Registering domains that append trusted names with extra words (e.g., paypal-login.com).
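Typosquatting detection is often implemented as an edit-distance check against a list of trusted names. A minimal sketch (the threshold of 2 is an illustrative choice, not a standard value):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate: str, trusted: list[str], threshold: int = 2) -> bool:
    """Flag domains within a small edit distance of a trusted name
    (but not the trusted name itself)."""
    return any(0 < edit_distance(candidate, t) <= threshold for t in trusted)

print(looks_like_typosquat("gooogle.com", ["google.com"]))      # True
print(looks_like_typosquat("paypal-login.com", ["paypal.com"])) # False
```

Note that combosquats like paypal-login.com evade this check because their edit distance from the trusted name is large; detecting them requires substring or token matching instead.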
3. USB-Based Attacks
-
USB drop attacks: Attackers leave infected USB drives in public places, relying on curiosity or helpfulness. Users plug them in and trigger malware through autorun, malicious documents, or disguised executable files.
-
Malicious USB firmware (BadUSB-class attacks): Devices whose firmware has been altered to impersonate keyboards, network adapters, or storage devices. The OS trusts the device class, allowing attacks such as rapid keystroke injection, traffic redirection, or installing backdoors. Because firmware is not scanned by antivirus tools, these devices bypass most defenses.
-
Example: USB Rubber Ducky: A well-known keystroke-injection tool that looks like a USB drive but behaves like a programmable keyboard. It delivers scripted commands immediately when plugged in.
4. Supply Chain Attacks
-
Software updates: Compromising legitimate update mechanisms
-
Third-party libraries: Inserting malicious code into widely-used packages
Social Engineering
Social engineering manipulates human psychology rather than exploiting technical vulnerabilities. This is often the weakest link in security.
Psychological Manipulation Tactics
-
Urgency: Creates time pressure to bypass careful consideration
-
Authority: Leverages hierarchical power through impersonation
-
Fear: Threatens negative consequences (legal action, account suspension)
-
Curiosity: Exploits the drive to know more
-
Greed: Promises rewards or financial gain
-
Trust: Leverages relationships or a trustworthy appearance
Common Attack Vectors
-
Phishing: Mass emails impersonating legitimate organizations
-
Spear phishing: Targeted attacks using researched personal information
-
Vishing: Voice phishing via phone calls (fake tech support, bank security)
-
Smishing: SMS-based phishing with malicious links
-
Pretexting: Fabricated scenarios to obtain information
-
Quid pro quo: Offering services in exchange for information/access
The Malware Lifecycle
Modern malware operates through a six-stage lifecycle rather than as a single program.
Stage 1: Infection and Delivery
Getting malware onto the target system through exploiting vulnerabilities, social engineering, physical access, supply chain compromise, or drive-by downloads. The initial payload is often just a small first stage that downloads the real malware.
Stage 2: Dropper and Loader
Dropper (or downloader): Downloads and installs the main malware from a remote server. Small and obfuscated, it performs environment checks (VM detection, antivirus status, OS version) before proceeding.
Loader: Includes payload embedded within itself, encrypted or compressed. Unpacks and executes the hidden payload.
Advantages: Smaller initial payloads are easier to deliver, malware can be updated easily (it's a separate component), environment detection avoids sandboxes, and payload delivery is flexible.
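The environment checks a dropper performs before fetching its payload can be sketched as follows. This is illustrative only: the MAC prefixes and file paths below are a small example subset of real hypervisor artifacts, not an exhaustive or authoritative list.

```python
import os
import uuid

# OUI prefixes assigned to common hypervisors (illustrative subset).
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:50:56",  # VMware
                   "08:00:27")                          # VirtualBox

# Guest-tools artifacts a dropper might probe for (illustrative examples).
VM_FILES = ["/usr/bin/VBoxClient", "/usr/bin/vmware-toolbox-cmd"]

def likely_virtual_machine() -> bool:
    """Return True if common VM artifacts are visible on this host."""
    mac = uuid.getnode()  # 48-bit hardware address as an integer
    mac_str = ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8))
    if mac_str.startswith(VM_MAC_PREFIXES):
        return True
    return any(os.path.exists(p) for p in VM_FILES)

# A dropper would refuse to fetch its real payload when this returns True.
print(likely_virtual_machine())
```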
Stage 3: Persistence Mechanisms
Ensures malware survives reboots. Malware often establishes multiple mechanisms for redundancy.
Windows:
-
Registry Run keys (automatic execution at login)
-
Scheduled Tasks (execute at startup, login, or specific times)
-
Windows Services (run with SYSTEM privileges at boot)
-
DLL hijacking (malicious DLL in a location that is checked before the legitimate version)
-
Boot sector modification (infects master boot record or UEFI -- executes before OS loads)
Linux/macOS:
-
Cron jobs (scheduled execution)
-
Init scripts/systemd services (startup execution)
-
Modified shell configuration files (.bashrc, .profile)
-
Compromised system binaries
-
Launch Agents/Launch Daemons (macOS)
Cross-platform:
-
Browser extensions
-
Startup folders
-
Abuse of legitimate auto-start software
Stage 4: Trigger Conditions
Immediate execution: Runs as soon as installed (common for ransomware).
Time-based triggers:
-
Time bomb: Activates at a specific date/time
-
Logic bomb: Waits for specific conditions (account disabled, file deleted)
Event-based triggers: Banking website visits, accessing specific files, system idle, reboot count, and presence of analysis tools.
Manual triggers: Waits for commands from C2 server; operators decide when to activate based on reconnaissance.
Delayed activation evades time-limited sandbox analysis and enables synchronized attacks.
Stage 5: Payload Execution
What malware actually does—its core functionality:
-
Data manipulation: Encrypting (ransomware), deleting/corrupting (wipers), exfiltrating
-
System manipulation: Installing backdoors, modifying security settings, disabling antivirus, creating accounts
-
Resource abuse: Cryptocurrency mining, sending spam, launching DDoS attacks
-
Surveillance: Keylogging, screenshots, audio/video recording, network monitoring
Stage 6: Propagation
Viruses: Infect other files on the same system. Spreading requires user action—sharing and opening infected files.
Worms: Spread autonomously. Typical cycle: infect, scan network, test vulnerabilities, exploit, copy and execute, repeat. Creates exponential growth.
Common methods: Email (address book contacts), network shares, removable media, network exploits, peer-to-peer networks, social media, APIs with stolen credentials.
Command and Control (C2 or C&C) Mechanisms
Attackers need communication channels to send commands, receive stolen data, and update malware. This is essential for bots, which often sit idle until instructed to launch attacks or download new payloads. The attacker’s challenge is to maintain communication without being noticed or blocked.
C2 Communication Methods
Malware uses a variety of mechanisms to exchange data with its controllers. These channels must work across firewalls and blend into normal traffic. Common communication strategies include:
-
Direct connection: Malware contacts an attacker-controlled server, usually over HTTPS, so the traffic resembles ordinary web requests.
-
Domain Generation Algorithms (DGA): The malware computes large sets of potential domain names. Attackers register a few and wait for infected systems to locate them, which complicates blacklisting and takedown efforts.
-
DNS Tunneling: Data is encoded inside DNS queries sent to an attacker-controlled resolver. This is stealthy but low-bandwidth.
-
Social media and cloud services: Commands or data move through platforms such as X, GitHub, Dropbox, or Google Drive. These services generate traffic that looks legitimate.
-
Peer-to-Peer (P2P): Bots communicate with each other rather than a central server. This eliminates single points of failure but requires a discovery mechanism and often exposes bots to inbound connections.
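A DGA can be sketched in a few lines: malware and operator run the same deterministic algorithm, so both compute today's candidate list, but defenders cannot preregister or block the whole set. The seed string and domain count here are made-up illustrative values.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive a deterministic daily list of candidate C2 domains.
    The operator registers only one or two; infected hosts try them all."""
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# Same seed + same date -> same list on every infected machine.
print(generate_domains("examplebotnet", date(2025, 11, 26)))
```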
C2 Evasion Techniques
These techniques help hide the location of C2 servers, disguise traffic patterns, and avoid simple blocking rules:
-
Encryption: Protects the contents of C2 messages from inspection. This is standard in modern malware.
-
Beaconing: Infrequent check-ins that resemble routine software updates rather than constant communication.
-
Domain fronting: Uses a legitimate domain name in the TLS handshake while routing traffic to a hidden C2 server behind a content-delivery network.
-
Fast flux DNS: Rapidly rotates the IP addresses associated with a domain to make takedowns harder.
-
VPN or proxy routing: Masks the true location of C2 infrastructure by relaying traffic through multiple network layers.
Evading Detection and Analysis
Code Obfuscation Techniques
Crypters: Encrypt the malware so that only encrypted data appears on disk; it's decrypted only at runtime. Security tools scanning files see only encrypted content.
Packers: Tools that compress executables and add a small unpacking stub. The real code appears only after the program runs.
Polymorphism: The malware mutates its wrapper code (decryptor or unpacking stub) using techniques such as code reordering, instruction substitution, and junk-code insertion. The payload stays the same. Each copy looks different enough to evade signature-based detection.
Metamorphism: The malware rewrites its entire code body using the same types of transformations (reordering, substitution, junk instructions) so each copy has a different internal structure. There is no constant core to match, which makes signature-based detection significantly harder.
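The crypter/polymorphism idea (identical payload at runtime, different bytes on disk for every copy) can be sketched with a toy XOR scheme. This is purely illustrative; real crypters use stronger ciphers and mutate the decryption stub itself.

```python
import os

def pack(payload: bytes) -> tuple[bytes, bytes]:
    """Encrypt the payload with a fresh random XOR keystream.
    Each 'copy' ships a different key + ciphertext, so the bytes
    on disk differ even though the payload is identical."""
    key = os.urandom(16)
    cipher = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return key, cipher

def unpack(key: bytes, cipher: bytes) -> bytes:
    """Reverse the XOR at runtime to recover the original payload."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(cipher))

payload = b"SAME PAYLOAD EVERY TIME"
k1, c1 = pack(payload)
k2, c2 = pack(payload)
assert c1 != c2                                      # on-disk bytes differ
assert unpack(k1, c1) == unpack(k2, c2) == payload   # runtime payload identical
```

A file-signature scanner sees two unrelated-looking byte strings, which is why defenders rely on runtime (behavioral) detection against packed malware.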
Anti-Analysis Techniques
Virtual Machine Detection: Checks for VM artifacts (hardware IDs, VM-related files/processes, timing inconsistencies). Refuses to execute in detected VMs.
Sandbox Detection: Detects analysis environments through limited user activity, small number of files, short uptime. Remains dormant or behaves benignly.
Debugger Detection: Identifies debugging tools through API calls or debugging flags. Alters behavior or terminates when detected.
Time-based Evasion: Delays malicious activity using sleep functions or waits for specific dates/events. Evades automated analysis with time limits.
Side-Channel Attacks
Side-channel attacks exploit unintended signals rather than software flaws. Malware can leak data or receive commands by manipulating physical or observable system behavior, such as timing, power usage, or device LEDs.
An example is using the keyboard Caps Lock LED to blink encoded data (Bad Bunny), allowing a nearby camera or sensor to capture exfiltrated information. These channels bypass normal network defenses because they rely on observable side effects rather than network traffic.
Fileless Malware
Fileless malware operates entirely in memory without writing files to disk, making it significantly harder to detect since traditional antivirus scans files on disk.
PowerShell-based: Uses Windows PowerShell to download and execute code directly in memory.
Registry-based: Stores code in registry values rather than files.
Living off the land: Uses legitimate system tools (PowerShell, Windows Script Host) for malicious purposes.
Privilege Escalation
Once malware gains initial access, it often needs to escalate privileges to gain full system control and bypass security restrictions.
Kernel exploits: Exploits OS vulnerabilities for system-level access.
Privilege prompt bypasses: Avoid or subvert mechanisms that require user approval for elevated actions, such as Windows UAC or macOS authorization dialogs.
Ken Thompson's "Reflections on Trusting Trust" described perhaps the most insidious backdoor: one that evades even source-code inspection by hiding in the compiler itself, demonstrating that perfect security is impossible and trust is unavoidable at some level.
Defending Against Malware
No single defense is sufficient; each has gaps. Effective defense requires layered approaches.
Anti-Malware Software: Signature-Based Detection
Uses databases of known malware signatures: unique byte patterns identifying specific malware.
Strengths: Fast, accurate for known threats, low false positive rates.
Limitations: Requires updates for new threats; easily evaded through polymorphism and encryption.
Anti-Malware Software: Heuristic and Behavioral Analysis
Examines behavior and characteristics rather than exact signatures.
Static heuristics: Analyzes file structure without execution (suspicious API calls, unusual code patterns, obfuscation indicators).
Dynamic heuristics: Observes program behavior during execution (file modifications, registry changes, network connections, process creation).
Machine learning is often used to train models on malware/benign samples to identify suspicious characteristics. Heuristics are better at detecting new threats but have higher false positive rates and are more resource-intensive.
Sandboxing
Executes suspicious files in isolated environments to safely observe behavior.
Types: Virtual machines (complete OS isolation), application sandboxes (restrict program capabilities).
Benefits: Safe observation, detects unknown threats.
Application sandboxes are the dominant means of protecting mobile devices and are gradually being adopted across other systems. Virtual machines give anti-malware software writers a safe environment in which to test threats.
Limitations: Sophisticated malware can detect and evade sandboxes, or use techniques such as waiting several days or a certain number of reboots before activating.
Honeypots
Honeypots are isolated decoy systems that appear vulnerable or valuable but contain no real data. They attract attackers and record their activity, providing early warning of intrusions and insights into tools and techniques without risking production systems.
Access Control and Privilege Management
Restricting users' access to files and other system resources also restricts what malware running under those accounts can do, limiting the damage from successful infections:
-
Principle of least privilege: Minimum permissions necessary
-
User Account Control (UAC): Requires approval for administrative actions
-
Application whitelisting: Only approved applications can execute
-
Network segmentation: Isolates critical systems
Email Security
Email has long been a popular channel for social engineering, often delivering malicious attachments or links behind seemingly trustworthy messages. Most email-based attacks depend on deception: impersonating legitimate senders.
Several mechanisms help receiving systems validate the origin and integrity of email:
-
SPF (Sender Policy Framework): The receiver queries DNS to get a list of IP addresses authorized to send mail for a domain. SPF checks the sender domain, not necessarily the human-visible “From:” header.
-
DKIM (DomainKeys Identified Mail): The sender signs selected headers and the message body using a private key. The receiver retrieves the public key via DNS and verifies the signature to confirm that the message was not altered and that the signing domain is legitimate.
-
DMARC: The receiver looks up the domain’s DMARC record in DNS to learn how to handle SPF or DKIM failures and whether the domain in the visible “From:” header aligns with the domains authenticated by SPF and/or DKIM.
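The DKIM sign-then-verify flow over selected headers and the body can be sketched as below. Important simplification: real DKIM uses public-key signatures (RSA or Ed25519) with the public key published in DNS; this sketch substitutes an HMAC shared secret purely to illustrate the flow and tamper detection.

```python
import hashlib
import hmac

# Stand-in for the signing key; real DKIM uses an asymmetric key pair.
SECRET = b"demo-signing-key"

def sign_message(headers: dict[str, str], body: str) -> str:
    """Canonicalize selected headers + body, then sign the result."""
    material = "\n".join(f"{k.lower()}:{v.strip()}" for k, v in sorted(headers.items()))
    material += "\n" + body
    return hmac.new(SECRET, material.encode(), hashlib.sha256).hexdigest()

def verify_message(headers: dict[str, str], body: str, signature: str) -> bool:
    """Recompute the signature; any header or body change breaks it."""
    return hmac.compare_digest(sign_message(headers, body), signature)

hdrs = {"From": "alice@example.com", "Subject": "Hello"}
sig = sign_message(hdrs, "Hi Bob")
print(verify_message(hdrs, "Hi Bob", sig))    # True: unmodified message
print(verify_message(hdrs, "Hi Bob!", sig))   # False: body was tampered with
```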
Additional security measures focus on message content:
-
Content filtering: Scans attachments, URLs, and embedded scripts; blocks or sanitizes dangerous file types and suspicious content.
-
Link rewriting: Replaces embedded links with security-service URLs. When the user clicks, the service checks the destination for malicious behavior before allowing or blocking access.
Patch Management
Regular software updates address known vulnerabilities and help avoid N-day attacks. Challenges include compatibility issues, testing requirements, and zero-day vulnerabilities (unknown to the vendor).
Week 10-11: Network security & DDoS
Network protocols were developed for cooperative environments and often lack authentication or integrity protections. Attackers exploit these assumptions to intercept, modify, or disrupt communication.
Link Layer Attacks (Layer 2)
Link-layer protocols operate entirely within the local network. Devices on the LAN are implicitly trusted, so an attacker with local access can exploit that trust.
CAM Overflow
Switches maintain a Content Addressable Memory (CAM) table that maps MAC addresses to specific switch ports. This allows the switch to forward unicast traffic privately rather than broadcasting it. CAM tables are finite.
In a CAM overflow attack, an attacker sends frames containing large numbers of fake MAC addresses. When the CAM table fills, legitimate entries age out and the switch begins flooding unknown-destination traffic out every port, exposing frames to anyone on the LAN.
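The mechanism can be modeled with a toy switch: a bounded MAC-to-port table that falls back to flooding when the destination is unknown. The capacity and MAC strings are made-up illustrative values.

```python
class Switch:
    """Toy switch: learns source MACs up to a fixed CAM capacity.
    Unknown-destination frames are flooded out every port."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cam: dict[str, int] = {}   # MAC -> port

    def learn(self, mac: str, port: int) -> None:
        if mac in self.cam or len(self.cam) < self.capacity:
            self.cam[mac] = port
        else:
            # Table full: evict the oldest entry to make room.
            self.cam.pop(next(iter(self.cam)))
            self.cam[mac] = port

    def forward(self, dst_mac: str) -> str:
        port = self.cam.get(dst_mac)
        return f"unicast to port {port}" if port is not None else "FLOOD all ports"

sw = Switch(capacity=3)
sw.learn("aa:aa", 1)                  # legitimate host learned on port 1
for i in range(1000):                 # attacker floods fake source MACs
    sw.learn(f"fa:ke:{i:04x}", 7)
print(sw.forward("aa:aa"))            # "FLOOD all ports": the real entry was evicted
```

Once the real host's entry is gone, its traffic is broadcast to every port, including the attacker's.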
Prevention
Managed switches support multiple defenses:
-
Port security to limit MAC addresses per port
-
Static or “sticky” MAC bindings
-
Disable unused ports
-
Monitoring for abnormal MAC learning rates
ARP Spoofing
ARP (Address Resolution Protocol) maps IP addresses to MAC addresses but provides no authentication. Any host can send unsolicited ARP replies, and most operating systems accept these messages.
In ARP cache poisoning (also known as ARP spoofing), the attacker sends forged ARP replies claiming to be another device (often the gateway). Victims update their ARP caches and send traffic to the attacker, who can inspect or modify it.
Prevention:
-
Switches with Dynamic ARP Inspection (DAI) verify ARP replies against known IP-MAC bindings (often learned through DHCP snooping)
-
ARP monitoring tools
-
Static ARP entries for critical devices.
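An ARP monitoring tool can be sketched as a tracker of observed IP-to-MAC bindings that alerts when a binding suddenly changes, a classic poisoning symptom. The addresses below are illustrative.

```python
def monitor_arp(replies: list[tuple[str, str]]) -> list[str]:
    """Track IP->MAC bindings from observed ARP replies and flag
    any IP whose MAC changes mid-session."""
    bindings: dict[str, str] = {}
    alerts = []
    for ip, mac in replies:
        known = bindings.get(ip)
        if known is not None and known != mac:
            alerts.append(f"ALERT: {ip} changed {known} -> {mac}")
        bindings[ip] = mac
    return alerts

observed = [
    ("192.168.1.1", "aa:bb:cc:00:00:01"),   # real gateway announces itself
    ("192.168.1.50", "aa:bb:cc:00:00:32"),  # ordinary host
    ("192.168.1.1", "de:ad:be:ef:00:99"),   # forged reply claiming the gateway IP
]
print(monitor_arp(observed))                # one alert for 192.168.1.1
```

A legitimate MAC change (e.g., a replaced NIC) triggers the same alert, so real tools pair this with allow-lists or DHCP snooping data.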
VLAN Hopping
VLANs segment a switch into isolated broadcast domains. VLAN hopping allows an attacker to inject or receive frames on a VLAN they are not assigned to.
Two approaches are used to get content from other VLANs:
-
Switch spoofing: The attacker pretends to be another switch and negotiates a trunk link, gaining access to multiple VLANs.
-
Double tagging: The attacker sends frames with two VLAN tags. The first switch strips the outer tag, leaving the inner one to direct traffic into the target VLAN.
Prevention: Disable automatic trunk negotiation, manually configure trunk ports, tag the native VLAN, and place unused switch ports in isolated VLANs.
DHCP Attacks
DHCP (Dynamic Host Configuration Protocol) assigns IP addresses, default gateways, and DNS servers. Clients accept the first server reply.
Two common attacks:
-
DHCP starvation: flooding the server with DHCP requests to exhaust the address pool. The goal is to deny service or prepare for a rogue server.
-
Rogue DHCP server (also referred to as a DHCP spoofing attack): responding faster than the legitimate server. The goal is to give victims malicious DNS servers or a malicious default gateway, enabling full redirection or interception.
Prevention: Switches with DHCP snooping mark legitimate DHCP server ports as trusted and block DHCP responses on untrusted ports.
Network Layer Attacks (Layer 3)
The network layer routes packets between networks. Routers exchange routing information under the assumption that peers are honest.
IP Spoofing
IP spoofing involves forging the source IP address in outgoing packets. Attackers do this to evade identification, bypass simple filters, or craft reflection attacks.
Prevention: ISPs and enterprises should filter traffic at network boundaries so that packets carry only valid source prefixes (this is known as BCP 38-style filtering, but you don't have to know this).
Router Vulnerabilities
Routers maintain routing tables to determine where to send packets. If an attacker compromises a router, they can drop, reroute, or replicate large volumes of traffic.
Common router attacks include:
-
Denial of Service against router CPU or memory
-
Route table poisoning, injecting false or misleading routes
-
Malware installation on router firmware
-
Brute-forcing credentials or abusing insecure management interfaces
-
Exploiting outdated firmware vulnerabilities
Prevention: Restrict administrative access, keep firmware updated, enforce strong authentication, and filter inbound routing updates.
BGP Hijacking
Border Gateway Protocol (BGP) connects networks known as Autonomous Systems (ASes). An Autonomous System is a set of IP networks under a single administrative domain (e.g., an ISP or a large organization).
ASes advertise IP prefixes: contiguous ranges of IP addresses they can route. Routers select routes based on these advertisements.
Because BGP does not validate prefix ownership, an AS can announce someone else’s prefix. Since routers prefer more specific prefixes (e.g., a /25 over a /24), attackers can redirect traffic at Internet scale.
Prevention:
- RPKI (Resource Public Key Infrastructure) protects the origin of the route.
- A Regional Internet Registry signs a certificate saying “Organization X owns this IP block.” Organization X signs a Route Origin Authorization (ROA) saying “AS Y is allowed to announce this block.” Routers that validate RPKI can reject unauthorized announcements.
- BGPsec extends RPKI to validate the entire sequence of ASes in a route.
- Each AS signs the route before forwarding it. The goal is to ensure no AS is added, removed, or modified. However, for BGPsec to work, every AS on the path must support BGPsec. Using it means routers must continuously process all digital signatures at line speed, imposing a significant computational burden on routers.
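The longest-prefix-match behavior that hijackers exploit can be demonstrated with the standard library's ipaddress module; the prefixes below are documentation-range examples.

```python
import ipaddress

def best_route(dst: str, advertised: list[str]) -> str:
    """Routers choose the most specific (longest) matching prefix,
    which is exactly what a hijacker's /25 exploits against a /24."""
    addr = ipaddress.ip_address(dst)
    candidates = [ipaddress.ip_network(p) for p in advertised]
    matching = [n for n in candidates if addr in n]
    return str(max(matching, key=lambda n: n.prefixlen))

# Legitimate /24 vs. a hijacker announcing a more specific /25:
print(best_route("203.0.113.10", ["203.0.113.0/24", "203.0.113.0/25"]))
# The /25 wins, so traffic for .0-.127 flows to whoever announced it.
```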
Transport Layer Attacks (TCP and UDP)
Transport protocols move data between applications.
- TCP
- Reliable, connection-oriented, ordered delivery. Uses sequence numbers and acknowledgments.
- UDP
- Unreliable, connectionless, no handshake, no sequence numbers.
Both of these protocols assume that end hosts behave honestly. Attackers exploit predictability in sequence numbers or lack of authentication.
TCP Session Hijacking
Early TCP implementations used predictable Initial Sequence Numbers (ISNs). An attacker who guessed the next sequence number could inject malicious packets into an existing session without seeing the traffic.
Prevention:
-
Random ISNs prevent guessing sequence numbers.
-
TLS prevents meaningful injection even if packets are forged since all content is encrypted and has integrity checks.
-
TCP MD5 signatures authenticate each segment (mainly for BGP) through a MAC.
SYN Flooding
TCP allocates resources after receiving a SYN packet. In a SYN flood, attackers send many SYNs without completing the handshake, exhausting the server’s connection backlog.
Prevention:
-
SYN cookies: store connection information in the server’s sequence number using a hash of the connection parameters and a secret key. Memory is allocated only after the ACK arrives.
-
Rate limiting and firewalls that track connections can help.
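The SYN cookie idea can be sketched as deriving the server's initial sequence number from the connection parameters instead of allocating state. Simplified: real SYN cookies also encode the client's MSS and use a specific bit layout; the keyed hash here just illustrates the stateless round trip.

```python
import hashlib
import hmac
import time

SECRET = b"server-secret"   # rotated periodically in real implementations

def syn_cookie(src_ip: str, src_port: int, dst_port: int) -> int:
    """Derive the server's ISN from the connection tuple plus a coarse
    timestamp; no per-connection memory is allocated for the SYN."""
    t = int(time.time()) >> 6            # 64-second granularity
    msg = f"{src_ip}:{src_port}:{dst_port}:{t}".encode()
    return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

def validate_ack(src_ip: str, src_port: int, dst_port: int, ack_num: int) -> bool:
    """Only when the final ACK acknowledges cookie+1 is state allocated."""
    return ack_num == (syn_cookie(src_ip, src_port, dst_port) + 1) & 0xFFFFFFFF

cookie = syn_cookie("198.51.100.7", 51514, 443)
print(validate_ack("198.51.100.7", 51514, 443, (cookie + 1) & 0xFFFFFFFF))  # True
```

A flood of SYNs with spoofed sources now costs the server only hash computations, not backlog slots; spoofers never see the cookie, so they cannot complete the handshake.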
TCP Reset Attacks
A forged RST packet with an acceptable sequence number forces an immediate connection teardown.
Prevention:
-
Strict RST validation: accept only RSTs with sequence numbers extremely close to the expected value.
-
TLS: hides sequence number state from attackers.
-
TCP MD5: authenticates packets (used in BGP).
UDP Spoofing
UDP provides no handshake or sequence numbers, so attackers can forge source addresses effortlessly. This enables impersonation and reflection attacks.
Prevention:
-
Network filtering to block spoofed packets.
-
Application-level authentication.
DNS Attacks
DNS Basics
DNS resolves domain names to IP addresses. It depends on caching, unauthenticated replies, and a chain of delegations among authoritative servers.
DNS Pharming
Pharming redirects users to malicious sites even when they enter the correct domain name, by making a persistent change to the user's DNS settings or to the DNS resolver. Techniques include:
-
Social engineering (“change your DNS settings to fix your Internet”)
-
Malware modifying DNS settings or the hosts file
-
Rogue DHCP giving victims malicious DNS servers
-
Compromising DNS servers directly
Prevention: Use endpoint defenses, enforce DHCP snooping, validate certificates, and deploy DNSSEC where possible.
DNS Cache Poisoning
Resolvers cache DNS responses. An attacker races the legitimate server by sending forged replies with guessed transaction IDs; if a fake response arrives first and its transaction ID matches, the resolver caches the bogus record.
An enhanced attack (the Kaminsky attack, but you don't need to know the name) involves:
-
Querying many nonexistent subdomains
-
Forcing the resolver to perform repeated lookups
-
Injecting forged responses containing malicious additional records
If accepted, the resolver caches the incorrect records, redirecting all users querying that resolver.
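The value of source-port randomization can be seen with a back-of-the-envelope probability calculation. The 100 forged replies per race and 1000 races are made-up illustrative numbers.

```python
def success_probability(guesses_per_race: int, races: int, space: int) -> float:
    """Chance that at least one forged reply matches the resolver's
    random values across repeated races against the real server."""
    per_race = min(guesses_per_race / space, 1.0)
    return 1 - (1 - per_race) ** races

TXID_ONLY = 2 ** 16                   # 16-bit transaction ID
TXID_AND_PORT = 2 ** 16 * 2 ** 16     # plus ~16 bits of source-port entropy

# 100 forged replies per race, 1000 races (Kaminsky-style repeated lookups):
print(f"txid only:   {success_probability(100, 1000, TXID_ONLY):.3f}")
print(f"txid + port: {success_probability(100, 1000, TXID_AND_PORT):.6f}")
```

With only the 16-bit transaction ID, a patient attacker succeeds with high probability; adding port randomization multiplies the search space by tens of thousands, pushing success rates toward negligible.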
Prevention:
-
Random query IDs
-
Randomize source UDP port
-
0x20 encoding (case randomization in queries)
-
Issuing double DNS queries and checking for consistency
-
Using TCP instead of UDP when possible
-
Deploying DNSSEC. DNSSEC provides cryptographic validation of DNS responses and is the most robust defense.
-
Resolvers should ignore unsolicited additional records.
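0x20 encoding from the list above can be sketched as follows: DNS name matching is case-insensitive, but servers echo the query name back exactly, so random casing adds entropy a blind spoofer must also guess.

```python
import random

def encode_0x20(name: str, rng: random.Random) -> str:
    """Randomize the case of each letter in the query name."""
    return "".join(c.upper() if rng.random() < 0.5 else c.lower() for c in name)

def reply_accepted(query_name: str, echoed_name: str) -> bool:
    # The resolver compares case-sensitively against what it sent.
    return query_name == echoed_name

rng = random.Random(42)           # seeded only to make the demo repeatable
q = encode_0x20("www.example.com", rng)
print(q)                          # a mixed-case variant of the name
print(reply_accepted(q, q))       # True: an honest server echoes exact casing
```

Each letter contributes roughly one extra bit of entropy, which stacks with transaction-ID and port randomization.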
DNSSEC
DNSSEC adds digital signatures to DNS records. Clients validate them using a chain of trust anchored at the root. DNSSEC prevents record tampering and cache poisoning but does not encrypt traffic. It is also more CPU-intensive and produces larger responses.
DNS Rebinding
DNS rebinding tricks browsers into interacting with internal network services through an attacker-controlled domain.
Steps:
-
The victim visits an attacker-controlled site.
-
The site returns a DNS record with very short TTL (e.g., TTL = 1) to force repeated DNS requests.
-
JavaScript loads in the browser.
-
The next DNS lookup returns a private IP address.
-
The browser allows the request because the origin (scheme + host + port) is unchanged.
-
JavaScript can now interact with internal devices or APIs.
Prevention:
-
Enforce minimum TTL values to ignore extremely short TTLs
-
DNS pinning, refusing to switch IP addresses during a page session
-
Reject DNS responses containing private or reserved IP addresses
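The last defense above can be sketched with the standard library's ipaddress module: a public hostname should never resolve to an internal address, so such answers are dropped.

```python
import ipaddress

def safe_to_use(resolved_ip: str) -> bool:
    """Reject DNS answers pointing at private, reserved, loopback,
    or link-local ranges, blocking the rebinding step."""
    addr = ipaddress.ip_address(resolved_ip)
    return not (addr.is_private or addr.is_reserved
                or addr.is_loopback or addr.is_link_local)

print(safe_to_use("93.184.216.34"))   # True: ordinary public address
print(safe_to_use("192.168.1.10"))    # False: rebinding to a LAN device
print(safe_to_use("127.0.0.1"))       # False: rebinding to localhost
```

This filter is typically applied in the resolver or a local DNS proxy, so every client on the network benefits.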
Abandoned Domains (Sitting Ducks)
Some domains have broken DNS delegation (name servers that no longer exist).
If a DNS provider does not verify ownership, an attacker can “claim” such a domain and set new DNS records. Attackers can then serve malware or impersonate services.
Distributed Denial of Service (DDoS)
DDoS attacks overwhelm systems using massive amounts of traffic or by triggering expensive server operations.
Types of DDoS Attacks
-
Volumetric attacks: The goal is to saturate the victim’s bandwidth with massive traffic (measured in bits per second). Examples include UDP floods and large packet floods.
-
Packet-per-second attacks: Send enormous numbers of small packets to overload routers or firewalls. This attack targets forwarding performance, not bandwidth.
-
Request-per-second (application-layer) attacks: Flood application servers with HTTP or API requests, exhausting CPU or memory. Examples include HTTP floods or expensive API calls.
Asymmetric Attacks
Attackers send cheap requests that force the defender to do expensive work.
Examples: heavy database lookups, intensive parsing, ICMP error processing.
Reflection and Amplification
Reflection attacks spoof the victim’s IP address so third-party servers send replies to the victim. Amplification uses services where small requests yield disproportionately large responses.
Common amplification services:
-
DNS open resolvers
-
NTP (Network Time Protocol)
-
CLDAP (Connectionless LDAP)
-
SSDP (Simple Service Discovery Protocol)
-
Memcached servers
-
Some gaming or voice protocols
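The danger of these services comes from their bandwidth amplification factor (response size divided by request size). The factors below are rough, commonly cited orders of magnitude, not measurements; exact values vary widely by server configuration.

```python
# Approximate amplification factors (response bytes / request bytes),
# as commonly cited in DDoS advisories; illustrative values only.
AMPLIFICATION = {
    "DNS open resolver": 28,
    "NTP (monlist)":     557,
    "CLDAP":             56,
    "SSDP":              31,
    "memcached":         10000,
}

def reflected_bandwidth(attacker_bps: float, service: str) -> float:
    """Traffic hitting the victim when attacker_bps of spoofed
    requests are bounced off the given reflector service."""
    return attacker_bps * AMPLIFICATION[service]

# 10 Mbit/s of spoofed memcached requests -> ~100 Gbit/s at the victim:
print(f"{reflected_bandwidth(10e6, 'memcached') / 1e9:.0f} Gbit/s")
```

This asymmetry is why a modest botnet can generate terabit-scale floods, and why disabling or rate-limiting open reflectors matters.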
Botnets
Large sets of compromised devices, especially IoT systems, generate massive attack traffic. Botnets often use encrypted C2, fast-flux DNS, or domain-changing strategies.
Defenses
Network-level Defenses:
-
Rate limiting
-
Filtering and anti-spoofing
-
Blackhole routing (drop all traffic that's going to a specific target)
-
Scrubbing centers (services dedicated to ensuring clean traffic at large scale)
Application-level defenses:
-
Web Application Firewalls (WAF): filter malicious HTTP patterns
-
CAPTCHAs: separate humans from bots
-
Content Delivery Networks (CDNs): absorb load at edge locations
-
Throttling: slow requests per user/IP
-
Graceful degradation: maintain minimal service
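Throttling is often implemented as a token bucket per user or IP. A minimal sketch (class and parameter names are my own, not any particular product's API):

```python
import time

class TokenBucket:
    """Per-client rate limiter: sustain `rate` requests/sec,
    with bursts of up to `capacity` requests."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # request throttled
```

A burst of requests drains the bucket; sustained traffic above `rate` is rejected until tokens refill.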
Key Themes in Network Security
-
Many protocols were built for trust, not security.
-
Attackers exploit resource asymmetry and lack of authentication.
-
Spoofing enables redirection, amplification, and man-in-the-middle attacks.
-
DNS and BGP attacks affect large populations simultaneously.
-
IoT devices continue to fuel massive botnets.
-
Defense requires layered mechanisms and cooperation across networks.
Week 11: VPNs
The Network Security Problem
The Internet's core protocols were designed without security. IP, TCP, UDP, and routing protocols have no built-in authentication, integrity checks, or encryption. This creates serious vulnerabilities:
-
No authentication: IP packets can be forged with fake source addresses
-
No integrity protection: Data can be modified in transit without detection
-
No confidentiality: Anyone along the path can read packet contents
-
Vulnerable routing: BGP and other routing protocols can be manipulated
-
Spoofable protocols: ARP, DHCP, and DNS can be spoofed to redirect traffic
We need security mechanisms that work despite these vulnerabilities, providing confidentiality, integrity, and authentication even when the underlying network is completely untrusted.
Transport Layer Security (TLS)
What TLS Provides
TLS creates a secure channel between two applications. After the initial handshake completes, applications communicate as if using regular TCP sockets, but with three critical guarantees:
-
Authentication: A client can verify it's talking to the correct server (the server can optionally verify the client)
-
Confidentiality: Communication is encrypted, preventing eavesdropping
-
Integrity: Any tampering with the data stream will be detected
TLS (Brief Review)
The TLS 1.3 handshake:
-
Negotiates acceptable cryptographic algorithms
-
Uses Diffie–Hellman to derive a shared secret
-
Has the server prove possession of a private key associated with a CA-signed certificate
-
Derives symmetric encryption keys for application data
TLS 1.3 then uses AEAD encryption such as AES-GCM or ChaCha20-Poly1305.
AEAD (Authenticated Encryption with Associated Data) combines encryption and authentication in a single operation. Think of it as generating a keyed MAC concurrently with encryption, but more efficient than doing encrypt-then-MAC separately. Common algorithms are AES-GCM and ChaCha20-Poly1305.
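To make the encrypt-then-MAC idea concrete, here is a toy sketch using only the standard library. This is NOT real AEAD: the "keystream" below is not a secure cipher, and production code should use AES-GCM or ChaCha20-Poly1305 from a vetted library. The point is only the structure: a tag over the associated data, nonce, and ciphertext, verified before decryption.

```python
import hashlib, hmac

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream (illustration only, NOT a secure cipher):
    # SHA-256 in counter mode.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key, mac_key, nonce, plaintext, aad):
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # Tag covers associated data + nonce + ciphertext.
    tag = hmac.new(mac_key, aad + nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, aad, tag):
    expected = hmac.new(mac_key, aad + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")  # tampering detected
    return bytes(a ^ b for a, b in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Flipping a single ciphertext bit makes `open_` reject the message, which is the integrity guarantee TLS relies on.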
Client Authentication
TLS supports mutual authentication, but client certificates are rarely used in practice:
-
Setting up certificates for users is complicated
-
Moving private keys between devices is difficult
-
Any website could request your certificate (destroying anonymity)
-
Public computers create security risks for private keys
Common practice: Authenticate clients after establishing the TLS connection using passwords, multi-factor authentication, or other mechanisms. TLS protects the confidentiality of these credentials.
TLS Limitations
While TLS solves many problems, it has important limitations:
-
It's a transport-layer solution; applications must explicitly use it
-
It only protects communication between two specific applications
-
A TLS connection doesn't guarantee that the server itself is trustworthy
-
It adds some latency (though TLS 1.3 minimized this)
Virtual Private Networks (VPNs)
As we saw, the Internet's core protocols were designed without security. TLS solves this at the transport layer, but each application must implement it separately.
Virtual Private Networks (VPNs) take a different approach by operating at the network layer, protecting all traffic between networks or hosts automatically. Once a VPN tunnel is established, every application benefits from its security without any changes.
Tunneling
Tunneling is the foundation of VPNs. A tunnel encapsulates an entire IP packet as the payload of another IP packet. This allows private addresses within a local area network (like 192.168.x.x) to communicate across the public Internet.
A gateway router on one network takes an outgoing packet destined for a private address on another network, wraps it inside a new packet addressed to the remote gateway's public address, and sends it across the Internet. The remote gateway extracts the original packet and delivers it to its destination.
Basic tunneling provides no security. The encapsulated data travels unencrypted, the endpoints are not authenticated, and there is no integrity protection.
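The gateway behavior above can be sketched as bytes-in-bytes wrapping. This toy uses 4-byte stand-ins for real IPv4 headers, so it illustrates only the encapsulation step (note the inner packet rides along in the clear, exactly the "no security" point made above):

```python
import struct

def encapsulate(outer_src: bytes, outer_dst: bytes,
                inner_packet: bytes) -> bytes:
    """Toy IP-in-IP tunnel: the gateway wraps the entire original
    packet as the payload of a new gateway-to-gateway packet.
    (4-byte addresses + a 2-byte length stand in for real headers.)"""
    header = outer_src + outer_dst + struct.pack("!H", len(inner_packet))
    return header + inner_packet   # inner bytes remain readable in transit

def decapsulate(outer_packet: bytes) -> bytes:
    """Remote gateway strips the outer header, recovering the
    original packet for local delivery."""
    (length,) = struct.unpack("!H", outer_packet[8:10])
    return outer_packet[10:10 + length]
```

A VPN adds encryption and a MAC around the inner packet before it becomes the outer payload; plain tunneling, as shown, adds nothing.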
What makes a VPN?
A VPN combines tunneling with three security properties:
Encryption ensures outsiders cannot read the encapsulated data, even if they capture packets on the public Internet.
Integrity protection through message authentication codes ensures outsiders cannot modify data without detection. Tampered packets are discarded.
Authentication ensures you are connected to the legitimate gateway, not an imposter. This typically uses certificates, pre-shared keys, or public key cryptography.
The formula: VPN = tunnel + encryption + integrity + authentication.
VPN Deployment Models
VPNs serve three primary purposes, each with different security implications.
Site-to-Site (Network-to-Network)
This was the original VPN use case. Organizations connect geographically separated networks through VPN tunnels between gateway routers. Computers on either network communicate as if on a single unified network, unaware that traffic is encrypted and tunneled.
Remote Access (Host-to-Network)
Individual computers connect to corporate networks from remote locations. The remote computer runs VPN client software that establishes a tunnel to the corporate VPN gateway. Once connected, the computer can access internal resources as if physically in the office.
Privacy VPNs (Host-to-Provider)
Commercial services like ExpressVPN, NordVPN, and ProtonVPN allow users to establish VPN connections to the provider's network, which then acts as a gateway to the Internet. Traffic appears to originate from the provider rather than the user's actual location.
These services protect against local eavesdropping (such as on public Wi-Fi networks) and can bypass geographic content restrictions. However, the VPN provider can see all your traffic. You are shifting trust from your ISP to the VPN provider, not eliminating surveillance. Whether this improves privacy depends entirely on whether you trust the provider more than your ISP.
VPN Protocols
VPN protocols have evolved over time. Three protocols dominate VPN deployments, each with a different design philosophy.
IPsec (earliest: 1990s)
IPsec was developed as the first standardized approach to network-layer security. It operates at Layer 3 (i.e., it does not run over TCP or UDP) and is typically implemented in the operating system kernel. IPsec is a separate protocol family from TCP and UDP, identified by its own protocol numbers in the IP header (50 for ESP, 51 for AH).
IPsec consists of two separate protocols (use one or the other):
-
The Authentication Header (AH) protocol provides authentication and integrity but not encryption.
-
The Encapsulating Security Payload (ESP) provides authentication, integrity, and encryption. ESP is the standard choice today since it does everything AH does plus encryption.
IPsec operates in two modes:
-
Tunnel mode encapsulates the entire original IP packet for network-to-network or host-to-network communication.
-
Transport mode protects only the payload for direct host-to-host communication.
IPsec Cryptography: Both sides negotiate the algorithms to use when the connection is set up. IPsec uses the IKE (Internet Key Exchange) protocol to negotiate keys and algorithms, which in turn uses Diffie–Hellman key exchange. AES-CBC or AES-GCM are used for confidentiality, and HMAC-SHA1/SHA2 for integrity.
Advantages of IPsec:
-
Standardized and widely supported
-
Efficient kernel implementation
-
Transparent to applications
Disadvantages:
-
Complex standard
-
Complicated configuration
-
Problems with NAT (network address translation gateways)
OpenVPN (early 2000s)
OpenVPN emerged as the first widely-adopted open-source VPN protocol. Unlike IPsec, it runs in user space (not the kernel) and uses TLS for the control channel.
OpenVPN communicates via an operating system's TUN (TUNnel) virtual network interface, which the operating system treats as a regular interface. Traffic destined for the VPN is routed to this interface, encrypted by the OpenVPN process, and sent as regular UDP or TCP packets.
OpenVPN separates communication into two channels:
-
The control channel uses TLS for authentication and key exchange.
-
The data channel carries encrypted tunnel traffic using symmetric encryption (typically AES or ChaCha20) with keys derived from the TLS handshake. Like IPsec, OpenVPN negotiates a specific set of algorithms to use when setting up a connection.
Advantages of OpenVPN:
-
Open source
-
Can run over TCP or UDP (TCP makes it easier to bypass firewalls)
-
Supports pre-shared keys or certificate-based authentication
-
Highly portable across operating systems
Disadvantages:
-
Performance overhead from running in user space
WireGuard (newest: 2016)
WireGuard takes a minimalist approach. Its entire codebase is approximately 4,000 lines (compared to hundreds of thousands for IPsec or OpenVPN), enabling formal verification of its cryptographic properties. It was incorporated into the Linux kernel in version 5.6 (March 2020).
WireGuard Cryptography
WireGuard makes opinionated choices rather than offering configuration options. There is no cipher negotiation; it uses exactly one set of modern algorithms: Elliptic Curve Diffie-Hellman for key exchange, ChaCha20 for encryption, Poly1305 for message authentication, and BLAKE2s for hashing (used for deriving keys). This eliminates vulnerabilities related to negotiation and downgrade attacks.
Each peer is identified by its public key rather than a certificate. Configuration requires generating key pairs, exchanging public keys out-of-band, and specifying which IP addresses should route through the tunnel.
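A typical configuration file illustrates how little there is to specify. The hostname, addresses, and key placeholders below are made up for illustration; real keys come from `wg genkey` / `wg pubkey`:

```ini
# /etc/wireguard/wg0.conf -- illustrative values only
[Interface]
PrivateKey = <this peer's private key, generated with `wg genkey`>
Address    = 10.0.0.2/24          ; this peer's tunnel-internal address
ListenPort = 51820

[Peer]
PublicKey  = <remote peer's public key, exchanged out-of-band>
Endpoint   = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24          ; destinations routed through the tunnel
PersistentKeepalive = 25          ; helps traverse NAT
```

Compare this to a typical IPsec configuration: there are no cipher choices, no certificate chains, and no phase-1/phase-2 negotiation parameters to get wrong.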
Advantages of WireGuard:
-
Extremely small codebase, formally verified
-
Uses only modern cryptographic primitives (ChaCha20, Poly1305, Curve25519)
-
Runs in kernel space, communicates via UDP
-
No complexity of issuing and managing certificates
-
No protocol negotiation; uses best current practices
-
Extremely efficient (rapid setup, fast algorithms, kernel operation)
-
Rotates session keys regularly (~every 2 minutes of active traffic)
-
Simple configuration: exchange public keys, specify IP addresses
Disadvantages:
-
Newer (but verified)
-
Not supported by some enterprise VPN products (but that's changing)
-
Lacks cipher negotiation (good for security, but a limitation if its fixed algorithms ever need replacing)
-
Requires a separate key distribution infrastructure for large deployments
Comparing the Protocols
The three protocols reflect different eras and design philosophies:
IPsec offers broad compatibility with enterprise equipment but has a complex configuration. It operates at the network layer in the kernel.
OpenVPN offers portability and the ability to bypass firewalls, but with some performance overhead. It runs in user space using TLS.
WireGuard offers simplicity and high performance with modern cryptography. It runs in kernel space with a minimal, formally-verified codebase.
Security Limitations
VPNs are not a complete security solution.
-
They protect data in transit through the tunnel but do not protect compromised endpoints. If your computer has malware, the attacker can access data before it enters the tunnel.
-
VPNs do not guarantee anonymity. The VPN provider or corporate network can see your traffic. Websites can still identify you through cookies, browser fingerprinting, and login credentials.
-
Using a VPN can create a false sense of security. VPNs protect against some threats (eavesdropping on local networks) but not others (phishing, malware, compromised websites). Security requires defense in depth.
VPN Performance Considerations
VPNs come with performance overhead, though the different sources of overhead are not equally significant:
-
Encryption overhead: Minimal with modern hardware (CPUs have hardware acceleration for AES)
-
Encapsulation overhead: Adds 20-60 bytes of headers; generally not significant
-
Routing overhead: Usually the bigger issue; traffic routes through distant servers, adding latency
For corporate VPNs, server/gateway capacity can also become a bottleneck.
Week 12: Firewalls
Network Address Translation (NAT)
NAT was designed to conserve IPv4 addresses by letting many internal devices use private address ranges. A NAT router rewrites outbound packets and keeps a table so replies can be sent back to the correct internal system.
NAT operates at Layer 3 but must also inspect and modify Layer 4 headers (TCP/UDP ports) so it can track connections in its translation table.
NAT provides an important security benefit: external hosts cannot initiate connections to internal systems. The NAT router only creates translation table entries when internal hosts start connections, so it blocks all unsolicited inbound traffic by default. An external attacker can't send packets to 192.168.1.10 because that address isn't routable on the Internet, and even if they somehow could, the router has no translation entry for it.
This isn't perfect security since internal hosts can still make outbound connections that could be exploited, but it's a significant improvement over every internal device having a public IP address directly accessible from the Internet.
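The translation-table behavior, and the security side effect, can be sketched in a few lines (class and method names are illustrative; real NAT also rewrites checksums and ages out entries):

```python
class NatRouter:
    """Minimal NAT sketch: outbound flows create translation entries;
    inbound packets with no entry are dropped by default."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.out_map = {}   # (internal_ip, internal_port) -> public_port
        self.in_map = {}    # public_port -> (internal_ip, internal_port)

    def outbound(self, src_ip: str, src_port: int):
        """Rewrite the source of an outgoing packet, creating an entry."""
        key = (src_ip, src_port)
        if key not in self.out_map:
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out_map[key])

    def inbound(self, dst_port: int):
        """Return the internal destination, or None (packet dropped)."""
        return self.in_map.get(dst_port)   # unsolicited traffic -> None
```

Only replies to connections an internal host initiated find a matching entry; everything else from the Internet has nowhere to go.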
First-Generation Firewalls: Packet Filters
A packet filter (also called a screening router) sits at a network boundary and makes independent decisions for each packet based on rules. It examines packet headers and decides whether to allow or drop each packet based on:
-
Source and destination IP addresses
-
Source and destination ports (for TCP/UDP)
-
Protocol type (TCP, UDP, ICMP, etc.)
-
Network interface (which physical port on the router)
Rules are evaluated in order, and processing stops at the first match. This means rule ordering is critical: a broad rule high in the list can shadow more specific rules below it.
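First-match evaluation and the shadowing problem can be demonstrated with a toy rule table (the rules and prefixes here are made up to show the pitfall):

```python
import ipaddress

# Each rule: (source prefix, destination port, action); "*" matches anything.
# First match wins, so ordering matters.
RULES = [
    ("10.0.0.0/8", "*", "deny"),    # broad rule placed first...
    ("10.1.2.3/32", 22, "allow"),   # ...completely shadows this one
    ("*", 443, "allow"),
    ("*", "*", "deny"),             # default deny
]

def decide(src: str, dst_port: int) -> str:
    for prefix, port, action in RULES:
        src_ok = prefix == "*" or \
            ipaddress.ip_address(src) in ipaddress.ip_network(prefix)
        port_ok = port == "*" or port == dst_port
        if src_ok and port_ok:
            return action           # stop at the first match
    return "deny"
```

The intended SSH exception for 10.1.2.3 never fires because the broader 10.0.0.0/8 deny rule above it matches first; swapping the two rules fixes it.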
Ingress and egress filtering
Ingress filtering applies to inbound traffic and typically follows a “default deny” model:
-
Block traffic that should never appear from the Internet (such as private-source addresses).
-
Block packets that claim to come from your own internal network (spoofed traffic).
Egress filtering
Egress filtering controls outbound traffic from internal networks to external ones. While we generally trust internal hosts, it is useful to restrict how a compromised internal host can communicate with the outside. Useful filters can:
-
Limit which protocols internal hosts can use to leave the network.
-
Prevent compromised hosts from freely downloading malware or talking to command-and-control servers.
-
Log unusual outbound traffic patterns.
Second-Generation Firewalls: Stateful Packet Inspection (SPI)
First-generation packet filters examine each packet independently without remembering past packets. But network protocols like TCP create ongoing conversations between hosts. Stateful packet inspection firewalls track the state of these conversations.
Stateful firewalls track:
-
TCP connection state, including the SYN, SYN-ACK, and ACK handshake
-
Return traffic, allowing it only if the internal host initiated the connection
-
Related connections, where a protocol negotiates additional connections after an initial control session
Stateful inspection prevents packets from being injected into existing connections, blocks invalid protocol sequences, and supports protocols that rely on multiple coordinated flows.
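The core of stateful inspection is a connection table consulted for every inbound packet. A minimal sketch (real firewalls also track TCP state transitions and time entries out):

```python
class StatefulFirewall:
    """Return traffic is allowed only if an internal host
    initiated the matching connection."""
    def __init__(self):
        self.connections = set()   # (src, sport, dst, dport) flows

    def outbound(self, src, sport, dst, dport) -> str:
        self.connections.add((src, sport, dst, dport))
        return "allow"

    def inbound(self, src, sport, dst, dport) -> str:
        # The inbound packet must be the mirror image of a known flow.
        if (dst, dport, src, sport) in self.connections:
            return "allow"
        return "drop"              # unsolicited or injected packet
```

A packet claiming to be a reply, but not matching any flow the firewall saw leave, is dropped -- exactly what a stateless filter cannot do.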
Security Zones and the DMZ
Organizations rarely have a single “internal” network. Instead, they divide networks into zones with different trust levels and use firewalls to control traffic between zones.
The DMZ (demilitarized zone) is a network segment that hosts Internet-facing services like web servers, mail servers, or DNS servers. These systems must be accessible from the Internet, making them prime targets for attack. The DMZ isolates them from internal networks so that if they're compromised, attackers don't gain direct access to internal systems.
Typical firewall policies between zones are:
-
Internet → DMZ: Allow only specific services (like HTTPS) to specific servers
-
Internet → Internal: Block entirely (no direct inbound connections)
-
Internal → DMZ: allow only what applications and administrators need
-
DMZ → Internal: allow only essential connections
-
DMZ → Internet: allow only what the DMZ services require
Network segmentation
Network segmentation extends this concept inside the organization. Instead of one big internal network, you create separate segments for different functions or sensitivity levels. Examples include separate segments for web servers, application servers, database servers, HR systems, development environments, and guest WiFi.
Segmentation provides several benefits:
-
Limits lateral movement after a compromise (an attacker who breaches one segment can't freely access others)
-
Reduces the blast radius of an attack
-
Helps enforce least privilege (systems only have access to what they actually need)
-
Makes policies simpler within each segment
Third-Generation Firewalls: Deep Packet Inspection (DPI)
Deep Packet Inspection (DPI) examines application-layer data, not just IP and transport headers. This lets firewalls understand what applications are doing and make more intelligent decisions.
DPI capabilities include:
-
Filtering based on destination hostname: Even for encrypted connections, the initial TLS handshake includes the server name. DPI can block connections to specific websites
-
Validating protocols: Checking that HTTP requests follow the expected structure, that DNS responses match query IDs, and that SMTP uses valid commands
-
Filtering content: Detecting unwanted file types or active content in traffic
-
Detecting some types of malware: Matching patterns inside packets against known malware signatures
DPI must keep up with network speeds and can only buffer and inspect a limited portion of the traffic. Encrypted traffic cannot be inspected deeply unless the firewall performs TLS interception, which replaces server certificates and breaks true end-to-end encryption.
Deep Content Inspection (DCI)
Deep Content Inspection (DCI) goes beyond simple DPI by:
-
Reassembling flows that span multiple packets
-
Decoding encoded content, such as email attachments
-
Examining patterns across multiple connections
Because this is computationally expensive, DCI is usually applied only to traffic that has already been flagged or that matches specific criteria.
Intrusion Detection and Prevention Systems
An IDS (Intrusion Detection System) monitors traffic and raises alerts when it sees suspicious behavior. An IPS (Intrusion Prevention System) sits inline and blocks traffic it identifies as malicious before it reaches its destination. IDS is passive (monitor and alert), while IPS is active (monitor and block).
Detection techniques used by IDS/IPS systems:
Protocol-based detection checks that traffic strictly follows protocol specifications. This includes validating HTTP headers and message structure, ensuring DNS responses match outstanding queries, restricting SMTP to valid commands, and verifying SIP signaling messages. This helps block attacks that rely on malformed or unexpected input.
Signature-based detection compares traffic patterns against a database of known attack signatures. Each signature describes byte sequences or packet patterns that correspond to a specific attack. This is effective for known threats but must be updated frequently and cannot detect new, unknown (zero-day) attacks.
Anomaly-based detection learns what "normal" traffic looks like and flags deviations. Examples include port scanning activity, unusual protocol mixes, or abnormally high traffic volumes. The main challenge is avoiding false positives, since legitimate changes in behavior can look like anomalies.
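A minimal threshold-style anomaly detector makes the idea, and the false-positive tradeoff, concrete. This sketch flags values far above a learned baseline; the k-sigma rule is an assumption of the example, not a statement about any specific IDS product:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, k=3.0):
    """Flag observations more than k standard deviations above the
    mean of the learned baseline (e.g., requests per minute)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if x > mu + k * sigma]
```

Lowering k catches more attacks but flags more legitimate bursts (alert fatigue); raising it does the reverse. Real systems model many features at once, but face the same tradeoff.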
Challenges for IDS/IPS
Deploying IDS and IPS systems at scale introduces several practical challenges:
-
Volume: High traffic rates can overwhelm inspection capacity, and high false-positive rates lead to alert fatigue.
-
Encryption: Encrypted traffic cannot be inspected deeply without TLS interception.
-
Performance: Deep inspection and pattern matching are computationally expensive.
-
Evasion: Attackers use fragmentation, encoding tricks, and timing variations to avoid detection.
-
Evolving threats: New attacks require constant updates to signatures and detection models.
Next-Generation Firewalls (NGFW)
NGFWs combine multiple capabilities into one platform: stateful inspection, deep packet inspection, intrusion prevention, TLS inspection, and application and user awareness.
They identify applications by analyzing:
-
TLS metadata, such as the advertised server name
-
Distinctive protocol message patterns
-
Traffic characteristics such as packet timing and direction
-
Vendor-maintained DPI signatures for well-known applications
However, NGFW application identification can be evaded by traffic obfuscation (tunnels inside TLS, domain fronting, protocol mimicking).
The key capability is distinguishing applications that all use the same port. Traditional firewalls see "HTTPS traffic on port 443." NGFWs can distinguish Zoom from Dropbox from Netflix, even though all use HTTPS on port 443, and apply different policies to each.
Even so, NGFWs cannot see which local process on a host created the traffic. That level of visibility requires host-based firewalls.
Application Proxies
An application proxy sits between clients and servers as an intermediary. The client connects to the proxy, which then opens a separate connection to the real server. This means the proxy terminates one connection and initiates another.
A proxy can terminate TLS and inspect plaintext, but only if configured (and clients trust the proxy’s root certificate).
Application proxies can:
-
Enforce protocol correctness at the application layer
-
Filter or rewrite content before it reaches the client or server
-
Hide internal server addresses and structure
-
Provide a single point for logging and monitoring
The drawbacks are that proxies must understand each protocol in detail and can become performance bottlenecks if they handle large amounts of traffic.
Host-Based Firewalls
Host-based (or personal) firewalls run on individual systems instead of at the network perimeter. They integrate with the operating system so they can associate each network connection with a specific executable.
Host-based firewalls can:
-
Block unauthorized inbound connections
-
Restrict which applications may send or receive network traffic
-
Adjust rules depending on the current network (for example, home vs. public Wi-Fi)
-
Limit what malware can do if it runs on the system
Their limitations include:
-
If malware gains administrative privileges, it can disable or reconfigure the firewall.
-
Users may approve prompts from malicious applications just to make them go away.
-
Deep inspection on the host can introduce performance overhead.
-
They only see traffic to and from that one system.
Host-based firewalls work best as part of defense in depth, combined with network firewalls and other controls.
Zero Trust Architecture (ZTA)
The Problem: Deperimeterization
The traditional perimeter model assumed a clear boundary between trusted internal networks and untrusted external networks.
This model is breaking down for several reasons:
-
Mobile devices move constantly between trusted and untrusted networks
-
Cloud computing means applications and data run outside the datacenter, beyond the traditional perimeter
-
Insider threats demonstrate that being inside the network doesn't guarantee trustworthiness
-
Compromised systems allow lateral movement once an attacker breaches one internal system
-
Web applications mean browsers interact with countless external services, blurring the perimeter
The assumption that "inside equals safe" is no longer valid.
Zero Trust Principles
Zero trust abandons the idea that being "inside" a network is enough to be trusted. Instead, each access request is evaluated independently using identity, device state, and context, regardless of the source network.
Core principles of zero trust:
-
Never trust, always verify: Don't assume anything is safe just because it's inside the network
-
Least privilege access: Users and systems get only the specific access they need, not broad network access
-
Assume breach: Design systems assuming attackers are already inside, so limit what they can do
-
Verify explicitly: Use multiple signals (user identity, device health, location, behavior) to make access decisions
-
Microsegmentation: Divide the network into very small segments so even a successful compromise is contained
In practice, implementing zero trust is challenging. Ideally, security would be built into applications themselves, with applications authenticating users, verifying authorization, and encrypting data end-to-end. But most existing applications weren't designed this way.
As a practical approach, organizations often implement these ideas through Zero Trust Network Access (ZTNA) systems. These create VPN-like connections between authenticated devices, enforce strict access control policies, monitor and log all access, and use multi-factor authentication. Unlike traditional VPNs that often grant broad network access, ZTNA restricts users and devices to only the specific applications they're authorized to reach.
Microsegmentation in zero trust
Traditional segmentation divides the network into a handful of zones (DMZ, internal, guest). Microsegmentation divides it into very small segments, potentially one per application or per individual virtual machine.
In a microsegmented environment, a compromised web server can't reach database servers for other applications, a compromised workstation can't scan the network or move laterally, and each workload has precisely defined communication policies. This is often enabled by software-defined networking and virtualization technologies. In many deployments, microsegmentation is enforced by distributed firewalls inside the hypervisor or container runtime rather than by perimeter firewalls.
Microsegmentation supports zero trust by ensuring that even if an attacker gains initial access, they're contained within a very limited environment.
Defense in Depth
Modern network security relies on multiple layers of protection so that one failure does not compromise the entire environment. Key layers include:
-
Perimeter firewalls: Stateful inspection and DPI
-
DMZ: Public-facing services isolated from internal networks
-
Network segmentation: Internal networks divided by function and sensitivity
-
VPN: Remote access with multi-factor authentication
-
TLS: Encryption for all application traffic
-
Zero trust: Per-request authorization and device posture checks
-
Microsegmentation: Containment of critical workloads in tightly limited environments
-
Host-based firewalls: Endpoint-level control over application network access
-
IDS/IPS: Monitoring and blocking of known or suspicious activity
These layers work together to create resilience rather than relying on any single security mechanism.