
Exam 1 Study Guide

The one-hour study guide for exam 1

Paul Krzyzanowski

Disclaimer: This study guide attempts to touch upon the most important topics that may be covered on the exam but does not claim to necessarily cover everything that one needs to know for the exam. Finally, don't take the one hour time window in the title literally.

Last update: Mon Sep 29 18:22:05 2025

Foundations of Computer Security

Key terms

Computer security protects systems and data from unauthorized access, alteration, or destruction. The CIA Triad summarizes its core goals: confidentiality, integrity, and availability.

Confidentiality: Only authorized users can access information. Related terms:

Integrity: Ensures accuracy and trustworthiness of data and systems. Integrity includes data integrity (no unauthorized changes), origin integrity (verifying source), recipient integrity (ensuring correct destination), and system integrity (software/hardware functioning as intended). Authenticity is tied to integrity: verifying origin along with correctness.

Availability: Ensures systems and data are usable when needed.
Threats include DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks, hardware failures, and failed backups.

System Goals
Security systems aim at prevention, detection, and recovery. Prevention stops attacks, detection identifies them, and recovery restores systems after incidents. Forensics investigates what happened. Defense in Depth is a layered strategy that applies multiple overlapping defenses — technical, procedural, and sometimes physical. If one control fails, others still provide protection.

Policies, Mechanisms, and Assurance

Security Engineering and Risk Analysis
Security engineering balances cost, usability, and protection.
Risk analysis evaluates asset value, likelihood of attack, and potential costs. Tradeoffs matter: too much protection may reduce usability or become too costly.

Trusted Components and Boundaries

Human Factors
People remain the weakest link. Weak passwords, poor training, and misaligned incentives undermine protection.

Threats, Vulnerabilities, and Attacks

A vulnerability is a weakness in software, hardware, or configuration. Examples include buffer overflows, default passwords, and weak encryption. Hardware-level flaws include Spectre, Meltdown, and Rowhammer.

An exploit is a tool or technique that leverages a vulnerability. An attack is the execution of an exploit with malicious intent.

An attack vector is the pathway used to deliver an exploit, such as email, websites, USB drives, or open network ports. The attack surface is the total set of possible entry points.

Not all vulnerabilities are technical. Social engineering manipulates people into granting access or revealing information. Phishing, spear phishing, pretexting, and baiting are common techniques. (This will be explored in more detail later in the course.)

A threat is the possibility of an attack, and a threat actor is the adversary who may carry it out. One useful classification, described by Ross Anderson, distinguishes threats as disclosure (unauthorized access), deception (false data), disruption (interruptions), and usurpation (unauthorized control). Related concepts include snooping, modification, masquerading, repudiation, denial of receipt, and delay.

The threat matrix distinguishes between opportunistic vs. targeted attacks and unskilled vs. skilled attackers (from script kiddies to advanced persistent threats).

The Internet amplifies risk by enabling action at a distance, anonymity, asymmetric force, automation at scale, global reach, and lack of distinction between malicious and normal traffic.

A botnet is a network of compromised machines controlled via a command and control server, used for spam, phishing, cryptocurrency mining, credential stuffing, or DDoS attacks.

Adversaries and Cyber Warfare

Behind every attack is an adversary. Adversaries differ in their goals, risk tolerance, resources, and expertise.

Types include:

Economic incentives sustain underground markets where exploits, botnets, and stolen data are sold. Zero-day vulnerabilities can fetch high prices in closed broker markets. By contrast, bug bounty programs reward researchers for legal disclosure.

Advanced Persistent Threats (APTs) are advanced in their methods, persistent in maintaining access, and threatening in their ability to bypass defenses. They are typically state-backed and operate over months or years with stealth and patience.

Cyber warfare involves state-sponsored attacks on critical infrastructure and military systems.

Countermeasures include government and industry cooperation, international botnet takedowns, and intelligence sharing.

The implication is that cybersecurity affects all levels: national security, corporate security, and personal security. Cyber warfare blurs the line between peace and conflict. Attribution is difficult, and critical infrastructure, businesses, and individuals alike are potential targets.

Tracking Vulnerabilities and Risks

Why track vulnerabilities?
Early vulnerability reporting was inconsistent. The CVE system (1999) introduced standardized identifiers. CVSS added a way to score severity. Together, they form the backbone of how vulnerabilities are shared and prioritized.

CVE (Common Vulnerabilities and Exposures)
A unique identifier assigned to publicly disclosed vulnerabilities. Example: CVE-2021-44228 (Log4Shell).
CVSS (Common Vulnerability Scoring System)
A 0–10 scale for rating the severity of vulnerabilities, based on exploitability and impact. Scores are grouped into categories from Low to Critical.
Attribution challenges
Attackers obscure their origins, reuse tools, share infrastructure, and sometimes plant false flags. This makes it difficult to know with certainty who is behind an attack.
APT (Advanced Persistent Threat)
Well-funded, skilled groups (often state-backed) that carry out prolonged, targeted campaigns. Advanced = may use custom malware, zero-days, or sophisticated tradecraft; Persistent = long-term stealthy presence; Threat = ability to bypass defenses.

Foundations of Symmetric Cryptography

Kerckhoffs’s Principle. A system must remain secure even if everything about it is public except the key. Prefer standardized, openly analyzed algorithms; avoid secrecy of design.

Schneier’s Law. Anyone can design a cipher they cannot break; confidence comes only after broad, sustained public scrutiny.

Why cryptography. Protect against passive adversaries (eavesdropping) and active adversaries (injection, modification, replay). Core goals: confidentiality, integrity, authenticity.
Note: Non-repudiation is a protocol-level property we will cover later with public-key tools.

Core terms. Plaintext (original data), ciphertext (encrypted data), cipher (algorithm), key (secret parameter selecting one transformation), encryption / decryption (apply the cipher with a key), symmetric encryption (same secret key shared by sender and receiver).

Design stance. Open algorithms + secret keys; explicit threat assumptions; careful key management (generation, sharing, storage). Modes, authentication, and detailed cryptanalysis come later.

Historical pointer. Classical systems supplied two ingredients—substitution and transposition—that motivate Shannon’s later principles (confusion and diffusion).

Main takeaway. Use vetted primitives; be precise about which goal you’re achieving and against which attacker; most real mistakes come from sloppy assumptions and imprecise terminology.

Classical Ciphers (Substitution and Transposition)

Classical ciphers. Hand methods that illustrate substitution (change symbols) and transposition (reorder symbols). Alone, each leaks patterns.

Caesar cipher. Shift letters by a fixed amount; trivial to brute force and defeat via frequency counts.

Monoalphabetic substitution. Any fixed mapping; keyspace is large (\(26!\)) but frequency analysis breaks it. In English, E, T, A, O, I, N are common; Z, Q, X, J are rare; frequent digraphs TH, HE, IN and trigrams THE, AND stand out.

Frequency analysis (idea). Count the frequencies of single letters, digraphs (pairs of letters), trigraphs (triples of letters); align peaks and legal word shapes; refine with language constraints (double letters, common endings).
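As a quick illustration of the counting step, here is a small Python sketch; the ciphertext string is a toy Caesar-shifted example made up for this guide.

```python
from collections import Counter

def letter_frequencies(ciphertext):
    """Count single-letter frequencies (A-Z only; spaces and punctuation dropped)."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(letters).most_common()

def digraph_frequencies(ciphertext):
    """Count adjacent letter pairs; dropping spaces lets pairs straddle word boundaries."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(a + b for a, b in zip(letters, letters[1:])).most_common()

# In long English texts, the most common ciphertext letter usually maps to E,
# and peaks in the digraph counts tend to line up with TH, HE, IN.
sample = "WKLV LV D VHFUHW PHVVDJH"   # "THIS IS A SECRET MESSAGE" shifted by 3
print(letter_frequencies(sample)[:5])
print(digraph_frequencies(sample)[:5])
```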

Alberti’s cipher disk (1460s). Two concentric alphabets; rotate the inner disk to switch substitution alphabets periodically mid-message. Early polyalphabetic encryption that blurs single-letter statistics.

Vigenère cipher (16th c.). Repeat a keyword over the plaintext; each key letter selects a Caesar shift alphabet (via a table lookup of plaintext row vs. key column). The same plaintext letter can encrypt differently depending on its alignment with the key.

Breaking Vigenère (Babbage–Kasiski). Find repeated ciphertext fragments; measure gaps; factor to guess key length; split text into that many streams and solve each stream as a Caesar cipher by frequency.

Transposition ciphers. Preserve letters, change order.
Scytale: wrap a strip around a rod; write along the rod; read off unwrapped.
Columnar transposition: write plaintext in rows under a keyword; read columns in keyword order. Padding (often X) fills incomplete rows and can leak hints.

Some later ciphers, such as Playfair and ADFGVX, combine substitution and transposition.

Lessons. Combining substitution and transposition helps, but hand systems leave structure to exploit. These methods motivated mechanized cryptography (rotor machines) and later, theory-driven designs that deliver confusion and diffusion deliberately.

Mechanized Cryptography (Rotor Machines)

Goal. Show how machines automated polyalphabetic substitution and why complexity alone did not guarantee security.

Rotor machine. A stack of wired wheels applies a changing substitution on each keystroke; the rightmost rotor steps like an odometer, so the mapping evolves continuously.

Enigma workflow. Type a letter; the current rotor positions map it through three rotors, a reflector sends it back through the rotors along a different path, and a lamp shows the ciphertext letter. The same setup decrypts. The reflector means no letter ever encrypts to itself, which weakens the system: if you see an 'A' in the ciphertext, you know the plaintext letter at that position is not 'A'.

Keyspace (why it's huge). Choose and order 3 rotors from 5 (60 ways), choose 26 starting positions for each rotor (\(26^3=17{,}576\)), and set a plugboard that swaps letter pairs (about \(10^{14}\) possibilities). The combined space exceeds \(10^{23}\).

Strength vs. practice. Moving rotors suppressed simple frequency patterns, but design quirks and operating procedures leaked structure.

How it was broken. Analysts used cribs (predictable text like headers), the “no self-encryption” property, operator mistakes (repeating message keys, key reuse), traffic habits, captured codebooks, and electromechanical search (Polish bombas, British bombes designed by Turing and Welchman).

Takeaway. Mechanization increased complexity and throughput but not proof of security. Operational discipline and small structural properties mattered.

Successors. SIGABA (late 1930s, irregular stepping, not broken in WWII) and Fialka (1950s, 10 rotors for Cyrillic plus digits) pushed the rotor idea further until electronic cryptography replaced it.

Shannon, Perfect Secrecy, and Randomness

One-time pad (OTP).
Encrypt each byte of plaintext by taking a bitwise XOR with the corresponding byte of the key: \(C=P\oplus K\), decrypt with \(P=C\oplus K\). Perfect secrecy only if the key is truly random, at least as long as the message, independent of the plaintext, and never reused. Reuse gives \(C_1\oplus C_2=P_1\oplus P_2\).
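A minimal sketch of the XOR relationships above, using Python's secrets module for the key; the messages are toy values.

```python
import secrets

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(p1))    # random key as long as the message

c1 = xor_bytes(p1, key)               # C = P xor K
assert xor_bytes(c1, key) == p1       # P = C xor K

# Key reuse leaks structure: the key cancels out.
p2 = b"DEFEND AT DUSK"                # same length, encrypted with the SAME key (a mistake)
c2 = xor_bytes(p2, key)
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)   # C1 xor C2 == P1 xor P2
```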

Why OTPs are rarely used.
It is hard to generate long truly random keys, distribute and store keys (pads) that are as long as the message, avoid reuse, and keep sender/receiver perfectly synchronized.

Shannon’s contribution.
Shannon defined the two properties strong ciphers must deliver: confusion (each ciphertext bit depends on the key in a complex, nonlinear way, typically achieved with substitution) and diffusion (each plaintext bit influences many ciphertext bits, typically achieved with permutation/transposition).

Real ciphers apply these in simple rounds repeated many times.

Perfect vs. computational security.
Perfect secrecy: observing \(C\) reveals nothing about \(P\): \(\Pr[P=p\mid C=c]=\Pr[P=p]\). In practice, we aim for computational security: not theoretically perfect, but with no known efficient attack given realistic resources.

Random vs. pseudorandom

Random. Bits from a physical source we model as unpredictable (device noise, quantum effects).

Pseudorandom. Bits from a deterministic algorithm seeded with secret entropy. For cryptography we use a cryptographically secure pseudorandom number generator (CSPRNG): after seeding, outputs are computationally indistinguishable from true random and unpredictable without the seed.

Rule: get randomness from the OS CSPRNG (Linux getrandom(), Windows BCryptGenRandom); do not roll your own.
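For example, in Python the secrets module draws from the OS CSPRNG (via os.urandom):

```python
import secrets

key = secrets.token_bytes(32)        # 256 random bits, suitable for a symmetric key
nonce = secrets.token_hex(12)        # 96-bit nonce as a hex string
pin = secrets.randbelow(1_000_000)   # uniform integer in [0, 1_000_000)

# Do NOT use the `random` module (a non-cryptographic PRNG) for keys or nonces.
```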

Shannon entropy: why it matters

What it is. A measure of unpredictability. Think “average yes/no questions to determine \(X\).”
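Formally, for a random variable \(X\) with outcome probabilities \(p(x)\), \(H(X)=-\sum_x p(x)\log_2 p(x)\) bits. A fair coin has 1 bit of entropy; a password chosen uniformly from 8 lowercase letters has about \(8\log_2 26 \approx 37.6\) bits; a password chosen from a short list of common words has far less, no matter how long it looks.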

Why you care.

Key points

Modern Symmetric Cryptography

Big picture

Block ciphers

Substitution–permutation networks (SPNs)

Feistel networks

DES and 3DES (legacy but instructive)

AES (Rijndael profile)

Modes of operation (how to use blocks on messages)

AEAD: Authenticated Encryption with Associated Data — encrypts and returns a tag so the receiver can reject modified ciphertexts. We will cover integrity later; read AEAD as “encryption with a built-in tamper check.”
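A minimal AES-GCM sketch, assuming the third-party cryptography package is available; the key size, nonce handling, and associated data shown are illustrative.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                # 96-bit nonce; must never repeat under the same key
aad = b"header: message 42"           # associated data: authenticated but sent in the clear

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", aad)

# decrypt() verifies the tag first; a modified ciphertext, nonce, or AAD
# raises cryptography.exceptions.InvalidTag instead of returning data.
plaintext = aesgcm.decrypt(nonce, ciphertext, aad)
```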

Stream ciphers

ChaCha20 (with Poly1305)

Quick reference

Name | Type | Block/stream | Typical key sizes (bits) | Notes
AES-128/192/256 | SPN block cipher | 128-bit block | 128/192/256 | Default choice; wide hardware support
ChaCha20-Poly1305 | Stream + tag (AEAD) | Stream | 256 (cipher) | Fast on CPUs without AES-NI; embedded-friendly
DES | Feistel block | 64-bit block | 56 (effective) | Legacy; brute-forceable
3DES (EDE2/EDE3) | Feistel block | 64-bit block | 112/168 (nominal) | Legacy

Common pitfalls

Minimal equations to recognize

What to choose

Principles of Good Cryptosystems

Foundations

Security properties

Practical requirements

Keys (operational reminders)

Quick rules

Common pitfalls


Public Key Cryptography and Integrity

Public key cryptography

One-way functions

Trapdoor functions

Origins

Why not use public key for everything?

RSA and ECC

RSA basics

Elliptic Curve Cryptography (ECC)

Hash Functions

Hash function

Properties

SHA family

Applications

Entropy

Integrity Mechanisms

Message Authentication Codes (MACs)

AEAD (Authenticated Encryption with Associated Data)

Digital Signatures

Why hashes are involved

MACs vs. Signatures

Diffie-Hellman Key Exchange

Core question: How can two parties who have never met agree on a shared secret?

Basic process

Security: Relies on hardness of the discrete logarithm problem.
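A toy numeric sketch of the exchange, with deliberately tiny, insecure parameters chosen only to show the algebra; real deployments use standardized 2048-bit groups or elliptic curves.

```python
import secrets

p, g = 23, 5                       # public prime modulus and generator (toy values)

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # Alice sends g^a mod p
B = pow(g, b, p)                   # Bob sends g^b mod p

# Each side raises the other's public value to its own secret exponent.
k_alice = pow(B, a, p)             # (g^b)^a mod p
k_bob   = pow(A, b, p)             # (g^a)^b mod p
assert k_alice == k_bob            # both arrive at g^(ab) mod p

# An eavesdropper sees p, g, A, B but must solve a discrete log to recover a or b.
```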

Elliptic Curve Diffie-Hellman (ECDH)

Limitation: Provides secrecy but not authentication (man-in-the-middle possible).

Putting It All Together

Hybrid cryptosystem

Long-term keys

Ephemeral keys

Forward secrecy

Digital certificates (X.509v3)

Root certificates and trust stores

Certificate verification process

  1. Receive certificate and intermediates.

  2. Check validity dates.

  3. Build chain to a root.

  4. Verify signatures at each link.

  5. Ensure root is in trust store.

  6. Confirm hostname matches certificate subject.

  7. Optionally check revocation.

  8. Verify that the server controls the corresponding private key.
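In practice you rarely implement these steps yourself; TLS libraries perform them during the handshake. A minimal sketch using Python's standard ssl module, with example.com standing in for the server:

```python
import socket
import ssl

ctx = ssl.create_default_context()   # loads the platform trust store, enables hostname checking

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake builds and verifies the chain, checks validity dates and the
        # hostname, and fails with ssl.SSLCertVerificationError if anything is wrong.
        # (Revocation checking is not performed by default.)
        cert = tls.getpeercert()
        print(tls.version(), cert["subject"])
```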

Protocols in practice

Quantum Attacks and Post-Quantum Cryptography

Code Signing and Software Integrity

Code signing

How it works

Per-page hashes

TLS (Transport Layer Security)

TLS handshake goals

Steps (simplified)

  1. ClientHello / ServerHello: pick versions and cipher suites.

  2. Certificate exchange: server sends certificate; client verifies chain.

  3. Key exchange: usually ephemeral ECDH (Elliptic Curve Diffie-Hellman with public/private keys generated fresh for this session), which produces a shared secret.

  4. Key derivation: TLS uses HKDF to expand the shared secret into session keys.

  5. Handshake finished: each side proves it derived the same session key.

  6. Secure channel: traffic is encrypted and authenticated with a symmetric cipher (e.g., AES-GCM or ChaCha20-Poly1305).
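A sketch of step 4 using the cryptography package's HKDF; TLS 1.3 actually uses a more elaborate labeled HKDF schedule, so the salt and info strings here are only illustrative.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

shared_secret = b"\x01" * 32            # stand-in for the ECDH output from step 3

hkdf = HKDF(
    algorithm=hashes.SHA256(),
    length=64,                          # enough material for two 256-bit keys
    salt=b"handshake transcript hash",  # illustrative
    info=b"key expansion",              # illustrative context label
)
key_material = hkdf.derive(shared_secret)
client_write_key, server_write_key = key_material[:32], key_material[32:]
```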

Key properties

Reminders

Key Points


Authentication

Core Concepts

Identification, Authentication, Authorization
Identification is claiming an identity (such as giving a username). Authentication is proving that claim (such as showing a password or key). Authorization comes afterward and determines what actions the authenticated party may perform.

Pre-shared keys and Session keys
A pre-shared key is a long-term secret shared in advance between two parties and often used with a trusted server. A session key is temporary and unique to one session, which reduces exposure if the key is compromised.

Mutual authentication
Protocols often require both sides to prove they know the shared secret, not just one. This ensures that neither side is tricked by an impostor.

Trusted third party (Trent)
Some protocols rely on a trusted server that shares long-term keys with users and helps them establish session keys with each other.

Nonces, Timestamps, and Session identifiers
These are ways to prove freshness. A nonce is a random value used only once; echoing it back shows that a reply was generated after the request. A timestamp ties a message to the current time so stale messages can be rejected, but it requires roughly synchronized clocks. A session identifier binds all messages to one specific protocol run.

Replay attack
An attacker may try to reuse an old ticket or session key. Without freshness checks, the receiver cannot tell the difference and may accept stale credentials.

Symmetric Key Protocols

These use a trusted third party (Trent) to distribute or relay session keys. Alice asks Trent for a key to talk to Bob; Trent provides one (encrypted with Alice's secret key) along with a ticket for Bob. The ticket contains the same session key encrypted with Bob's key. It is meaningless to Alice, but she can forward it to Bob, who shows he is the legitimate recipient by decrypting it.

Needham–Schroeder
Adds nonces to confirm that a session key is fresh. Alice asks Trent for a key to talk to Bob and includes a random number (a nonce) in the request. Trent provides the session key along with a ticket for Bob. Alice knows the response is fresh when she decrypts it because it contains the same nonce. Weakness: if an old session key is ever exposed, an attacker can replay its ticket and impersonate Alice.

Denning–Sacco
Fixes the Needham–Schroeder replay weakness (an attacker reusing a ticket tied to an old, compromised session key) by putting timestamps in tickets so Bob can reject stale ones. Bob checks that the timestamp is recent before accepting. This blocks replays but requires synchronized clocks.

Otway–Rees
Uses a session identifier carried in every message. Nonces from Alice and Bob, plus the identifier, prove that the session is unique and fresh. No synchronized clocks are required.

Kerberos

Kerberos is a practical system that puts the Denning–Sacco idea into operation: it uses tickets combined with timestamps to prevent replay attacks. Instead of sending passwords across the network, the user authenticates once with a password-derived key to an Authentication Server (AS). The AS returns a ticket and a session key for talking to the Ticket Granting Server (TGS).

With the TGS ticket, the user can request service tickets for any server on the network. Each service ticket contains a session key for the client and the server to use. To prove freshness, the client also sends a timestamp encrypted with the session key. The server checks that the timestamp is recent and replies with T+1, which proves it was able to decrypt the ticket and extract the session key (mutual authentication).

This structure allows single sign-on: after authenticating once, the user can access many services without re-entering a password. Kerberos thus combines the principles of Denning–Sacco with a scalable ticketing infrastructure suitable for enterprise networks.
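A toy sketch of the timestamp exchange (not real Kerberos message formats), using Fernet from the cryptography package as a stand-in for encryption with the shared session key:

```python
import json
import time
from cryptography.fernet import Fernet

session_key = Fernet.generate_key()   # in Kerberos this key is issued by the KDC inside the ticket
f = Fernet(session_key)

# Client: prove freshness by encrypting the current time under the session key.
authenticator = f.encrypt(json.dumps({"client": "alice", "ts": time.time()}).encode())

# Server: decrypt, reject anything outside a small clock-skew window, reply with T+1.
msg = json.loads(f.decrypt(authenticator))
assert abs(time.time() - msg["ts"]) < 300          # e.g., 5 minutes of allowed skew
reply = f.encrypt(json.dumps({"ts": msg["ts"] + 1}).encode())
```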

Properties:

Public Key Authentication

Public key cryptography removes the need for Trent. If Alice knows Bob’s public key, she can create a session key and encrypt it for Bob. Certificates bind public keys to identities, solving the problem of trust. In TLS, which we studied earlier, the server’s certificate authenticates its key. An ephemeral exchange (e.g., Diffie-Hellman) creates fresh session keys that ensure forward secrecy: past sessions remain secure even if a long-term private key is later stolen.

Password-Based Protocols

PAP (Password Authentication Protocol)
The client sends the password directly to the server. This is insecure because the password can be intercepted and reused.

CHAP (Challenge–Handshake Authentication Protocol)
The server sends the client a random value called a nonce. The client combines this nonce with its password and returns a hash of the result. The server performs the same calculation to verify it matches. The password itself never crosses the network, and replay is prevented because each challenge is different. CHAP still depends on the password being strong enough to resist guessing. Unlike challenge-based one-time passwords (covered later), CHAP relies on a static password reused across logins, not a device that generates one-time codes.
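A rough sketch of the idea; real CHAP (RFC 1994) hashes an identifier, the secret, and the challenge with MD5, but SHA-256 is used here for illustration.

```python
import hashlib
import hmac
import secrets

password = b"correct horse battery staple"   # shared static secret

# Server: issue a fresh random challenge (nonce) for this login attempt.
challenge = secrets.token_bytes(16)

# Client: hash the challenge together with the password and return the digest.
response = hashlib.sha256(challenge + password).hexdigest()

# Server: repeat the computation with its copy of the password and compare.
expected = hashlib.sha256(challenge + password).hexdigest()
assert hmac.compare_digest(response, expected)   # constant-time comparison
```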

How Passwords Are Stored

Systems do not store passwords in plaintext. Instead, they store hashes: one-way transformations of the password. At login, the system hashes the candidate password and compares it to the stored value. This prevents an immediate leak of all passwords if a file is stolen, but the hashes are still vulnerable to offline guessing.

Weaknesses of Passwords

Even though systems store only password hashes rather than the passwords themselves, passwords remain vulnerable to several kinds of guessing and reuse attacks.

Online vs. offline guessing
Online guessing means trying passwords directly against a login prompt. Defenses include rate-limiting and account lockouts. Offline guessing happens if an attacker steals the password file; they can test guesses at unlimited speed with no lockouts, which is much more dangerous.

Dictionary attack
The attacker tries a large list of likely passwords. Online, this means repeated login attempts. Offline, it means hashing guesses and comparing against stored hashes.

Rainbow table attack
The attacker precomputes tables of password→hash pairs for a given hash function. A stolen password file can be cracked quickly with a lookup. Rainbow tables only work if hashes are unsalted.

Credential stuffing
Attackers take username/password pairs stolen from one site and try them against other services, hoping users have reused the same password.

Password spraying
Instead of many guesses against one account, attackers try one or a few common passwords across many accounts, avoiding lockouts.

Mitigations

Defenses focus on making stolen data less useful and slowing attackers.

Salts
A random value stored with each password hash. Two users with the same password will have different hashes if their salts differ. Salts make rainbow tables impractical.

Slow hashing functions
Deliberately expensive functions (bcrypt, scrypt, yescrypt) make each hash slower to compute, which hinders brute-force guessing but is barely noticeable for a user login.
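A sketch combining both mitigations, using hashlib.scrypt from the standard library; the cost parameters are illustrative and should be tuned to the deployment.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using a per-user random salt and a memory-hard KDF."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)   # work factor: roughly 16 MB of memory
    return salt, digest

def verify_password(password, salt, stored_digest):
    return hmac.compare_digest(hash_password(password, salt)[1], stored_digest)

salt, stored = hash_password("hunter2")
assert verify_password("hunter2", salt, stored)
assert not verify_password("hunter3", salt, stored)
```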

Transport protection
Passwords should always travel over encrypted channels (TLS) to prevent interception.

Multi-factor authentication
A second factor (OTP, push, passkey) prevents a stolen password from being enough on its own.

Password policies
Block weak or previously breached passwords, enforce rate limits, and monitor for unusual login activity.

One-Time Passwords (OTPs)

Static passwords can be replayed. One-time password systems generate a new password for each login. The goal is to never reuse a password.

There are four main forms of one-time passwords:

  1. Sequence-based: derived from repeatedly applying a one-way function.

  2. Challenge-based: derived from a random challenge from the server and a shared secret.

  3. Counter-based: derived from a shared counter and a secret.

  4. Time-based: derived from the current time slice and a secret.

S/Key (sequence-based)
S/Key creates a sequence of values by starting from a random seed and applying a one-way function to each successive value. The server stores only the last value in the sequence. At login, the user provides the previous value, which the server can verify by applying the function once. Each login consumes one value, moving backward through the sequence. Even if an attacker sees a value, they cannot work out earlier ones because the function cannot be reversed.
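A minimal sketch of the hash-chain idea using SHA-256 (the original S/Key used a different hash and encoded values as short words; those details are omitted here).

```python
import hashlib
import secrets

def H(x):
    return hashlib.sha256(x).digest()

# Setup: hash a random seed n times; the server stores only the final value.
n = 1000
value = secrets.token_bytes(16)
chain = [value]
for _ in range(n):
    value = H(value)
    chain.append(value)

server_stored = chain[-1]        # H^n(seed); the user keeps (or recomputes) earlier values

# Login: the user presents the previous value in the chain, H^(n-1)(seed).
candidate = chain[-2]
if H(candidate) == server_stored:
    server_stored = candidate    # accept and remember it for the next login
```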

Challenge-based OTP
The server issues a fresh challenge, and the user’s token or app computes a one-time response from the challenge and a secret. Replay is useless because each challenge is unique. This differs from CHAP: in CHAP the secret is a static password reused across logins, while in challenge-based OTPs the secret is stored in a device that proves possession.

HOTP (counter-based)
The server and client share a secret key and a counter. Each login uses the next counter value to generate a code with HMAC. Both sides must stay roughly in sync.

TOTP (time-based)
Like HOTP, but the counter is the current time slice (commonly 30 seconds). Codes expire quickly, making replays useless. Widely used in mobile authenticator apps (like Google Authenticator).
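A sketch of both schemes; the shared secret below is the RFC 4226 test key, and the 30-second step and 6-digit length are the common defaults.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC the counter, dynamically truncate, reduce to N digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret, step=30):
    """RFC 6238: the counter is just the current time slice."""
    return hotp(secret, int(time.time()) // step)

shared_secret = b"12345678901234567890"
print(hotp(shared_secret, 0))    # "755224" (first RFC 4226 test vector)
print(totp(shared_secret))       # changes every 30 seconds
```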

Multi-Factor Authentication (MFA)

MFA requires factors from different categories: something you know (password), something you have (token or phone), and something you are (biometrics).

One-time passwords (OTPs)
Codes generated by a device or app that prove you have a specific token or phone. They are short-lived and cannot be reused, making them a common possession factor.

Push notifications
Send a prompt to the user’s phone but are vulnerable to MFA fatigue, where attackers flood the user with requests hoping for one accidental approval.

Number matching
Improves push MFA by requiring the user to type in a number shown on the login screen, linking the approval to that specific login attempt.


Passkeys

MFA improves security but still relies on passwords as one factor, and phishing can sometimes bypass second factors. Passkeys eliminate passwords entirely. Each device generates a per-site key pair: the private key stays on the device, and the public key is stored with the service. At login, the device signs a challenge with the private key. Passkeys are unique per site, cannot be phished, and can be synchronized across devices.
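A sketch of the underlying challenge-signature flow using Ed25519 from the cryptography package; real passkeys (WebAuthn) add origin binding, user verification, and structured challenge data.

```python
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device creates a per-site key pair and sends only the public key.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Login: the service sends a fresh challenge; the device signs it with the private key.
challenge = secrets.token_bytes(32)
signature = device_key.sign(challenge)

# Service: check the signature against the stored public key.
try:
    registered_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```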

Adversary-in-the-Middle Attacks

An adversary-in-the-middle sits between a client and a server, relaying and possibly altering traffic.

Defenses:


Biometrics TBD