Exam 3 study guide

The one-hour study guide for exam 3

Paul Krzyzanowski

Latest update: Wed Nov 15 09:37:45 PST 2017

Disclaimer: This study guide attempts to touch upon the most important topics that may be covered on the exam but does not claim to necessarily cover everything that one needs to know for the exam. Finally, don't take the one hour time window in the title literally.

Biometric authentication

Biometric authentication is the process of identifying a person based on their physical or behavioral characteristics as opposed to their ability to remember a password or their possession of some device. It is the third of the three factors of authentication: something you know, something you have, and something you are.

It is also fundamentally different from the other two factors because it does not deal with data that lends itself to exact comparisons. For instance, sensing the same fingerprint several times is not likely to give you identical results each time. The orientation may differ, the pressure and angle of the finger may cause some parts of the fingerprint to appear in one sample but not the other, and dirt, oil, and humidity may alter the image. Biometric authentication relies on statistical pattern recognition: we establish thresholds to determine whether two patterns are close enough to accept as being the same.

The false accept rate (FAR) is the rate at which a pair of different biometric samples (e.g., fingerprints from two different people) is accepted as a match. The false reject rate (FRR) is the rate at which a pair of samples from the same person is rejected as a match. Based on the properties of the biometric data, the sensor, the feature extraction algorithms, and the comparison algorithms, each biometric device has a characteristic ROC (Receiver Operating Characteristic) curve. The name derives from early work on RADAR and maps the false accept rate versus the false reject rate for a given biometric authentication device. For password authentication, the “curve” would be a single point at the origin: no false accepts and no false rejects. For biometric authentication, which is based on thresholds that determine if the match is “close enough”, we have a curve.

At one end of the curve, we can have an incredibly low false accept rate (FAR). This is good as it means that we will not have false matches: the enemy stays out. However, it also means that the false reject rate (FRR) will be very high. If you think of a fingerprint biometric, the stringent comparison needed to yield a low FAR means that the algorithm will not be forgiving to a speck of dirt, light pressure, or a finger held at a different angle. We get high security at the expense of inconveniencing legitimate users, who may have to present their finger over and over again for sensing, hoping that it will eventually be accepted.

At the other end of the curve, we have a very low false reject rate (FRR). This is good since it provides convenience to legitimate users. Their biometric data will likely be accepted as legitimate and they will not have to deal with the frustration of re-sensing their biometric, hoping that their finger is clean, not too greasy, not too dry, and pressed at the right angle with the correct pressure. The trade-off is that it’s more likely that another person’s biometric data will be considered close enough as well and accepted as legitimate.

There are numerous biological components that can be measured. They include fingerprints, irises, blood vessels on the retina, hand geometry, facial geometry, facial thermographs, and many others. Data such as signatures and voice can also be used, but these often vary significantly with one’s state of mind (your voice changes if you’re tired, ill, or angry). They are behavioral systems rather than purely physical systems, such as your iris patterns, length of your fingers, or fingerprints, and tend to have lower recognition rates. Other behavioral biometrics include keystroke dynamics, mouse use characteristics, gait analysis, and even cognitive tests.

Regardless of which biometric is used, the important thing to do in order to make it useful for authentication is to identify the elements that make it different. Most of us have swirls on our fingers. What makes fingerprints differ from finger to finger are the variations in those swirls: ridge endings, bifurcations, enclosures, and other elements beyond that of a gently sloping curve. These features are called minutiae. The presence of minutiae, their relative distances from each other, and their relative positions allow us to express the unique aspects of a fingerprint as a relatively compact stream of bits rather than a bitmap.

Two important elements of biometrics are robustness and distinctiveness. Robustness means that the biometric data will not change much over time. Your fingerprints will look mostly the same next year and the year after. Your fingers might grow fatter (or thinner) over the years and at some point in the future, you might need to re-register your hand geometry data.

Distinctiveness relates to the differences in the biometric pattern among the population. Distinctiveness is also affected by the precision of a sensor. A finger length sensor will not measure your finger length to the nanometer, so there will be quantized values in the measured data. Moreover, the measurements will need to account for normal hand swelling and shrinking based on temperature and humidity, making the data even less precise. Accounting for these factors, approximately one in a hundred people may have hand measurements similar to yours. A fingerprint sensor may typically detect 40–60 distinct features that can be used for comparing with other sensed fingerprints. An iris scan, on the other hand, will often capture over 250 distinct features, making it far more distinctive and more likely to identify a unique individual.

Some sensed data is difficult to normalize. Here, normalization refers to the ability to align different sensed data to some common orientation. For instance, identical fingers might be presented at different angles to the sensors. The comparison algorithm will have to account for possible rotation when comparing the two patterns. The inability to normalize data makes it difficult to perform efficient searches. There is no good way to search for a specific fingerprint short of performing a comparison against each stored pattern. Data such as iris scans lends itself to normalization, making it easier to find potentially matching patterns without going through an exhaustive search.

In general, the difficulty of normalization and the fact that no two measurements are ever likely to be the same makes biometric data not a good choice for identification. It is extremely difficult, for example, to construct a system that will store hundreds of thousands of fingerprints and allow the user to identify and authenticate themselves by presenting their finger. Such a system will require an exhaustive search through the stored data and each comparison will itself be time consuming as it will not be a simple bit-by-bit match test. Secondly, fingerprint data is not distinct enough for a population of that size. A more realistic system will use biometrics for verification and have users identify themselves through some other means (e.g., type their login name) and then present their biometric data. In this case, the software will only have to compare the pattern associated with that user.

The biometric authentication process comprises several steps:

  1. Enrollment. Before any authentication can be performed, the system needs to have stored biometric data of the user that it can use for comparison. The user will have to present the data to the sensor, distinctive features need to be extracted, and the resulting pattern stored. The system may also validate if the sensed data is of sufficiently high quality or ask the user to repeat the process several times to ensure consistency in the data.

  2. Sensing. The biological component needs to be measured by presenting it to a sensor, a dedicated piece of hardware that can capture the data (e.g., a camera for iris recognition, a capacitive fingerprint sensor). The sensor captures the raw data (e.g., an image).

  3. Feature extraction. This is a signal processing phase where the interesting and distinctive components are extracted from the raw sensed data to create a biometric pattern that can be used for matching. This process involves removing signal noise, discarding sensed data that is not distinctive or not useful for comparisons, and determining whether the resulting values are of sufficiently good quality that it makes sense to use them for comparison. A barely-sensed fingerprint, for instance, may not present enough minutia to be considered useful.

  4. Pattern matching. The extracted sample is now compared to the stored sample that was obtained during the enrollment phase. Features that match closely will have small distances. Given variations in measurements, it is unlikely that the distance will be zero, which would indicate a perfect match.

  5. Decision. The “distance” between the sensed and stored samples is now evaluated to decide if the match is close enough. The threshold used for this decision determines whether the system favors more false rejects or more false accepts.
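As a rough illustration of the last two steps, the decision can be thought of as comparing a distance between feature vectors to a threshold. The sketch below uses made-up feature vectors and a simple Euclidean distance; real matchers use their own scoring functions.

    import math

    def distance(stored, sensed):
        # Euclidean distance between two feature vectors (e.g., derived from minutiae).
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(stored, sensed)))

    def accept(stored, sensed, threshold):
        # A small threshold lowers the false accept rate (FAR) but raises the
        # false reject rate (FRR); a large threshold does the opposite.
        return distance(stored, sensed) <= threshold

    enrolled = [0.31, 0.72, 0.55, 0.90]      # pattern stored at enrollment
    sample   = [0.30, 0.74, 0.52, 0.93]      # the same finger, sensed again
    print(accept(enrolled, sample, threshold=0.10))   # True: close enough
    print(accept(enrolled, sample, threshold=0.01))   # False: stringent threshold rejects it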

Security implications

There are several security issues that relate to biometric authentication.

Sensing
Unlike passwords or encryption keys, biometric systems require sensors to gather the data. The sensor, its connectors, the software that processes sensed data, and the entire software stack around it (operating system, firmware, libraries) must all be trusted and tamper-proof.
Secure communication and storage
The communication path after the data is captured and sensed must also be secure so that attackers will have no ability to replace a stored biometric pattern with one of their own.
Liveness
Much biometric data can be forged. Gummy fingerprints can copy real fingerprints, pictures of faces or eyes can fool cameras into believing they are looking at a real person, and recordings can be used for voice-based authentication systems.
Thresholds
Since biometric data relies on “close-enough” matches, you can never be sure of a certain match. You will need to determine what threshold is good enough and hope that you do not annoy legitimate users too much or make it too easy for the enemy to get authenticated.
Lack of compartmentalization
You have a finite set of biological characteristics to present. Fingerprints and iris scans are the most popular biometric sources. Unlike passwords, where you can have a distinct password for each service, you cannot present distinct biometric data to each service.
Theft of biometric data
If someone steals your password, you can create a new one. If someone steals your fingerprint, you have nine fingerprints left and then none. If someone gets a picture of your iris, you have one more left. Once biometric data is compromised, it remains compromised.

Detecting humans

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is not a technique to authenticate users but rather a technique to identify whether a system is interacting with a human being or with automated software. The idea behind it is that humans can recognize highly distorted characters far better than character recognition software can.

CAPTCHA presents an image containing a string of distorted text and asks the user to identify the text. As optical character recognition (OCR) technology improved, this text had to be ever more distorted and often reached a point where legitimate users struggled to decode it. CAPTCHAs were designed to thwart scripts that would, for example, sign up for thousands of logins on a service or buy all tickets to an event. CAPTCHAs do this by having a human solve the CAPTCHA before proceeding.

This was not always successful. CAPTCHAs were susceptible to a form of man-in-the-middle attack where the distorted image is presented to low-cost (or free) human workers whose job is to decipher CAPTCHAs. These are called CAPTCHA farms. Ever-improving OCR technology also made text-based CAPTCHAs susceptible to attack. By 2014, Google found that it could use AI techniques to crack CAPTCHAs with 99.8% accuracy.

An alternative to text-based CAPTCHAs is CAPTCHAs that involve image recognition, such as “select all images that have mountains in them” or “select all squares in an image that have street signs.” A recent variation of CAPTCHA is Google’s No CAPTCHA reCAPTCHA. This simply asks users to check a box stating “I’m not a robot.” The JavaScript behind the scenes, however, does several things:

  • It contacts the Google server to perform an “advanced risk analysis”. What this does is not defined but the act of contacting the server causes the web browser to send Google cookies to the server. If you’re logged in to a Google account, your identity is now known to the server via a cookie and the server can look at your past history to determine if you are a threat.

  • By contacting the Google server, the server can also check where the request came from and compare it with its list of IP addresses known to host bots.

  • The JavaScript code monitors the user’s engagement with the CAPTCHA, measuring mouse movements, scroll bar movement, acceleration, and the precise location of clicks. If everything is too perfect then the software will assume it is not dealing with a human being.

The very latest variation of this system is the invisible reCAPTCHA. The user doesn’t even see the checkbox: it is positioned tens of thousands of pixels above the page origin, so the JavaScript code runs but the reCAPTCHA frame is out of view. If the server-based risk analysis does not get sufficient information from the Google cookies, it relocates the reCAPTCHA frame back down to a visible part of the screen.

Finally, if the risk analysis part of the system fails, the software presents a CAPTCHA (recognize text on an image) or, for mobile users, a quiz to find matching images.

Network Security

The Internet was designed to support the interconnection of multiple networks, each of which may use different underlying hardware. The Internet Protocol, IP, is a logical network built on top of these physical networks. IP assumes that the underlying networks do not provide reliable communication. It is up to higher layers of the IP software stack (either TCP or the application) to detect lost packets. IP networks are connected by routers, which are computing elements that are connected to multiple networks. They receive packets on one network and forward them onto another network to get them to their destination. A packet from your computer will often flow through multiple networks and multiple routers that you know nothing about on its way to its destination. This poses security risks since you do not know the trustworthiness of these elements.

Networking protocol stacks are usually described using the OSI layered model. For IP, the layers are:

  1. Physical. Represents the actual hardware.

  2. Data Link. The protocol for the local network, typically Ethernet (802.3) or Wi-Fi (802.11).

  3. Network. The protocol for creating a single logical network and routing packets across physical networks. The Internet Protocol (IP) is responsible for this.

  4. Transport. The transport layer is responsible for creating logical software endpoints (ports) so that one application can send a stream of data to another. TCP uses sequence numbers, acknowledgement numbers, and retransmission to provide applications with a reliable, connection-oriented, bidirectional communication channel. UDP does not provide reliability and simply sends a packet to a given destination host and port.

Higher layers of the protocol stack are handled by applications and the libraries they use.

Data link layer

In an Ethernet network, the data link layer is handled by Ethernet transceivers and Ethernet switches. Security was not a consideration in the design of this layer and several fundamental attacks exist at this layer.

Switch CAM table overflow

Ethernet frames are delivered based on their 48-bit MAC addresses. IP addresses are meaningless to Ethernet transceivers and switches since those belong to higher layers of the network stack. Ethernet was originally designed as a bus-based shared network: any system could see all the traffic on the Ethernet. This resulted in increased congestion as more hosts were added to the local network. Ethernet switches alleviated this problem by using a dedicated cable between each host and the switch. The switch routes a frame only to the Ethernet port that is connected to the system with the desired destination address.

Unlike routers, switches are not programmed with routes. Instead, they learn them by looking at the source MAC addresses of incoming ethernet frames. An incoming frame indicates that the system with that source address is connected to that switch port.

To implement this, a switch contains a switch table (a MAC address table) with entries for known MAC addresses and the interface each one is on. When a frame arrives for some destination address D, the switch looks up D in the switch table. If D is in the table and associated with a different port, the switch forwards the frame to that interface, queueing it if necessary. If D is not found in the table, the switch assumes it has not yet learned which port that address is on, so it forwards the frame to all interfaces.
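A rough sketch of this learning-and-forwarding logic, with a deliberately tiny table so that the eviction behavior exploited by the CAM overflow attack (next section) is visible. The data structures are illustrative; a real switch does this in hardware.

    switch_table = {}        # MAC address -> port (stands in for the CAM table)
    TABLE_CAPACITY = 4       # deliberately tiny, to show what happens when it fills

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        # Learn: the source address tells the switch which port that host is on.
        if src_mac not in switch_table and len(switch_table) >= TABLE_CAPACITY:
            switch_table.pop(next(iter(switch_table)))   # evict an older entry
        switch_table[src_mac] = in_port

        # Forward: use the table if the destination is known, otherwise flood.
        out_port = switch_table.get(dst_mac)
        if out_port is None:
            return [p for p in all_ports if p != in_port]   # flood to all other ports
        if out_port == in_port:
            return []        # destination is on the same segment; nothing to forward
        return [out_port]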

The switch table has to support extremely rapid lookups. For this reason, they are implemented using content addressable memory (CAM, also known as associative memory). CAM is expensive and switch tables are fixed-size and not huge. The switch will delete less-frequently used entries if it needs to make room for new ones.

The CAM table overflow attack exploits the size limit of this CAM-based switch table. The attacker sends bogus Ethernet frames with random source MAC addresses. Each newly-received address will displace an entry in the switch table, eventually filling up the table. With the CAM table full, legitimate traffic will be broadcast to all links, and a host on any port can now see all traffic. The CAM table overflow attack effectively turns a switch into a hub.

Countermeasures for CAM table attacks require the use of managed switches that support port security. These switches allow you to limit the number of addresses per switch port.

VLAN hopping

One use of local area networks is to isolate broadcast traffic from other groups of systems. Related users can be connected to a single LAN. However, users may move around within an office, and tying LAN membership to physical wiring can leave switches used inefficiently. Virtual Local Area Networks (VLANs) create multiple virtual LANs over a single physical switch infrastructure. The network administrator can assign each port on a switch to a specific VLAN. Each VLAN is a separate broadcast domain so that each VLAN acts like a truly independent local area network.

Switches may be extended by cascading them with other switches: an Ethernet cable from one switch simply connects to another switch. With VLANs, the connection between switches forms a VLAN trunk and carries traffic from all VLANs to the other switch. An extended Ethernet frame format is used on this link since each frame needs to be identified with the VLAN from which it originated.

A VLAN hopping attack employs switch spoofing: an attacker’s computer identifies itself as a switch with a trunk connection. It then receives traffic on all VLANs.

Defending against this attack requires a managed switch where an administrator can disable unused ports and associate them with some unused VLAN. Auto-trunking also needs to be disabled so that a port cannot turn itself into a trunk; instead, trunk ports need to be configured explicitly.

ARP cache poisoning

Recall that IP is a logical network that sits on top of physical networks. If we are on an Ethernet network and need to send an IP datagram, that IP datagram needs to be encapsulated in an Ethernet frame. The Ethernet frame needs to contain a destination MAC address that corresponds to the destination machine (or router). To do this, we need to figure out what MAC address corresponds to a given IP address.

There is no relationship between an IP address and an Ethernet MAC address. To find the MAC address for a given IP address, a system uses the Address Resolution Protocol, ARP. The source system creates an Ethernet frame that contains an ARP message with the IP address it wants to query. This ARP message is then broadcast; all network adapters receive it. If a system receives this message and notes that its own IP address matches the address in the query, it responds to the ARP message. The response identifies the MAC address of the system that owns that IP address. To avoid the overhead of doing this query each time the system needs to use the IP address, the operating system maintains an ARP cache that stores recently used addresses. Moreover, hosts cache any ARP replies they see, even if they did not originate the request. This is done on the assumption that many systems use the same set of IP addresses and the overhead of making an ARP query is substantial.

Note that there is no way to authenticate that a response is legitimate. The asking host does not have any idea of what MAC address is associated with the IP address. Hence it cannot tell whether a host that really has that IP address is responding or an imposter.

An ARP cache poisoning attack is one where an attacker creates fake ARP replies that contain the attacker’s MAC address and the target’s IP address. This will direct any traffic meant for the target to the attacker. It enables man-in-the-middle or denial-of-service attacks since the real host will no longer receive the IP traffic.
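The core of the problem is that an ARP cache will happily record any reply it sees. A minimal sketch (the data structures are hypothetical; real implementations live in the kernel):

    arp_cache = {}   # IP address -> MAC address

    def handle_arp_reply(sender_ip, sender_mac):
        # No authentication: whichever reply arrives is cached, and a later
        # forged reply simply overwrites the legitimate entry.
        arp_cache[sender_ip] = sender_mac

    def resolve(ip):
        if ip in arp_cache:
            return arp_cache[ip]
        # Otherwise the host would broadcast "who has <ip>?" and wait for a reply.
        return None

    handle_arp_reply("10.0.0.1", "aa:bb:cc:dd:ee:01")   # reply from the real gateway
    handle_arp_reply("10.0.0.1", "de:ad:be:ef:00:01")   # forged reply from the attacker
    print(resolve("10.0.0.1"))                          # traffic for 10.0.0.1 now goes to the attacker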

There are several defenses against ARP cache poisoning. One defense is to ignore replies that are not associated with requests. However, you need to hope that the reply you get is a legitimate one since an attacker may respond more quickly or perhaps launch a denial of service attack against the legitimate host and then respond.

Another defense is to give up on ARP broadcasts and simply use static ARP entries. This works but can be an administrative nightmare, since someone will have to maintain the list of IP-to-MAC address mappings and update it whenever machines are added to the environment.

Finally, one can enable something called Dynamic ARP Inspection. This essentially builds a local ARP table by using DHCP Snooping data as well as static ARP entries. Any ARP responses will be validated against DHCP Snooping database information or static ARP entries.

DHCP spoofing

When a computer joins a network, it needs to be configured for that network. With DHCP, the Dynamic Host Configuration Protocol, the new computer broadcasts a DHCP Discover message. A DHCP server on the network picks up this request and sends back a response that contains configuration information for the new computer:

  • IP address
  • Subnet mask
  • Default router (gateway)
  • DNS servers
  • Lease time

As with ARP, we have the problem that a computer does not know where to go for the information and has to rely on a broadcast query, hoping that it gets a legitimate response. With DHCP spoofing, any system can pretend to be a DHCP server and spoof responses that would normally be sent by a valid DHCP server. This imposter can provide the new system with a legitimate IP address but with false addresses for the gateway (default router) and DNS servers. The result is that the imposter can field DNS requests (which convert domain names to IP addresses) and can also redirect any traffic from the new machine that leaves the local area network.

As with ARP cache poisoning, the attacker may launch a denial of service attack against the legitimate DHCP server to keep it from responding or at least delay its responses. If the legitimate server sends its response after the imposter, the new host will simply ignore the response.

There aren’t many defenses against DHCP spoofing. Some switches (such as those by Cisco and Juniper) support DHCP snooping. This allows an administrator to configure specific switch ports as “trusted” or “untrusted.” Only specific machines, those on trusted ports, will be permitted to send DHCP responses; the switch will filter out DHCP responses from untrusted ports. The switch will also use DHCP data to track client behavior, ensuring that hosts use only the IP addresses assigned to them and do not fake ARP responses.

Network (IP) layer

Source IP address authentication

One really fundamental problem with IP communication is that there is absolutely no source IP address authentication. Clients are expected to use their own source IP address but anybody can override this by using raw sockets.

This enables one to forge messages to appear that they come from another system. Any software that authenticates requests based on their IP addresses will be at risk.

Anonymous denial of service

This technique can be used for anonymous denial-of-service attacks. If a system sends a packet that generates an error, the error will be sent back to the source address that was forged in the query. For example, a packet sent with a small time-to-live (TTL) value will cause the router at which the TTL reaches zero to respond with an ICMP Time to Live Exceeded message. Error responses will be sent to the forged source IP address, and it is possible to send a vast number of such messages from many machines across many networks and have all the errors target a single system.

Transport layer (UDP, TCP)

UDP and TCP are transport layer protocols that allow applications to establish communication channels with each other. Each endpoint of such a channel is identified by a port number (a 16-bit integer that has nothing to do with Ethernet switch ports). Hence, both TCP and UDP packets contain not only source and destination addresses but also source and destination ports.

UDP, the User Datagram Protocol, is stateless, connectionless, and unreliable.

As we saw with IP source address forgery, anybody can send UDP messages with forged source IP addresses.

TCP, the Transmission Control Protocol, is stateful, connection-oriented, and reliable. Every packet contains a sequence number (byte offset) and the receiver assembles received packets into their correct order. The receiver also sends acknowledgements and any missing packets are retransmitted.

TCP needs to establish state at both endpoints. It does this through a connection setup process that comprises a three-way handshake.

  1. Client sends a SYN segment. The client selects a random initial sequence number (client_isn).

  2. Server sends a SYN/ACK. The server receives the SYN segment and knows that a client wants to connect to it. It allocates memory to store connection state and to hold out-of-order segments. The server generates an initial sequence number (server_isn) for its side of the data stream; this is also a random number. The response also contains an acknowledgement number with the value client_isn+1.

  3. Client sends a final acknowledgement. The client acknowledges receipt of the SYN/ACK message by sending a final ACK message that contains an acknowledgement number of server_isn+1.

Note that the initial sequence numbers are random rather than starting at zero, as one might expect. There are two reasons for this. The primary reason is that message delivery times on an IP network are unpredictable, and it is possible for a closed connection to receive delayed messages, confusing the server about the state of the connection. The security-sensitive reason is that if sequence numbers were predictable, it would be easy to launch a sequence number attack in which an attacker guesses likely sequence numbers on a connection and sends masqueraded packets that appear to be part of the data stream. Random sequence numbers do not make the problem go away but make the attack more challenging, particularly if the attacker does not have the ability to see traffic on the network.

SYN flooding

In the second step of the three-way handshake, the server is informed that a client would like to connect and allocates memory to manage this connection. Given that kernel memory is a finite resource, the operating system will allocate only a finite number of TCP buffers. After that, it will refuse to accept any new connections.

In the SYN flooding attack, the attacker sends a large number of SYN segments to the target. These SYN messages contain a forged source address of an unreachable host, so the target’s SYN/ACK responses never get delivered anywhere. The handshake is never completed but the operating system has allocated resources for this connection. Depending on the operating system, it might be a minute or much longer before it times out on waiting for a response and cleans up these pending connections. Meanwhile, all TCP buffers have been allocated and the operating system refuses to accept any more TCP connections, even if they are from a legitimate source.

SYN flooding attacks cannot be prevented completely. One way of lessening their impact is the use of SYN cookies. With SYN cookies, the server does not allocate buffers and TCP state when a SYN segment is received. It responds with a SYN/ACK and creates an initial sequence number that is a hash of several known values:

hash(src_addr, dest_addr, src_port, dest_port, SECRET)

The “SECRET” is not shared with anyone; it is local to the operating system. When (if) the final ACK comes back from the client, the server needs to validate the acknowledgement number. Normally this would require comparing the number to the stored server initial sequence number plus 1. We did not allocate space to store this value, but we can recompute it by regenerating the hash, adding one, and comparing the result to the acknowledgement number in the message. If it is valid, the kernel believes it was not the victim of a SYN flooding attack and allocates the resources necessary for managing the connection.
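A minimal sketch of the idea in Python. Real implementations (such as Linux’s) also encode a timestamp and MSS in the cookie; the hash inputs and secret below are simplified.

    import hashlib, hmac

    SECRET = b"local-only kernel secret"        # never shared with anyone

    def syn_cookie(src_addr, dst_addr, src_port, dst_port):
        # Initial sequence number derived from the connection 4-tuple and the secret.
        msg = f"{src_addr}|{dst_addr}|{src_port}|{dst_port}".encode()
        digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big")          # 32-bit sequence number

    def ack_is_valid(src_addr, dst_addr, src_port, dst_port, ack_number):
        # On the final ACK, recompute the cookie instead of looking up stored state.
        expected = (syn_cookie(src_addr, dst_addr, src_port, dst_port) + 1) % 2**32
        return ack_number == expected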

TCP Reset

A somewhat simple attack is to send a RESET (RST) segment to an open TCP socket. If the sequence number is acceptable to the receiver, the connection will be closed. Hence, the tricky part is getting an acceptable sequence number to make it look like the RESET is part of the genuine message stream.

Sequence numbers are 32-bit values. The chance of picking the correct sequence number is tiny: 1 in 2^32, or approximately one in four billion. However, many systems will accept a large range of sequence numbers that are approximately in the correct range (accounting for the fact that packets can arrive out of order, so they should not be rejected just because the sequence number is not exactly the next one expected). This reduces the search space tremendously, and the attacker can send a flood of RST packets with varying sequence numbers and a forged source address until the connection is broken.
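Back-of-the-envelope arithmetic (the window size below is an assumption; the actual value depends on the receiver's advertised window):

    sequence_space = 2**32        # possible sequence numbers
    window = 65_535               # assumed range of "close enough" sequence numbers

    exact_guesses = sequence_space                 # ~4.3 billion if an exact match were required
    windowed_guesses = sequence_space // window    # ~65,537 evenly spaced guesses cover the space
    print(exact_guesses, windowed_guesses)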

Routing protocols

The Internet was designed as an interconnection of multiple independently-managed networks, each of which may use different hardware. Routers connect local area networks as well as wide area networks together. A collection of contiguous IP addresses (identified by their most significant bits, called a prefix), together with the underlying routers and network infrastructure, all managed as one administrative entity, is called an Autonomous System (AS). For example, the part of the Internet managed by Comcast constitutes an autonomous system. The networks managed by Verizon happen to constitute a few autonomous systems.

The routers within an autonomous system have to share routing information so that those routers can route packets efficiently toward their destination. An Interior Gateway Protocol is used within an autonomous system. The most common is OSPF, Open Shortest Path First. While security issues exist within an autonomous system, we will turn our attention to the sharing of information between autonomous systems. This uses an Exterior Gateway Protocol (EGP) called the Border Gateway Protocol, or BGP. With BGP, each autonomous system exchanges routing and reachability information with the autonomous systems to which it connects. For example, Comcast can tell Verizon what parts of the Internet it can reach. BGP uses a path vector routing algorithm (a variant of distance vector) to enable the routers in an autonomous system to determine the most efficient path to use to send packets that are destined for other networks. Unless an administrator explicitly configures a route, BGP will pick the shortest route.

So what are the security problems with BGP? Edge routers use BGP to send route advertisements to routers they are connected to in neighboring autonomous systems. An advertisement is a list of IP address prefixes they can reach (shorter prefixes mean more addresses) and the distance to each route. These are TCP messages with no authentication, integrity checks, or encryption. Any malicious party can inject advertisements for arbitrary routes. This information will propagate throughout the Internet. A BGP attack can be used for eavesdropping (direct network traffic to a specific network by telling everyone that you’re offering a really short path) or for denial of service (make parts of the network unreachable by redirecting traffic and then dropping it).

It is difficult to change BGP since a lot of independent entities use it worldwide. Two partial solutions to this problem have emerged. The Resource Public Key Infrastructure (RPKI) framework simply has each AS get an X.509 digital certificate from a trusted entity (its Regional Internet Registry). Each AS signs its list of route advertisements with its private key, and any other AS can validate that list of advertisements using the AS’s certificate.

A related solution is BGPsec, which is still a draft standard. Instead of signing an individual AS’s routes, every BGP message between ASes is signed.

Both solutions require every single AS to employ this solution. If some AS is willing to accept untrusted route advertisements but will relay them to other ASes as signed messages then integrity is meaningless.

A high profile BGP attack occurred against YouTube in 2008. Pakistan Telecom received a censorship order from the telecommunications ministry to block YouTube traffic to the country. The company sent spoofed BGP messages claiming to be the best route for the range of IP addresses used by YouTube. It used a longer address prefix than the one advertised by YouTube (longer prefix = fewer addresses). Because the longer prefix was deemed to be more specific, BGP gave it a higher priority. Within minutes, routers worldwide were directing their YouTube requests to Pakistan Telecom, which would simply drop them.

Domain Name System (DNS)

The Domain Name System (DNS) is a hierarchical service that maps Internet domain names to IP addresses. A user’s computer runs the DNS protocol via a program known as a DNS stub resolver. It first checks a local file for specific preconfigured name-to-address mappings. Then it checks its cache of previously-found mappings. Finally, it contacts an external DNS resolver, which is usually located at the ISP or run as a public service, such as Google Public DNS or OpenDNS.

We trust that the name-to-address mapping is legitimate. Web browsers, for instance, rely on this to enforce their same-origin policy. However, DNS queries and responses are sent using UDP with no authentication or integrity checks. The only check is that each DNS query contains a Query ID (QID); a DNS response must have a matching QID so that the client can match it to the query. These responses can be intercepted and modified or simply forged. Malicious responses can return a different IP address that will direct IP traffic to different hosts.

A solution called DNSSEC has been proposed. It is a secure extension to the DNS protocol that provides authenticated requests and responses. However, few sites support it.

DNS cache poisoning

DNS queries first check the local host’s DNS cache to see if the results of a past query have been cached. This yields a huge improvement in performance since a network query can be avoided. If the cached name-to-address mapping is invalid, then the wrong IP address is returned. Modifying this cached mapping is called DNS cache poisoning. One way that DNS cache poisoning is done is via JavaScript on a malicious website.

The browser requests access to a legitimate site, for example, a.bank.com. Because the system does not have the address of a.bank.com cached, it sends a DNS query to an external DNS resolver using the DNS protocol. The query includes a query ID (QID) x1. At the same time that the request for a.bank.com is made, JavaScript launches an attacker thread that sends 256 responses with random QIDs (y1, y2, …). Each of these DNS responses tells the system that the DNS server for bank.com is at the attacker’s IP address. If one of these responses happens to have a matching QID, the host system will accept it as truth that all future queries for anything under bank.com should be directed to the attacker’s name server. If none of the responses work, the script can try again with a different sub-domain, b.bank.com. It might take many minutes, but there is a high likelihood that the attack will eventually succeed.

There are two defenses against this attack, but both require non-standard actions that need to be coded into the system. One is to randomize the source port number of the query. Since the attacker does not get to see the query, it will not know which port to send the bogus responses to; there are 2^16 (65,536) ports to try. The second defense is to force all DNS queries to be issued twice. The attacker will have to guess the 16-bit query ID correctly twice in a row (effectively a 32-bit guess), and the chances of doing that successfully are infinitesimally small.
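The effect of these defenses on the attacker's search space, as rough arithmetic:

    qid_space  = 2**16                            # 16-bit query ID
    port_space = 2**16                            # randomized UDP source port (defense 1)

    single_query = qid_space                      # attacker must match one QID
    with_random_ports = qid_space * port_space    # must match both the QID and the port
    doubled_queries = qid_space ** 2              # defense 2: match the QID twice in a row

    print(f"{single_query:,}  {with_random_ports:,}  {doubled_queries:,}")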

DNS amplification attack

We have seen how source address spoofing can be used to carry out an anonymous denial of service attack. Ideally, to overload a system, the attacker would like to send a small amount of data that would create a large response that would be sent to the target. This is called amplification. An obvious method would be to send a URL request over HTTP that will cause the server to respond with a large page reply. However, this does not work as HTTP uses TCP and the target would not have the TCP session established. DNS happens to be a UDP-based service. DNS amplification uses a collection of compromised systems that will carry out the attack (a botnet). Each system will send a small DNS query using a forged source address: that of the target. These systems can contact their own ISP’s DNS servers since the goal is not to overwhelm any DNS server. The query asks for “ANY”, a request for all known information about the DNS zone. Each such query will cause the DNS server to send a far larger reply, which is delivered to the forged source address, the target of the attack.

Virtual Private Networks (VPNs)

IP networking relies on store-and-forward routing. Network data passes through routers, which are often unknown and untrusted. We also have seen that routes may be altered to pass data through malicious hosts or to malicious hosts that accept packets destined for the legitimate host. Even with TCP connections, data can be modified and sessions can be hijacked. We also saw that there is no source authentication on IP packets: a host can place any address it would like as the source. What we would like is the ability to communicate securely, with the assurance that our traffic cannot be modified and that we are truly communicating with the correct endpoints.

Virtual private networks (VPNs) allow separate local area networks to communicate securely over the public Internet, saving money by using a shared public network (Internet) instead of leased lines. This is achieved by tunneling, the encapsulation of an IP datagram within another datagram. A datagram that is destined for a remote subnet, which will usually have local source and destination IP addresses that may not be routable over the public Internet, will be treated as payload and be placed inside an IP datagram that is routed over the public Internet. The source and destination addresses of this outer datagram are the VPN endpoints at both sides, VPN-aware routers.

When the VPN endpoint (router) receives this encapsulated datagram, it extracts the data, which is a full IP datagram, and routes it on the local area network. This tunneling behavior gives us the virtual network part of the VPN.

To achieve security (the “private” part of VPN), an administrator setting up a VPN will usually be concerned that the data contents are not readable and the data has not been modified. To ensure this, the encapsulated packets can be encrypted and signed. Signing a packet does not hide its data but enables the receiver to validate that the data has not been modified in transit. Encrypting ensures that intruders would not be able to make sense of the data, which is the encapsulated datagram.

IPsec is a popular VPN protocol that is really a set of two protocols. The IPsec Authentication Header (AH) is a VPN protocol that does not encrypt data but simply affixes a signature (HMAC) to each datagram to allow the recipient to validate that the packet has not been modified since it left the originator. It provides authentication and integrity assurance. The Encapsulating Security Payload (ESP) provides the same authentication and integrity assurance but also adds encryption to the payload to ensure confidentiality. Data is encrypted with a symmetric cipher (usually AES) and the Diffie-Hellman algorithm is usually used for key generation. Authentication Header mode is rarely used since the overhead of encrypting data these days is quite low.
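A conceptual sketch of the encapsulation described above. Field names and the protection function are illustrative; a real VPN would use IPsec ESP (or a similar protocol) rather than JSON and a bare HMAC.

    import json, hmac, hashlib

    KEY = b"shared symmetric key, e.g., negotiated via Diffie-Hellman"

    def protect(inner_datagram):
        # Stand-in for ESP processing: sign the inner datagram with an HMAC.
        # A real implementation would also encrypt it (typically with AES).
        body = json.dumps(inner_datagram).encode()
        tag = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        return {"body": body.decode(), "hmac": tag}

    inner = {"src": "10.1.0.5", "dst": "10.2.0.9", "payload": "application data"}

    outer = {
        "src": "203.0.113.1",        # VPN router at site A (public address)
        "dst": "198.51.100.7",       # VPN router at site B (public address)
        "payload": protect(inner),   # the entire inner datagram rides as payload
    }
    print(outer["src"], "->", outer["dst"])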

SSL/TLS

Secure Sockets Layer (SSL) was designed as a layer of software above TCP that provides authentication, integrity, and encrypted communication while preserving the abstraction of a sockets interface to applications. It was designed with the web (HTTP) in mind – to enable secure web sessions. An HTTPS connection is simply the HTTP protocol transmitted over SSL. As SSL evolved, it morphed into a new version called TLS, Transport Layer Security. While SSL is commonly used in conversation, all current implementations are TLS.

TLS uses a hybrid cryptosystem and relies on public keys for authentication. If both the sender and receiver have X.509 digital certificates, TLS can validate them and use nonce-based public key authentication to validate that each party has the corresponding private key.

The steps in a TLS session are:

  1. The client connects to the server and sends information about its version and the ciphers it supports.

  2. The server responds with its certificate (or just a public key if there is no certificate), the protocol version and ciphers it is willing to use, and, possibly, a request for a client certificate.

  3. The client validates the server’s certificate.

  4. The client generates a random session key and sends it to the server. TLS supports the use of public key encryption, Diffie-Hellman key exchange, or pre-shared keys (pre-configured keys).

  5. Optionally, the client responds with its certificate.

  6. If the client responds with its certificate, the server validates the certificate and the client.

  7. The client and server can now exchange data. Each message is first compressed and then encrypted with a symmetric algorithm. An HMAC (hash MAC) for the message is also sent to allow the other side to validate message integrity.

TLS supports multiple algorithms for key exchange, symmetric encryption, and HMAC. It is up to the client and server to negotiate for the ones they will use. The protocol provides the benefits of adding integrity and privacy to the data stream. If you trust the server’s CA, it also validates the authenticity of the server.

TLS is widely used and generally considered secure if strong cryptography is used. The biggest problem was a man-in-the-middle attack where the attacker can send a message to renegotiate the protocol and select one that disables encryption. Another attack was a denial-of-service attack where an attacker initiates a TLS connection but keeps requesting a regeneration of the encryption key, using up the server’s resources in the process.

One notable aspect of TLS is that, in most cases, only the server will present a certificate. Hence, the server will not authenticate or know the identity of the client. Client-side certificates have usually been problematic. Generating keys and obtaining certificates is not an easy process. A user would have to install the certificate and the corresponding private key on every system she uses. This would not be practical for shared systems. Moreover, if a client did have a certificate, any server can request it during TLS connection setup, thus obtaining the identity of the client. This could be desirable for legitimate banking transactions but not for sites where a user would like to remain anonymous. We generally rely on other authentication mechanisms, such as the password authentication protocol, but carry them out over TLS’s secure communication channel.
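For a sense of how little of this an application has to manage itself, here is a minimal client sketch using Python's ssl module; the library performs the handshake, certificate validation, and encryption described above (example.com is a placeholder host):

    import socket, ssl

    context = ssl.create_default_context()     # trusts the system's CA store

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            print(tls_sock.version())           # negotiated protocol, e.g., 'TLSv1.2'
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(4096)[:80])     # beginning of the server's response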

Firewalls

A firewall protects the junction between an untrusted network (e.g., the external Internet) and a trusted network (e.g., the internal network). Two approaches to firewalling are packet filtering and proxies. A packet filter, or screening router, determines not only the route of a packet but whether the packet should be dropped, based on the contents of the IP header, the TCP/UDP header, and the interface on which the packet arrived. It is usually implemented inside a border router, also known as the gateway router, that manages the flow of traffic between the ISP and the internal network. The basic principle of firewalls is to never allow a direct inbound connection from an originating host on the Internet to an internal host; all traffic must flow through a firewall and be inspected.

The packet filter evaluates a set of rules to determine whether to drop or accept a packet. This set of rules forms an access control list, often called a chain. Strong security follows a default deny model, where packets are dropped unless some rule in the chain specifically permits them. With stateless inspection, a packet is examined on its own, with no context from previously-seen packets. Stateful packet inspection (SPI) allows the router to keep track of TCP connections and understand the relationship between packets. For example, it can enable the port needed for the FTP data channel once an FTP control connection has been established, or allow return packets into the network in response to outbound requests.
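A minimal sketch of a default-deny chain. The rules and field names are illustrative; real packet filters such as iptables or pf have far richer matching.

    RULES = [
        # (action, protocol, destination port, arriving interface)
        ("accept", "tcp", 443,  "external"),   # inbound HTTPS to a DMZ web server
        ("accept", "tcp", 25,   "external"),   # inbound SMTP to a DMZ mail server
        ("accept", "any", None, "internal"),   # anything originating inside may leave
    ]

    def filter_packet(protocol, dst_port, interface):
        # First matching rule decides; no match means the packet is dropped.
        for action, r_proto, r_port, r_iface in RULES:
            if (r_proto in ("any", protocol)
                    and (r_port is None or r_port == dst_port)
                    and r_iface == interface):
                return action
        return "drop"                           # default deny

    print(filter_packet("tcp", 443, "external"))   # accept
    print(filter_packet("udp", 53,  "external"))   # drop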

Packet filters traditionally do not look above the transport layer. Deep packet inspection (DPI) allows a firewall to examine application data as well and make decisions based on its contents. Deep packet inspection can validate the protocol of an application as well as check for malicious content such as malformed URLs or other security attacks. DPI is generally considered to be part of Intrusion Prevention Systems.

An application proxy is software that presents the same protocol to the outside network as the application for which it is a proxy. For example, a mail server proxy will listen on port 25 and understand SMTP, the Simple Mail Transfer Protocol. The primary job of the proxy is to validate the application protocol and thus guard against protocol attacks (extra commands, bad arguments) that may exploit bugs in the service. Valid requests are then regenerated by the proxy to the real application that is running on another server and is not accessible from the outside network. The proxy is the only one that can communicate with the internal network. Unlike DPI, a proxy may modify the data stream, such as stripping headers or modifying machine names. It may also restructure the commands in the protocol used to communicate with the actual servers (that is, it does not have to relay everything that it receives).

A typical firewalled environment is a screened subnet architecture, with a separate subnet for systems that run externally-accessible services (such as web servers and mail servers) and another one for internal systems that do not offer services and should not be accessed from the outside. The subnet that contains externally-accessible services is called the DMZ (demilitarized zone). The DMZ contains all the hosts that may be offering services to the external network (usually the Internet). Machines on the internal network are not accessible from the Internet. All machines within an organization will be either in the DMZ or in the internal network.

Both subnets will be protected by screening routers. They will ensure that no packet from the outside network is permitted into the inside network. Logically, we can view our setup as containing two screening routers:

  1. The exterior router allows IP packets only to the machines/ports in the DMZ that are offering valid services. It would also reject any packets that are masqueraded to appear to come from the internal network.

  2. The interior router allows packets to only come from designated machines in the DMZ that need to access services in the internal network. Any packets not targeting the appropriate services in the internal network will be rejected. Both routers will generally allow traffic to flow from the internal network to the Internet, although an organization may block certain services (ports) or force users to use a proxy (for web access, for example).

Note that the two screening routers may be easily replaced with a single router since filtering rules can specify interfaces. Each rule can thus state whether an interface is the DMZ, internal network, or Internet (ISP).

Firewalls generally intercept all packets entering or leaving a local area network. A host-based firewall, on the other hand, runs on a user’s computer. Unlike network-based firewalls, a host-based firewall can associate network traffic with individual applications. Its goal is to prevent malware from accessing the network. Only approved applications will be allowed to send or receive network data. Host-based firewalls are particularly useful in light of deperimeterization: the boundaries of external and internal networks are sometimes fuzzy as people connect their mobile devices to different networks and import data on flash drives. A concern with host-based firewalls is that if malware manages to get elevated privileges, it may be able to shut off the firewall or change its rules.

A variation on screening routers is the use of intrusion detection systems (IDS). A screening router simply makes decisions based on packet headers. Intrusion detection systems try to identify malicious behavior. There are three forms of IDS:

  1. A protocol-based IDS validates specific network protocols for conformance. For example, it can implement a state machine to ensure that messages are sent in the proper sequence, that only valid commands are sent, and that replies match requests.

  2. A signature-based IDS is similar to a PC-based virus checker. It scans the bits of application data in incoming packets to try to discern if there is evidence of “bad data”, which may include malformed URLs, extra-long strings that may trigger buffer overflows, or bit patterns that match known viruses.

  3. An anomaly-based IDS looks for statistical aberrations in network activity. Instead of having predefined patterns, normal behavior is first measured and used as a baseline. An unexpected use of certain protocols, ports, or even amount of data sent to a specific service may trigger a warning.

Anomaly-based detection implies that we know normal behavior and flag any unusual activity as bad. This is difficult since it is hard to characterize what normal behavior is, particularly since normal behavior can change over time and may exhibit random network accesses (e.g., people web surfing to different places). Too many false positives will annoy administrators and lead them to disregard alarms.

A signature-based system employs misuse-based detection. It knows bad behavior: the rules that define invalid packets or invalid application layer data (e.g., ssh root login attempts). Anything else is considered good.

Firewall (screening router)
First-generation packet filter that filters packets between networks. Blocks or accepts traffic based on IP addresses, ports, and protocols.
Stateful inspection firewall
Like a screening router but also takes into account TCP connection state and information from previous connections (e.g., the related data port for an FTP connection).
Application proxy
Gateway between two networks for a specific application. Prevents direct connections to the application from outside the network. Responsible for validating the protocol.
IDS/IPS
Can usually do what a stateful inspection firewall does, plus examine application-layer data for protocol attacks or malicious content.
Host-based firewall
Typically a screening router with per-application awareness. Sometimes includes anti-virus software for application-layer signature checking.
Host-based IPS
Typically allows real-time blocking of remote hosts performing suspicious operations (port scanning, ssh logins).

Snort

Snort is an example of a widely used network intrusion detection system. Snort sniffs all traffic on the network and parses each packet to extract fields that can be matched by rules. A set of rules is then applied to the data extracted from each packet. These include tests of the form performed by stateful screening routers on network- and transport-layer fields as well as searches for patterns in the data payload of the packets. These rules enable one to test for known protocol attacks (e.g., bad URLs, root logins) as well as access to services. Violations can be logged or raised as alerts.
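In the same spirit (though this is not actual Snort rule syntax), a sketch of signature-style matching: header fields are tested first, then the payload is searched for a content pattern.

    rules = [
        {"proto": "tcp", "dst_port": 80, "content": b"/etc/passwd",
         "msg": "possible path traversal in URL"},
        {"proto": "tcp", "dst_port": 22, "content": b"root",
         "msg": "possible ssh root login attempt"},
    ]

    def inspect(proto, dst_port, payload):
        # Return the messages of all rules that this packet matches.
        return [r["msg"] for r in rules
                if r["proto"] == proto and r["dst_port"] == dst_port
                and r["content"] in payload]

    print(inspect("tcp", 80, b"GET /../../etc/passwd HTTP/1.1"))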

Web security

When the web browser was first created, it was relatively simple: it parsed static content for display and presented it to the user. The content could contain links to other pages. As such, the browser was not an interesting security target. Any dynamic modification of pages was done on servers and all security attacks were focused on those servers. These attacks included malformed URLs, buffer overflows, root paths, and unicode attacks.

The situation is vastly different now. Browsers have become insanely complex:

  • Built-in JavaScript to execute arbitrary downloaded code

  • The Document Object Model (DOM), which allows JavaScript code to change the content and appearance of a web page.

  • XMLHttpRequest, which enables JavaScript to make HTTP requests back to the server and fetch content asynchronously.

  • WebSockets, which provide a more direct link between client and server without the need to send HTTP requests.

  • Multimedia support: HTML5 added direct support for <audio>, <video>, and <track> tags, as well as MediaStream recording of both audio and video and even speech recognition and synthesis (with the Chrome browser, for now).

  • Access to on-device sensors, including geolocation and tilt

  • The NaCl framework on Chrome, providing the ability to run native apps in a sandbox within the browser

The model evolved from simple page presentation to that of running an application. All these features provide a broader attack surface. The fact that many features are relatively new and more are being developed increases the likelihood of more bugs and therefore more security holes. Many browser features are complex, and developers won’t always pay attention to every detail of the specs (see quirksmode.org). This leads to an environment where certain less-common uses of a feature may have bugs or security holes on certain browsers.

Multiple sources

Traditional software is installed as a single application. The application may use external libraries, but these are linked in by the author and tested. Web apps, on the other hand, dynamically load components from different places. These include fonts, images, scripts, and video as well as embedded iFrames that embed HTML documents within each other. The JavaScript code may issue XMLHttpRequests to yet additional sites.

One security concern is that of software stability. If you import JavaScript from several different places, will your page still display correctly and work properly in the future as those scripts are updated and web standards change? Do those scripts attempt to do anything malicious? Might they be modified by their author to do something malicious in the future?

Then there’s the question of how elements on a page should be allowed to interact. Can some analytics code access JavaScript variables that come from a script downloaded from jQuery.com on the same web page? The scripts came from different places, but the page author selected both for the page, so maybe it’s ok for them to interact. Can analytics scripts interact with event handlers? If the author wanted to measure mouse movements and keystrokes, perhaps it’s ok for a downloaded script to use the event handler. How about embedded frames? To the user, the content within a frame looks like it is part of the rest of the page. Should scripts work any differently?

Frames and iFrames

A browser window may contain a collection of documents from different sources. Each document is rendered inside a frame. In the most basic case, there is just one frame: the document window. A frame is a rigid division that is part of a frameset, a collection of frames. Frames are not officially supported in HTML5, the latest version of HTML, but many browsers still support them. An iFrame is a floating inline frame that moves with the surrounding content; iFrames are fully supported. When we talk about frames, we will be talking about the frames created with an iFrame tag.

Frames are generally invisible to users and are used to delegate screen area to content from another source. A very basic goal of browser security is to isolate visits to separate pages in distinct windows or tabs. If you visit a.com and b.com in two separate tabs, the address bar will identify each of them and they will not share information. Alternatively, a.com may have frames within it (e.g., to show ads from other sites), so b.com may be a frame within a.com. Here, too, we would like the browser to provide isolation between a.com and b.com even though b.com is not visible as a distinct site to the user.

Same-origin policy

The security model used by web browsers is the same-origin policy. A browser permits scripts in one page to access data in a second page only if both pages have the same origin. An origin is defined to be the URI scheme (http vs. https), the hostname, and the port. For example

http://www.poopybrain.com/419/test.html

and

http://www.poopybrain.com/index.html

have the same origin since they both use http, both use port 80 (the default http port since none is specified), and the same hostname (www.poopybrain.com). If any of those components were different, the origin would not be the same. For instance, www.poopybrain.com is not the same hostname as poopybrain.com.

Under the same-origin policy, each origin has access to common client-side resources that include:

  • Cookies: Key-value data that clients or servers can set. Cookies associated with the origin are sent with each http request.

  • JavaScript namespace: Any functions and variables defined or downloaded into a frame share that frame’s origin.

  • DOM tree: This is the JavaScript definition of the HTML structure of the page.

  • DOM storage: Local key-value storage.

Each frame gets the origin of its URL. Many pages will have just one frame: the browser window. Other pages may embed other frames. Each of those embedded frames will not have the origin of the outer frame but rather that of the URL of the frame’s contents. Any JavaScript code downloaded into a frame will execute with the authority of its frame’s origin. For instance, if cnn.com loads JavaScript from jQuery.com, the script runs with the authority of cnn.com. Passive content, which is non-executable content such as CSS files and images, has no authority. This normally should not matter since passive content does not contain executable code, but there have been attacks in the past that embedded code in passive content and caused that passive content to turn active.

Cross-origin content

As we saw, it is common for a page to load content from multiple origins. The same-origin policy states that JavaScript code from anywhere runs with the authority of the frame’s origin. Content from other origins is generally not readable by JavaScript.

A frame can load images from other origins but cannot inspect those images. However, it can infer the size of an image by examining the changes to surrounding elements after it is rendered.

A frame may embed CSS (cascading stylesheets) from any origin but cannot inspect the CSS content. However, JavaScript in the frame can discover what the stylesheet does by creating new DOM nodes (e.g., a heading tag) and seeing how the styling changes.

A frame can load JavaScript, which executes with the authority of the frame’s origin. If the source is downloaded from another origin, it is executable but not readable. However, one can use JavaScript’s toString method to decompile the function and get a string representation of the function’s declaration.

All these restrictions are somewhat ineffective anyway since a curious user can download any of that content directly (e.g., via the curl command) and inspect it.

Cross-Origin Resource Sharing (CORS)

Even though content may be loaded from different origins, browsers restrict cross-origin HTTP requests that are initiated from scripts (e.g., via XMLHttpRequest or Fetch). This can be problematic at times since sites such as poopybrain.com and www.poopybrain.com are considered distinct origins, as are http://poopybrain.com and https://poopybrain.com.

Cross-Origin Resource Sharing (CORS) was created to allow web servers to specify cross-domain access permissions. This allows scripts on a page to issue HTTP requests to approved sites. It also allows cross-origin access to web fonts, inspectable images, and stylesheets. CORS is enabled by an HTTP header that states allowable origins. For example,

Access-Control-Allow-Origin: http://www.example.com

means that a page whose origin is http://www.example.com is allowed to read the response, as if the resource shared that page’s origin.
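
As an illustration (the hostnames are hypothetical), a script in a page served from http://www.example.com might issue a cross-origin request with the fetch API; the browser performs the request but only lets the script read the response if the other server’s reply carries a matching Access-Control-Allow-Origin header:

// Script running in a page whose origin is http://www.example.com
fetch("http://api.other-site.com/data")
    .then(response => response.json())   // readable only if the response includes
    .then(data => console.log(data));    // Access-Control-Allow-Origin: http://www.example.com (or *)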

Cookies

Cookies are name-value sets that are designed to maintain state between a web browser and a server. Cookies are sent to the server along with HTTP requests and servers may send back cookies with a response. Uses for cookies include storing a session ID that identifies your browsing session to the server (including a reference to your shopping cart or partially-completed form), storing shopping cart contents directly, or tracking which pages you visited on the site in the past (tracking cookies). Cookies are also used to store authentication information so you can be logged into a page automatically upon visiting it (authentication cookies).

Now the question is: which cookies should be sent to a server when a browser makes an HTTP request? Cookies don’t quite use the same concept of an origin. The scope of a cookie is defined by its domain and path. Unlike origins, the scheme (http or https) is ignored by default, as is the port. The path is the path under the root URL, which is ignored for determining origins but used with cookies. Unless otherwise defined by the server, the default domain and path are those of the frame that made the request.

A client cannot set cookies for a different domain. A server, however, can specify top-level or deeper domains. Setting a cookie for a domain example.com will cause that cookie to be sent whenever example.com or any domain under example.com is accessed (e.g., www.example.com). For the cookie to be accepted by the browser, the domain must include the origin domain of the frame. For example, if you are on the page www.example.com, your browser will accept a cookie for example.com but will not accept a cookie for foo.example.com.
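
For example, a response from www.example.com (a made-up site) could scope a cookie to the parent domain and a specific path; the browser would then send it with requests to any host under example.com whose path falls under /shop:

Set-Cookie: cart=abc123; Domain=example.com; Path=/shop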

Cookies often contain user names, complete authentication information, or shopping cart contents. If malicious code running on the web page could access those cookies, it could modify your cart, get your login credentials, or even modify cookies related to cloud-based services so that your documents or email get stored in a different account. This is a very real problem and two safeguards were put in place:

A server can tag a cookie with an HttpOnly flag. This will not allow scripts on the page to access the cookie, so it is useful to keep scripts from modifying or reading user identities or session state.

HTTP messages are sent via TCP. Nothing is encrypted. An attacker that has access to the data stream (e.g., a man in the middle or a packet sniffer) can freely read or even modify cookies. A Secure flag was added to cookies to specify that they can be sent only over an HTTPS connection:

Set-Cookie: username=paul; path=/; HttpOnly; Secure

Cross-Site Request Forgery (XSRF)

Cross-site request forgery is an attack that sends unauthorized requests from a user that the web server trusts. Let’s consider an example. You previously logged into Netflix. Because of that, the Netflix server sent an authentication cookie to your browser; you will not have to log in the next time you visit netflix.com. Now you go to another website that contains a malicious link or JavaScript code that accesses a URL. The URL is:

http://www.netflix.com/JSON/AddToQueue?movieid=860103

By hitting this link on this other website, the attacker added Plan 9 from Outer Space to your movie queue (this attack really worked with Netflix but has since been fixed). This may be a minor annoyance, but the same technique could produce more malicious outcomes. Instead of Netflix, the attack could target an e-commerce site that accepts your stored credentials but allows the attacker to specify a different shipping address in the URL. More dangerously, a banking site may use your stored credentials and account number. Going to the malicious website may enable the attacker to request a funds transfer to another account:

http://www.bank.com/action=transfer&amount=1000000&to_account=417824919

Note that the attack works because of how cookies work. You visited a random website but inadvertently made a request to another site. Your browser dutifully sends an HTTP GET request to that site with the URL specified in the link and also sends all the cookies for that site. The attacker never steals your cookies and does not intercept any traffic. The attack is simply the creation of a URL that makes it look like you requested some action.

There are several defenses against Cross-site request forgery:

  • The server can validate the Referer header on the request. This tells it whether the request came from a page on its own site (or a link on a trusted site) or from a link somewhere else.

  • The server can require some unique token to be present in the request. For instance, visiting netflix.com might cause the Netflix server to return a token that will need to be passed to any successive URL. An attacker will not be able to create a static URL on her site that contains this random token (see the example after this list).

  • The interaction with the server can use HTTP POST requests instead of GET requests, placing all parameters into the body of the request rather than in the URL. State information can be passed via hidden input fields instead of cookies.
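
A sketch of the token defense mentioned above (the field names and token value are made up): the server embeds a random per-session token in every form it serves and rejects any submission that does not echo it back. An attacker forging a request from another site cannot guess this value:

<form action="https://www.bank.com/transfer" method="POST">
  <input type="hidden" name="csrf_token" value="2b7e151628aed2a6abf7158809cf4f3c">
  <input type="text" name="to_account">
  <input type="text" name="amount">
  <input type="submit" value="Transfer">
</form>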

Clickjacking

Clickjacking is an attack where the attacker overlays content to make the user think that they are clicking some legitimate link or image when they are really requesting something else. For example, a site may present a “win a free iPad” image. However, malicious JavaScript can place an invisible frame over this image that contains a link. Nothing is displayed to obstruct the “win a free iPad” image, but when a user clicks on it, the link that is processed is the one in the invisible frame. This malicious link could download malware, change security settings for the Flash plug-in, or redirect the user to a page containing malware or a phishing attack.

The defense for clickjacking is to have JavaScript in the legitimate code check that the content is at the topmost layer:

window.self == window.top

If it isn’t, then the content is obstructed, possibly by an invisible clickjacking attack. Another defense is to have the server send an X-Frame-Options HTTP header to instruct the browser not to allow framing from other domains.
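
A minimal frame-busting sketch built on that check (script-only defenses have been bypassed in practice, which is why the X-Frame-Options header is the preferred approach):

// If this page is not the topmost frame, break out of the surrounding frame.
if (window.self !== window.top) {
    window.top.location = window.self.location;
}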

Screen sharing

HTML5, the latest standard for HTML, added a screen-sharing API. Normally, a frame cannot read content belonging to other origins. The screen-sharing API violates this. If a user grants screen-sharing permission to a frame, the frame can take a screenshot of the entire display (the entire monitor, all windows, and the browser). It can also get screenshots of pages hidden by tabs in a browser.

This is not a security hole and there are no exploits (yet) to enable screen sharing without the user’s explicit opt-in. However, it is a risk because the user might not be aware of the scope or duration of screen sharing. If you believe that you are sharing one browser window, you may be surprised to discover that the server was examining all your screen content.

Input sanitization

In the past we saw how user input that becomes a part of database queries or commands can alter those commands and, in many cases, enable an attacker to add arbitrary queries or commands. The same applies to URLs, HTML source, and JavaScript. Any user input needs to be parsed carefully before it can be made part of a URL, HTML content, or JavaScript. Consider a script that is generated with some in-line data that came from a malicious user:

<script> var x = "untrusted_data"; </script>

The malicious user might define that untrusted_data to be

Hi"; </script> <h1> Hey, some text! </h1> <script> malicious code... x="bye

The resulting script to set the variable x now becomes

<script> var x = "Hi"; </script> <h1> Hey, some text! </h1> <script> malicious code... x="bye"; </script>
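
One common sanitization step is to escape HTML metacharacters before inserting untrusted text into a page. A minimal JavaScript sketch (real applications should rely on a well-tested library or a framework’s built-in escaping, and the correct escaping depends on whether the data lands in HTML content, in an attribute, or inside a script string):

// Replace the characters that could start tags or terminate quoted strings.
function escapeHtml(untrusted) {
    return untrusted
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}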

Cross-site scripting

Cross-site Scripting (XSS) is a code injection attack that allows the attacker to inject client-side scripts into web pages. It can be used to bypass the same-origin policy and other access controls. Cross-site scripting has been one of the most popular browser attacks.

The attack may be carried out in two ways: via a URL that a user clicks on, which returns a page containing the malicious code, or by visiting a page that contains user-contributed content that may include scripts.

In a Reflected XSS attack, all malicious content is in a page request, typically a link that an unsuspecting user will click on. The server will accept the request without sanitizing the user input and present a page in response. This page will include that original content. A common example is a search page that will display the search string before presenting the results (or a “not found” message). Another example is an invalid login request that will return with the name of the user and a “not found” message. Consider a case where the search string or the login name is not just a bunch of characters but text to a script. The server treats it as a string, does the query, cannot find the result, and sends back a page that contains that string, which is now processed as inline JavaScript code.

www.mysite.com/login.asp?user=<script>malicious_code(…) </script>

In a Persistent XSS attack, user input is stored at a site and later presented to other users. Consider online forums or comment sections for news postings and blogs. If you enter inline JavaScript as a comment, it will be placed into the page that the server constructs for any future people who view the article. The victim will not even have to click a link to run the malicious payload.

Cross-site scripting is a problem of input sanitization. Servers will need to parse input that is expected to be a string to ensure that it does not contain embedded HTML or JavaScript. The problem is more difficult with HTML because of its support for encoded characters. A parser will need to check not only for “script” but also for “%3cscript%3e”. As we saw earlier, there may be several acceptable Unicode encodings for the same character.

Cross-site scripting, by executing arbitrary JavaScript code, can:

  • Access cookies related to that website
  • Hijack a session
  • Create arbitrary HTTP requests with arbitrary content via XMLHttpRequest
  • Make arbitrary modifications to the HTML document by modifying the DOM
  • Install keyloggers
  • Download malware – or run JavaScript ransomware
  • Try phishing by manipulating the DOM and adding a fake login page

The main defense against cross-site scripting is to sanitize all input. Some web frameworks do this automatically. For instance, Django templates allow the author to specify where generated content is inserted (e.g., <b> hello, {{name}} </b>) and perform the necessary sanitization to ensure it does not modify the HTML or add JavaScript.

Other defenses are:

  • Use a less-expressive markup language, such as Markdown, for user input if you want to give users the ability to enter rich text. However, input sanitization is still needed to ensure there are no HTML or JavaScript escapes.

  • Employ a form of privilege separation by placing untrusted content inside a frame with a different origin. For example, user comments may be placed in a separate domain. This does not stop XSS damage but limits it to the domain.

  • Use the Content Security Policy (CSP). The content security policy was designed to defend against XSS and clickjacking attacks. It allows website owners to tell clients what content is allowed, whether inline code is permitted, and whether the origin should be redefined to be unique. An example policy header appears below.
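
An example policy header (the CDN hostname is made up): this policy tells the browser to load content only from the page’s own origin, to additionally allow scripts from one approved host, and to refuse inline scripts:

Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com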

SQL injection

We previously saw that SQL injection is an issue in any software that uses user input as part of an SQL query. The same applies to browsers. Many web services have databases behind them and links often contain queries mixed with user input. If input is not properly sanitized, it can alter the SQL query to modify the database, bypass user authentication, or return the wrong data.

GIFAR attack

The GIFAR attack is a way to embed malicious code into an image file. Sites that allow user-uploadable pictures are vulnerable. GIFAR is a pseudo-concatenation of GIF and JAR.

Java applets are sent as JAR files. A Java JAR file is essentially a zip file, a popular format for compressing and archiving multiple files. In JAR files, the header that contains information about the content is stored at the end of the file.

GIF files are lossless image files. The header in GIF files, as with most other file formats, is stored at the beginning of the file.

GIF and JAR files can be combined together to create a GIFAR file. Because the GIF header is at the beginning of the file, the browser believes it is an image and opens it as such, trusts its content, unaware that it contains code. Meanwhile the Java virtual machine (JVM) recognizes the JAR part of the file, which is run as an applet in the victim’s browser.

An attacker can use cross-site scripting to inject a request to invoke the applet (<applet archive="myimage.gif">), which will cause it to run in the context of the origin (the server that hosted it). Because the code is run as a Java applet rather than JavaScript, it bypasses the “no authority” restriction the browser imposes on image content.

HTML image tag vulnerability

We saw that the same-origin policy treats images as static content with no authority. It would seem that images should not cause problems (ignoring the now-patched GIFAR vulnerability that allowed images to inject Java applets). However, an image tag (IMG) can pass parameters to the server, just like any other URL:

<img src="http://evil.com/images/balloons.jpg?extra_information" height="300" width="400"/>

This can be used to notify the server that the image was requested from specific content. The web server will also know the IP address that sent the request. The image itself can be practically hidden by setting its size to a single pixel:

<img src="..." height="1" width="1" />

This is sometimes done to track messages sent to users. If I send you HTML-formatted mail that contains a one-pixel image, you will not notice the image but my server will be notified that the image was downloaded. If the IMG tag contains some text identifying that it is related to the mail message I sent you, I will now know that you read the message.

Images can also be used for social engineering: to disguise a site by appropriating logos from well-known brands or adding certification logos.

Mixed HTTP and HTTPS content

A web page that was served via HTTPS might contain a reference to a URL, such as a script, that specifies HTTP content:

<script src="http://www.mysite.com/script.js"> </script>

The browser would follow the scheme in the URL and download that content via HTTP rather than over the secure link. An active network attacker can hijack that session and modify the content. A safer approach is to not specify the scheme for scripts, which would cause them to be served over the same protocol as their embedding frame.

<script src="//www.mysite.com/script.js"> </script>

Some browsers give warning of mixed content but the risks and knowledge of what really is going on might not be clear to users.

Extended Validation Certificates

TLS establishes a secure communication link between a client and server. For the authentication to be meaningful, the user must be convinced that the server’s X.509 certificate truly belongs to the entity that is identified in the certificate. Would you trust a bankofamerica.com certificate issued by the Rubber Ducky Cert Shack? Even legitimate issuers such as Symantec offer varying levels of validation of a certificate owner’s identity.

The lowest level of identity assurance for organizations is a domain validated certificate. To issue one, the certificate authority validates only that some contact at that domain approves the request. This is usually done through email. It does not prove that the requester has legal authority to act on behalf of the company, nor is there any validation of the company. These certificates require the consent of the domain owner but do not try to validate who that owner is. They offer only incrementally more identity binding than self-signed certificates.

With extended validation (EV) certificates, the certificate authority uses a more rigorous, human-driven validation process. The legal and physical presence of the organization is validated. Then, the organization is contacted through a verified phone number and both the contact and the contact’s supervisor must confirm the authenticity of the request.

An extended validation certificate contains the usual data in a certificate (public key, issuer, organization, …) but must also contain a government-registered serial number and a physical address of the organization.

[Figure: browser indicators for a domain validated certificate versus an extended validation certificate]

An attacker could get a low-level certificate and set up a web site. Targets would go to it, see the lock icon on their browser’s address bar that indicates an SSL connection, and feel secure. This led users to a false sense of security: the connection is encrypted but there is no reason to believe that there is validity to the organization on the other side.

Modern browsers identify and validate EV certificates. Once validated, the browser presents an enhanced security indicator that identifies the certificate owner.

Browser status bar

Most browsers offer an option to display a status bar that shows the URL of a link before you click it. This bar is trivial to spoof by adding an onclick attribute to the link that invokes JavaScript to take the page to a different link. In this example, hovering over the PayPal link will show a link to http://www.paypal.com/signin, which appears to be a legitimate PayPal login page. Clicking on that link, however, will take the user to http://www.evil.com.

<a href="http://www.paypal.com/signin"
    onclick="this.href = 'http://www.evil.com/';">PayPal</a>

Mobile device security

What makes mobile devices unique?

In many ways, mobile devices should not be different from laptops or other computer systems. They run operating systems that are derived from those of desktop systems, run multiple apps, and connect to the network. There are differences, however, that make them attractive targets.

Users

Several user factors make phones different from most computing devices:

  • Mobile users often do not think of their phones as real computers. They may not have the same level of paranoia about malware getting in or their activities being monitored.

  • Social engineering may work more easily on phones. People are often in distracting environments when using their phones and may not pay enough attention to realize they are experiencing a phishing attack.

  • Phones are small. Users may be less likely to notice some security indicators, such as an EV certificate indicator. It is also easier to lose the phone … or have it stolen.

  • A lot of phones are protected with bad PINs. Four-digit PINs still dominate and, as with passwords, people tend to pick bad ones – or at least common ones. In fact, five PINs (1234, 1111, 0000, 1212, 7777) account for over 20% of PINs chosen by users.

  • While phones have safeguards for the resources that apps can access, users may grant app permission requests without thinking: they will just click through during installation to get the app up and running.

Interfaces

Phones have many sensors built into them: GSM, Wi-Fi, Bluetooth, and NFC radios as well as a GPS, microphone, camera, 6-axis gyroscope and accelerometer, and even a barometer. These sensors enable attackers to monitor the world around you: identify where you are and whether you are moving. They can record conversations and even capture video. The sensors are so sensitive that it has been demonstrated that a phone on a desk next to a keyboard can pick up vibrations from a user typing on the neighboring keyboard. This led to a word recovery rate of 80%.

Apps

There are a lot of mobile apps. Currently, there are 2.8 million Android apps and 2.2 million iOS apps. Most of these apps are written by untrusted parties. We would be wary of downloading many of these on our PCs but think nothing of doing so on our phones. We place our trust in several areas:

  • The testing & approval process by Google (automated) and Apple (automated + manual)
  • The ability of the operating system to sandbox an application
  • The operating system’s requirement of users granting permissions to access certain resources.

This trust may be misplaced. The approval process is far from foolproof. Overtly misadvertised or malicious apps can be detected but it is impossible to discern what a program will do in the future. Sandboxes have been broken in the past and users may be too happy to grant permissions to apps. Moreover, apps often ask for more permissions than they use.

Most apps do not get security updates. There is little economic incentive for a developer to support existing apps, particularly if newer ones have been deployed.

Platform

Mobile phones are comparable to desktop systems in complexity. In some cases, they may even be more complex. This points to the fact that, like all large systems, they will have bugs and some of these bugs will be security sensitive. For instance, in late March, 2017, Apple released an upgrade for iOS, stating that they fixed over 80 security flaws. This is almost 10 years after the release of the iPhone. You can be certain there are many more flaws lurking in the system and more will be added as new features are introduced.

Because of bugs in the system, malicious apps may be able to get root privileges. If they do, they can install rootkits, enabling long-term control while concealing their presence.

Unlike desktop systems and laptops, phones enforce a single-user environment. Although PCs are usually used as single-user systems, they support multiple user accounts and run a general-purpose timesharing operating system. Mobile devices are more carefully tuned to the single-user environment.

Threats

Mobile devices are subject to threats to personal privacy as well as traditional security violations. Personal privacy threats include identifying users and user location, accessing the camera and microphone, and leaking personal data from the phone over the network. Additional threats include traditional phishing attacks, malware installation, malicious Android intents (messages to other apps or services), and overly-broad access to system resources and sensors.

Android security

App code in Android runs under the Dalvik virtual machine, which is a variant of the Java virtual machine (JVM). Originally, the intention was that apps would be written only in Java, but it soon became clear that support for native (C and C++) code was needed and Google introduced the Native Development Kit to support this.

Android is based on Linux, which is multi-user. Under Linux, each user has a distinct user ID and all apps run by that user run with the privileges of the user (ignoring set UID apps). Android supports only a single user and uses user IDs for separating app privileges. Under Android, each app normally runs under a different user ID.

Related apps may share the same Linux user ID if a sharedUserId attribute is set to the same value for two or more applications, as long as those apps are also signed by the same certificate. This allows the related apps to share files, and they can even be configured to share the same Dalvik virtual machine.
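
As a sketch (the package name and value below are made up), the attribute is declared in the manifest of each cooperating app; every app in the group must use the same value and be signed with the same certificate:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app1"
    android:sharedUserId="com.example.shareduid">
    <!-- application element omitted -->
</manifest>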

Android relies on process sandboxing for most of its security. Each app runs in its own Dalvik virtual machine. Each virtual machine is isolated in its own Linux process running under a unique user ID. Unlike Java, Android does not rely on the Dalvik virtual machine to enforce security. Instead, it relies on Linux’s user ID based protections and permissions to access certain APIs.

The operating system and Dalvik virtual machine provide memory isolation. Linux provides address space layout randomization (ASLR), the compiler provides stack canaries, and the memory management libraries provide some heap overflow protections (checks of backward & forward pointers).

A permission model is used to determine what APIs, and hence resources, apps are allowed to access. The list of what an app wants is included in the app’s package. The user grants access to each request and the system builds up a whitelist of allowable permissions. All questions are asked during app installation.
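
The requested permissions are listed in the app’s manifest; a hypothetical app asking for camera and network access would declare:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />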

Apps communicate using the app framework. A mechanism called intents enables apps to send and receive requests. An intent is a message containing an action, the data to act on, and the component to handle the intent.

Android supports whole disk encryption so that if a device is stolen, an attacker will not be able to recover file contents even with raw access to the flash file system.

Unlike iOS, Android supports the concurrent execution of multiple apps. It is up to the developer to be frugal with battery life. Apps store their state in persistent memory so they can be stopped and restarted at any time. This ability to stop an app also helps with DoS attacks, since a stopped app is not accepting requests or using system resources.

Security concerns

An app can probe whether another app has specific permissions by specifying a permission with an intent method call to that app. This can help an attacker identify a target app. Receivers need to be able to handle malicious intents, even for actions they do not expect to handle and for data that might not make sense for the action.

Apps may also exploit permission re-delegation. An app that does not have a certain permission may be able to gain those privileges by communicating through another app. If a public component does not explicitly have an access permission listed in its manifest definition, Android permits any app to access it. For example, the Power Control Widget (a default Android widget) allows third-party apps to change protected system settings without requesting permissions to control those settings. This is done by presenting the user with a pop-up interface to control power-related settings. A malicious app can send a fake intent to the Power Control Widget while simulating the press of the widget button to switch power-related settings. It is effectively simulating a user’s actions on the screen.

By using external storage, apps can exercise permissions avoidance. By default, all apps have access to external storage. Many apps store data in external storage without specifying any protection, allowing other apps to access that data.

Another way permissions avoidance is used is that Android intents allow opening some system apps without requiring permissions. These apps include the camera, SMS, contact list, and browser. Opening a browser via an intent can be dangerous since it enables data transmission, receiving remote commands, and downloading files without user intervention.

iOS security

Apple’s iOS provides runtime protection via OS-level sandboxing. System resources and the kernel are shielded from user apps. The app sandbox restricts the ability of one app to access another app’s data and resources.

Each app has its own sandbox directory. The OS enforces the sandbox and limits access to files within that directory, as well as restricting access to preferences, the network, and other resources.

Inter-app communication can take place only through iOS APIs. Code generation by an app is prevented because data memory pages cannot be made executable and executable memory pages are not writable by user processes.

iOS requires mandatory code signing. The app package must be signed using an Apple Developer certificate. This does not by itself make the app secure, but it identifies the registered developer and ensures that the app has not been modified after it was signed.

Data protection

File contents are encrypted with a unique per-file key. This per-file key is encrypted with a class key and stored with the file’s metadata (the part of the file system that describes attributes of the file, such as size, modification time, and access permissions). The class key is generated from a hardware key in the device and the user’s passcode. Unless the passcode is entered, the class key cannot be created and the file key cannot be decrypted.

The file system’s metadata is also encrypted. A file system key is used for this, which is derived directly from the hardware key.

A hardware AES engine encrypts and decrypts the file as it is written/read on flash memory.

App data can also be protected using libraries that access built-in hardware encryption capabilities.

Masque attacks

iOS has been hit several times with masque attacks. While there have been various forms of these, the basic attack is to get users to install a malicious app that has been created with the same bundle identifier as some existing legitimate app. This malicious app replaces the legitimate app and masquerades as it. Since Apple will not host an app with a duplicate bundle identifier, the installation of these apps has to bypass the App Store. Enterprise provisioning is used to get users to install them, which typically requires the user to go to a URL that redirects to an XML manifest file hosted on a server. The ability to launch this attack is somewhat limited since the user will generally need to have an enterprise certificate installed to make the installation seamless.

Web apps

Both iOS and Android have full web browsers that can be used to access web applications. They also permit web apps to appear as a regular app icon. The risks here are the same as those for web browsers in general: loading untrusted content and leaking cookies and URLs to foreign apps.

Mobile-focused web-based attacks can take advantage of the sensors on phones. The HTML5 Geolocation API allows JavaScript to find your location. A Use Current Location permission dialog appears, so the attacker has to hope the user will approve, but the attacker can provide incentives via a Trojan horse approach: provide a service that may legitimately need your location.

Recently, a proof of concept web attack showed how JavaScript could access the phone’s accelerometers to detect movements of the phone as a user enters a PIN. The team that implemented this achieved a 100% success rate of recognizing a four-digit PIN within five attempts of a user entering it. Apple patched this specific vulnerability but there may be more undiscovered ones.

Hardware support for security

[Figure: ARM TrustZone secure and non-secure worlds]

All Android and iOS phones use ARM processors. ARM provides a dedicated security module, called TrustZone, that coexists with the normal processor. The hardware is separated into two “worlds”: the secure (trusted) and non-secure (non-trusted) worlds. Any software resides in one of these two worlds and the processor executes in only one world at a time. Each of these worlds has its own operating system and applications. Logically, you can think of the two worlds as two distinct processors, each running its own operating system with its own data and its own memory. The only communication between them is through a messaging API.

The phone’s operating system and all applications reside in the non-secure world. Secure components, such as certain keys, signature services, and encryption services live in the secure world. Even the operating system kernel does not have access to any of the code or data in the secure world. Hence, even if an app manages a privilege escalation attack and gains root access, it will be unable to access certain data.

Applications for the secure world include key management, secure boot, digital rights management, secure payment processing, and biometric authentication.

Apple Secure Enclave

Apple uses modified ARM processors. In 2013, they announced Secure Enclave for their processors. The details are confidential but it seems to be a customized version of ARM’s TrustZone.

Secure Enclave is a coprocessor that runs its own operating system (a modified L4 microkernel). The processor has its own secure boot process and a custom software update mechanism. It uses encrypted memory so that anything outside the Secure Enclave cannot access its data.

It provides:

  • All cryptographic operations for data protection & key management
  • Random number generation
  • Secure key store, including Touch ID (fingerprint) data
  • Data storage for payment processing

The Secure Enclave maintains the confidentiality and integrity of data even if the iOS kernel has been compromised.

Content protection

Digital content, whether it is software, music, photos, video, or documents, being a string of bits, is simple to copy and distribute. While this is great for content consumers, it is not always a good situation for content producers. How do producers have a chance of selling their work if their content can be freely copied and distributed on a large scale? In his well-known 1976 Open Letter to Hobbyists, Micro-Soft General Partner Bill Gates asserted that almost 90% of the copies of his company’s BASIC interpreter in use were stolen.

The initial challenge was making software distribution more difficult, followed by other content.

One approach was to associate software with a specific computer. The software will check that association and refuse to function if copied onto another system. To do this, one needs to identify how one computer is different from another. Several characteristics have been used:

  1. A CPU serial number. Early microprocessors did not have one but all have them now.

  2. If you don’t have a unique serial number, you can create a unique ID based on the system’s configuration (amount of memory, disk size, CPU type, MAC address, any serial numbers on attached hardware).

  3. Add uniqueness by adding special hardware to the system, such as a USB dongle that contains a unique key or, in some cases, runs some aspect of the software.

Another solution was to install software in a way that cannot be copied. Some PC installers would install software but then mark some of the disk blocks as bad. Attempts to copy the software would fail because the copy program would refuse to read bad blocks. This approach was no longer viable with modern operating systems because the OS would not allow that low level of access and control of disk blocks.

Code was added to programs to check that they were running on the approved device. Later, as network access became widespread, some software would contact a license server, identify itself and its computer, and wait for an acknowledgement. This is still the solution used for subscription-based software, such as Office 365 and Adobe Creative Cloud. At times, software vendors would add timebombs that would force the software to stop working after some time if it was determined to have been installed illegally. Timebombs were deemed illegal in certain jurisdictions.

A problem with any copy protection approach was that an attacker could step through the software with a debugger and remove these copy protection checks. This was easier when executables were smaller (one of the first spreadsheets, VisiCalc, was under 27 KB; today’s Microsoft Excel executable is over 26 MB and the entire software package is over 1.7 GB).

Perhaps the ultimate solution to avoiding issues of copying software is to host it in the cloud. The company provides you with the computing platform as well as the software. When your subscription expires, you can no longer use the platform.

DRM

The issue of software protection extended onto other media as computers got more storage and networks became widespread. People started sharing music on a large scale with services such as Napster. The fear of widespread copying and distribution led music and video industries to demand software on computer systems that protects their content. The goal was digital rights management: software that enforces rules on the playback, copy, and distribution of digital content. Ideally, the rules would be attached to the content (e.g., number of copies, owner of the content, devices that can play it) and software on any device that could play that form of content would validate – and abide by – those rules.

Digital Video Broadcasting (DVB)

Digital Video Broadcasting (DVB) is a set of standards that define digital broadcasting via cable, satellite, and terrestrial infrastructures. The system relies on trusted hardware: dedicated, tamper-proof hardware with built-in software and keys. The encrypted data stream containing the video is decrypted using the keys that are usually stored in smart cards that contain subscriber information. Symmetric cryptography is used throughout.

It would be impractical to send individual video streams encrypted for each subscriber. Instead, a video stream is encrypted with a random key, called a control word. This key changes several times a minute, so a single movie is encrypted with hundreds of keys. The broadcaster (the head end) must also send this stream of keys to every subscriber. These keys are themselves encrypted with a shared key that is stored in each subscriber’s smart card. This final shared key is delivered in the Entitlement Management Message (EMM) and is updated less frequently (every few days or weeks). Updates of this key are encrypted and broadcast for every subscriber individually (which is why it might take a while to send them to everyone).

CableCARD

CableCARD is a standard whose goal was to avoid the need for custom cable TV and satellite decoder boxes: a customer would be able to use non-proprietary hardware, such as a TV set or DVR, and all the secrets would be stored in a credit-card-sized smart card called the CableCARD. The mechanisms are, at a high level, similar to those used by DVB.

The tamper-proof card stores all keys and contains decryption hardware. It also contains a unique subscriber’s key. None of the keys will ever leave the device.

The card identifies and authorizes the subscriber and all received content, which is encrypted, is sent into the card. The card decrypts it and provides an MPEG-2 video stream to the host, which contains the receiver, tuner, and MPEG decoder.

Each subscriber periodically receives an Entitlement Management Message (EMM), which is sent to the CableCARD and decrypted by the card. It identifies which content the subscriber is entitled to watch.

Encrypted keys for the content are sent continuously and are decrypted using the keys in the EMM.

DVD and Blu-Ray

Copy protection was an issue with movies distributed on physical media. DVD and Blu-Ray discs require trusted players that are pre-programmed with a key. Software decoders must obfuscate their code to make it difficult to disassemble and reverse-engineer.

The mechanisms for DVD and Blu-Ray are different: DVD uses CSS (Content Scramble System) while Blu-Ray uses AACS (Advanced Access Content System). Conceptually, though, they are similar.

A movie on the disc is encrypted with a random media key. The disc then contains lots of encryptions of this random key: one for each device family. For DVDs, each key covered a manufacturer. Blu-Ray contains many more keys, often identifying individual devices. The idea was that if one key leaked, future discs would no longer encrypt the media key with that particular device key. In practice, this was not done.

DVDs used a weak stream cipher to encrypt the media that could be broken in 2^25 tries. The cipher was broken and keys were extracted as well. Blu-Ray had the sense to use AES to encrypt its content. However, many AACS keys have been recovered since 2007 using debuggers. This allowed recovery of the media key (which is unique for each movie). Thus far, over 20,000 media keys have been decrypted, so Blu-Ray security is also effectively pointless.

Legal protections

The final, and possibly most effective, approach for protecting content is a legal one. The Digital Millennium Copyright Act (DMCA) criminalizes the production and dissemination of technology, devices, or services intended to circumvent measures (DRM) that control access to copyrighted works. It also criminalizes the act of circumventing an access control, whether or not there is actual infringement of copyright itself.

This makes it illegal for a legitimate company to sell Blu-Ray or cable TV decoders. Without DMCA, anyone would be able to build a set-top box to decode video signals. Even if the source encryption is difficult to crack, one could crack HDCP (High-bandwidth Digital Content Protection), the encryption on an HDMI cable between a TV and a player, and extract content that way. Finally, there is always the analog hole: you can use an audio recorder to re-record music or a camcorder pointed at a TV set to re-record a movie.

Steganography and Watermarking

Cryptography’s goal is to hide the contents of a message. Steganography’s goal is to hide the very existence of the message. Classic techniques included the use of invisible ink, writing a message on one’s head and allowing the hair to cover it, microdots, and carefully-clipped newspaper articles that together communicate the message.

A null cipher is one where the actual message is hidden among irrelevant data. For example, the message may comprise the first letter of each word (or each sentence, or every second letter, etc.). Chaffing and winnowing entails the transmission of a bunch of messages, of which only certain ones are legitimate. Each message is signed with a key known only to trusted parties (e.g., a MAC). Intruders can see the messages but can’t validate the signatures to distinguish the valid messages from the bogus ones.

Messages can be embedded into images. There are a couple of ways of hiding a message in an image:

  1. A straightforward method to hide a message in an image is to use the low-order bits of the image, where the user is unlikely to notice slight changes in color. An image is a collection of RGB pixels. You can modify the least-significant bits of those values and nobody will notice any change in the image, so you can encode the entire message by spreading its bits among the least-significant bits of the image (a short sketch follows this list).

  2. You can do a similar thing but apply a frequency-domain transformation, as JPEG compression does, by using a Discrete Cosine Transform (DCT). The frequency domain maps the image as a collection ranging from high-frequency areas (e.g., “noisy” parts such as leaves, grass, and edges of things) through low-frequency areas (e.g., a clear blue sky). Changes to high-frequency areas will mostly go unnoticed by humans: that’s why JPEG compression works. It also means that you can add your message into those areas and then transform the image back to the spatial domain. Now your message is spread throughout the higher-frequency parts of the image and can be extracted if you do the DCT again and know where to look for the message.
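
A minimal JavaScript sketch of the least-significant-bit technique from item 1 (it treats the image as a flat array of 8-bit channel values, such as the RGB bytes of decoded pixels, and ignores file-format details):

// Embed each bit of message (a Uint8Array) into the least-significant bit
// of successive bytes of pixels (a Uint8Array of color channel values).
function embedLSB(pixels, message) {
    if (message.length * 8 > pixels.length) {
        throw new Error("message too long for this image");
    }
    for (let i = 0; i < message.length * 8; i++) {
        const bit = (message[i >> 3] >> (7 - (i % 8))) & 1;  // next message bit, MSB first
        pixels[i] = (pixels[i] & 0xFE) | bit;                // overwrite the byte's low bit
    }
    return pixels;
}

Extraction reverses the process: read the low-order bit of each byte and reassemble the message.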

Many laser printers embed a serial number and date simply by printing very faint color splotches.

Steganography is closely related to watermarking, and the terms “steganography” and “watermarking” are often used interchangeably.

The primary goal of watermarking is to create an indelible imprint on a message such that an intruder cannot remove or replace the message. It is often used to assert ownership, authenticity, or encode DRM rules. The message may be, but does not have to be, invisible.

The goal of steganography is to allow primarily one-to-one communication while hiding the existence of a message. An intruder – someone who does not know what to look for – cannot even detect the message in the data.