Introduction to Computer Security

Thinking about Security

Paul Krzyzanowski

January 27, 2017

Computer Security

Computer security addresses three areas: confidentiality, integrity, and availability.

Confidentiality

Confidentiality refers to keeping data hidden. It can refer to an operating system preventing you from reading the contents of a file, or to data that is encrypted so that you can access it but cannot make sense of it. In some cases, confidentiality can also deal with hiding the very existence of data, computers, or transmitters. Simply knowing that two parties are communicating may be important to an adversary. RFC 4949, the Internet Security Glossary, defines data confidentiality as:

“The property that information is not made available or disclosed to unauthorized individuals, entities, or processes [i.e., to any unauthorized system entity].”
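
To make this concrete, the following is a minimal sketch of providing confidentiality with symmetric encryption in Python. It is purely illustrative: it assumes the third-party cryptography package is installed, and the key and message are placeholders. Anyone who intercepts the ciphertext learns nothing useful without the key.

    # Minimal confidentiality sketch using symmetric (secret-key) encryption.
    # Assumes: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # shared secret; must be kept confidential
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"meet at 6 pm")    # what an eavesdropper would see
    print(ciphertext)                               # unintelligible without the key
    print(cipher.decrypt(ciphertext))               # b'meet at 6 pm'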

Confidentiality is often confused with privacy. Privacy limits what information can be shared with others and controls how others can use information about you, while confidentiality provides the ability to conceal messages or to exchange them without anybody else being able to see them. Privacy may also enable one to send messages anonymously.

RFC 4949, the Internet Security Glossary, citing U.S. HIPAA (the Health Insurance Portability and Accountability Act of 1996), defines privacy as:

“The right of an entity (normally a person), acting in its own behalf, to determine the degree to which it will interact with its environment, including the degree to which the entity is willing to share its personal information with others.”

The need for privacy is a reason for implementing confidentiality.

Integrity

Integrity is ensuring that data and all system resources are trustworthy. This means that they are not maliciously or accidentally modified. There are three categories of integrity:

  1. Data integrity is the property that data has not been modified or destroyed in an unauthorized or accidental manner. Someone cannot overwrite or delete your files.

  2. Origin integrity is the property that a message was created by its claimed author and has not been modified since then. It includes authentication: a person or program proving their identity (a small integrity-check sketch follows this list).

  3. System integrity is the property that the entire system is working as designed, without any deliberate or accidental modification of data or manipulation of processing that data.
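
As a small illustration of data and origin integrity, the sketch below uses a message authentication code (HMAC) from the Python standard library: a recipient who shares the key can detect whether the message was altered or did not come from the keyholder. The key and message are illustrative placeholders.

    # Integrity-check sketch: detect modification or forgery with an HMAC.
    import hashlib
    import hmac
    import secrets

    key = secrets.token_bytes(32)                           # shared only with the author
    message = b"transfer $100 to account 4242"
    tag = hmac.new(key, message, hashlib.sha256).digest()   # sent along with the message

    def verify(key: bytes, message: bytes, tag: bytes) -> bool:
        # Recompute the tag; a mismatch means the data was altered or forged.
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    print(verify(key, message, tag))                            # True
    print(verify(key, b"transfer $100 to account 6666", tag))   # False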

In many cases, integrity is of greater value than confidentiality.

Availability

Availability deals with systems and data being available for use. We want a system to operate correctly on the right data (integrity) but we also want that system to be accessible and capable of performing to its designed specifications. For example, a denial of service (DoS) attack is an attack on availability. It does not access any data or modify the function of any processes. However, it either dramatically slows down or completely disallows access to the system, hurting availability.

We want all three of these properties in a secure system. For example, we can get confidentiality and integrity simply by turning off a computer but then we lose availability. Integrity on its own is useful, but does not provide the confidentiality that is needed to ensure privacy. Confidentiality without integrity is generally useless since you may access data that was modified without your knowledge or use a program that is manipulating the data in a manner that you did not intend.

Thinking about security

Algorithms, cryptography, and math on their own do not provide security. Security is not about simply adding encryption to a program, enforcing the use of complex passwords, or placing your systems behind a firewall. Security is a systems issue and is based on all the components of the system: the hardware, firmware, operating systems, application software, networking components, and the people. Security needs are based on people, their relationships with each other, and their interactions with machines. Hence, security also includes processes, procedures and policies. Security also has to address detection of intruders and the ability to perform forensics: that is, figure out what damage was done and how.

“Security is a chain: it’s only as secure as the weakest link” — Bruce Schneier

Security is difficult. If it were not, we would not see near-daily occurrences of successful attacks on systems, including high-value systems such as banks[1], governments[2], hospitals[3], large retailers[4], and high-profile websites[5].

A few recent high-profile examples where computer security was an issue include:

  • 2016 U.S. Elections, which included infiltrating Democratic National Committee servers, private email hacking, and alleged voting machine hacking.

  • Iranian nuclear power plants, which were attacked in 2010 by Stuxnet, a computer worm that targeted Windows systems running Siemens software and compromised connected programmable logic controllers (PLCs) to destroy centrifuges.

  • Yahoo, which in 2016 announced that over a billion accounts were compromised in 2013 and 2014, revealing names, telephone numbers, dates of birth, hashed passwords, and unencrypted security questions that could be used to reset a password.

  • TJX, the parent company of TJ Maxx, had data for more than 45 million credit and debit cards stolen (disclosed in 2007), with total costs estimated at as much as $1 billion.

  • Sony Pictures was hacked in 2014 and personal information about employees, their families, salaries, email, and unreleased films was disclosed.

  • 744,408 BTC ($350 million at the time) was reported stolen in 2014 from one of the first and largest Bitcoin exchanges, Japan’s Mt. Gox. In 2016, more than $60M worth of bitcoin (119,756 BTC) was stolen from Bitfinex.

  • The October 2016 distributed denial of service (DDoS) attack on DNS provider Dyn was the largest of its type in history and made a vast number of sites unreachable.

  • In 2016, MedSec, a vulnerability research company focused on medical technology, claimed it found serious vulnerabilities in implantable pacemakers and defibrillators.

At a software level, security is difficult because so much of the software we use is incredibly complex. Microsoft Windows 10 has been estimated to comprise approximately 50 million lines of code. A full Linux Fedora distribution comprises around 200 million lines of code, and all Google services have been counted as taking up around two billion lines of code. It is not feasible to audit all this code and there is no doubt that there are many bugs lurking in it, many of which may have an impact on security.

But security is about systems, not a single program. Systems themselves are complex, with many components ranging from firmware on various pieces of hardware to servers, load balancers, networks, clients, and other components. Systems often interact with cloud services, and programs often make use of third-party libraries (you didn’t write your own compiler or JSON parser). There are complex interaction models that make it essentially impossible to test every possible permutation of inputs to a system. Moreover, all the components are often not under the control of one administrator. A corporate administrator may have little or no control over the software employees put on their phones or laptops or over the security in place at various cloud services that might be employed by the organization (e.g., Slack, Dropbox, Office365). Security must permeate all of the system’s components: hardware, software, networking, and people.

People themselves are a huge, and often dominant, problem in building a secure system. They can be careless, unpredictable, overly trusting, and malicious. Most security problems are not rooted in algorithms but in the underlying system and the people who use it. The human factor, and social engineering in particular, is the biggest problem and the top threat to systems. Social engineering is a set of techniques aimed at deceiving humans into divulging needed information. It often relies on pretexting, where a person or a program pretends to be someone else to obtain the needed data.

Security System Goals

We saw that computer security addresses three areas of concern. The design of security systems likewise has three goals.

  1. Prevention. Prevention aims at preventing attackers from violating your security policy. Implementing this requires creating mechanisms that users cannot override. A simple example of prevention is having software accept and validate a password. Without the correct password, an intruder cannot proceed.

  2. Detection. Detection attempts to detect and report security attacks. It is particularly important as a safeguard when prevention fails. Detection allows us to find where the weaknesses were in the mechanism that was supposed to enforce prevention. Detection is also useful for spotting active attacks even if the prevention mechanisms are working properly. It allows us to know that an attack is being attempted, where it is originating from, and what it is trying to do (a small detection sketch follows this list).

  3. Recovery. Recovery has the goals of stopping any active attack and repairing any damage that was done. A simple but common example of recovery is restoring a system from a backup. Recovery includes forensics, which is the gathering of evidence to understand exactly what happened and what was damaged.
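
As a small illustration of the detection goal, the sketch below flags repeated failed logins from a single source. The log format is a hypothetical placeholder; a real system would parse its own audit logs.

    # Detection sketch: flag sources with repeated failed login attempts.
    from collections import Counter

    THRESHOLD = 3
    log_lines = [
        "2017-01-27T10:00:01 FAIL user=alice src=203.0.113.9",
        "2017-01-27T10:00:02 FAIL user=root src=203.0.113.9",
        "2017-01-27T10:00:04 FAIL user=admin src=203.0.113.9",
        "2017-01-27T10:00:05 OK user=bob src=198.51.100.7",
    ]

    failures = Counter()
    for line in log_lines:
        if " FAIL " in line:
            src = line.rsplit("src=", 1)[1]      # source address of the attempt
            failures[src] += 1

    for src, count in failures.items():
        if count >= THRESHOLD:
            print(f"possible password-guessing attack from {src} ({count} failures)")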

Policies & Mechanisms

Policies and mechanisms are at the core of designing secure systems. A policy specifies what is or is not allowed. For example, only people in the human resources department have access to certain files, or only people in the IT group can reboot a system. Policies can be expressed in natural language, such as a policy document. Policies can be defined more precisely in mathematical notation, but that is rarely useful for most humans or software. Policies are often described in a policy language for specific components of the system. Such a language provides a high degree of precision while remaining readable by humans. The Web Services Security Policy Language (WS-SecurityPolicy) is an example of a security policy language that defines constraints and requirements for SOAP-based web services.

Mechanisms are the components that implement and enforce policies. For example, a policy might dictate that users have names and passwords. A mechanism will implement asking for a password and authenticating it.
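
As a hedged sketch of the distinction, the example below expresses a policy as data (“a password of at least 8 characters is required”) and implements the mechanism that enforces it with a salted, slow hash. The names, thresholds, and storage format are illustrative assumptions, not a real policy language.

    # Policy vs. mechanism sketch: the policy states what is required;
    # enroll() and authenticate() are the mechanism that enforces it.
    import hashlib
    import hmac
    import os

    POLICY = {"password_required": True, "min_password_length": 8}

    def hash_password(password: str, salt: bytes) -> bytes:
        # Store a salted, slow hash rather than the password itself.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    def enroll(password: str):
        if POLICY["password_required"] and len(password) < POLICY["min_password_length"]:
            raise ValueError("password does not satisfy the policy")
        salt = os.urandom(16)
        return salt, hash_password(password, salt)

    def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt), stored)

    salt, stored = enroll("correct horse battery staple")
    print(authenticate("correct horse battery staple", salt, stored))   # True
    print(authenticate("password123", salt, stored))                    # False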

Security Engineering

We are interested in examining computer security from an engineering point of view. At the core, we have to address security architecture: how do we design a secure system and identify potential weaknesses in that system? Security engineering is the task of implementing the necessary mechanisms and defining policies across all the components of the system.

An important aspect of engineering is understanding risks and making compromises. For example, a structural engineer does not set out to build the ultimate earthquake-proof and storm-proof building when designing a skyscraper in New York City but instead follows the wind load recommendations set forth in the New York City Building Code. Similarly, there is no such thing as an unbreakable or fireproof vault or safe. Safes are rated by how much fire or attack they can sustain. For instance, a class 150 safe can sustain an internal temperature of less than 150°F (66°C) and 85% humidity for a specific amount of time (e.g., 1 hour). A class TL–30 combination safe will resist abuse from mechanical and electrical tools for 30 minutes. Watches are another example. No watch is truly waterproof; instead, they are rated for water resistance at a specific depth (pressure). A watch such as the Rolex Deepsea is waterproof for all practical purposes, but even it is rated not to infinite depth but to 12,800 feet (3,900 meters).

Engineering tradeoffs relate to economic needs. Do you need to spend $10,150 on the Rolex Deepsea, or is the Sea-Dweller, which is rated for only 1,200 meters but costs $1,400 less, good enough? Do you buy a JS-CF25 safe for $4,245, which is rated TL–30, or spend $500 less and get the identical-looking JS-CE25 that is rated TL–15? The same applies to security. No system is 100% secure against all attackers for all time. If someone is determined enough and smart enough, they will get in. The engineering challenge is to understand the tradeoffs and balance security against cost, performance, acceptability, and usability. It may be cheaper to recover from certain attacks than to prevent them.

We want to secure our systems … but what do we secure them against or from whom? There is a wide range of possible attackers that you may want to guard against. For example, you may want to secure yourself against:

  • Yourself accidentally deleting important system files.

  • Your colleagues, so they cannot look at your files on a file server.

  • An adversary trying to find out about you and get personal data.

  • A phone carrier tracking your movements.

  • An enemy who plans to throw a grenade at your computer.

  • The NSA?

Protecting yourself from accidentally destroying critical system files is a far easier task than defending your system from the NSA if the agency is determined to look for something there. Assessing a threat is called risk analysis. We want to determine what parts of the system need to be protected, to what degree, and how much effort (and expense) we should expend in protecting them.

As part of risk analysis, we may need to consider laws and customs and assess whether any types of security measures are illegal. That can restrict how we design our system. For example, certain forms of cryptography were illegal to export outside the U.S., and some restrictions still exist. We also need to consider user acceptability, or customs. Will people put up with the security measures, try to bypass them, or revolt altogether? For instance, we may decide to authenticate a user by performing a retina scan (which requires looking into the eyepiece of a scanner) along with a DNA test (which requires swabbing the mouth and waiting 90 minutes when using a solid-state DNA-testing chip). While these mechanisms are proven techniques for authentication, few people would be willing to put up with the inconvenience. On the systems side, one would also need to consider the expense of the special equipment needed. We need to balance security with effort, convenience, and cost.

Attacks and threats

When our security systems are compromised, it is because of a vulnerability. A vulnerability is a weakness in the security system. It could be a poorly defined policy, a bribed individual, or a flaw in the underlying mechanism that enforces security. An attack is a means of exploiting a vulnerability. For example, trying common passwords to log into a system is an attack. A threat is the potential harm from an attack on the system.
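
To illustrate the password-guessing attack mentioned above, the toy sketch below tries a short list of common passwords against a stolen, unsalted password hash. The hash, word list, and weak password are all hypothetical; the vulnerability being exploited is the poorly chosen password.

    # Toy dictionary-attack sketch against an unsalted SHA-256 password hash.
    import hashlib

    stolen_hash = hashlib.sha256(b"letmein").hexdigest()    # pretend this hash leaked

    common_passwords = ["123456", "password", "letmein", "qwerty", "dragon"]

    for guess in common_passwords:
        if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash:
            print(f"password recovered: {guess}")
            break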

Threats fall into four broad categories [6]:

  • Disclosure: Unauthorized access to data, which covers exposure, interception, interference, and intrusion. This includes stealing data, improperly making data available to others, or snooping on the flow of data.

  • Deception: Accepting false data as true. This includes masquerading, which is posing as an authorized entity; substitution or insertion, which is the injection of false data or the modification of existing data; and repudiation, where someone falsely denies receiving or originating data.

  • Disruption: Some change that interrupts or prevents the correct operation of the system. This can include maliciously changing the logic of a program, a human error that disables a system, an electrical outage, or a failure in the system due to a bug. It can also refer to any obstruction that hinders the functioning of the system.

  • Usurpation: Unauthorized control of some part of a system. This includes theft of service or theft of data as well as any misuse of the system such as tampering or actions that result in the violation of system privileges.

Many attacks are combinations of these threat categories. For example:

Snooping is the unauthorized interception of information.
It is a form of disclosure and is countered with confidentiality mechanisms.
Modification or alteration is the making of unauthorized changes to information.
This is a form of deception, disruption, or usurpation and is countered with integrity mechanisms.
Masquerading or spoofing is the impersonation of one entity by another.
It is a form of deception and usurpation and is countered with integrity mechanisms.
Repudiation of origin is the false denial that an entity sent or created something.
It is a form of deception, and may be a form of usurpation, and is countered with integrity mechanisms.
Denial of receipt is the false denial that an entity received data or a message.
It is a form of deception and is countered with integrity and availability mechanisms.
Delay is the temporary inhibition of a service.
It is a form of disruption and may be a form of usurpation. It is countered with availability mechanisms.
Denial of service is the long-term inhibition of a service.
It is a form of disruption and may be a form of usurpation. It is countered with availability mechanisms.

Computer vs. real world risks

In the physical world, for most people and companies, security risks are usually low. Most people are not attacked and most companies are not victims of espionage. Computer threats mimic real-world threats. Systems are subject to theft, vandalism, extortion, fraud, coercion, and con games. The motivations are the same but the mechanisms are different.

More significantly, attacking in the computer world is often much easier, less risky for the attacker, and hence more common. The risk of being attacked is therefore higher.

  • Privacy rules can be the same in the computer world but accessing data is easier. For example, collecting data on recent real-estate sales can be done automatically. This can help in social engineering. Access to data and storage is cheap and easy. Networking is cheap. Robocalls are cheap. It is easy to collect, search, sort, and use data. By mining and correlating marketing data, we can identify potential targets (“customers”) better. For example, one can buy data from credit databases such as Experian or Equifax.

  • Attacks from a distance are possible. You do not need to be physically present, so there is less physical danger, which allows cowards to attack. You can also be in a different state or even a different country. Networking and communications enable knowledge sharing. Only the first attacker needs to be skilled; others can use the same tools. Prosecution is difficult. Usually there is a high degree of anonymity.

  • It is easier to cast a wide net via scripting. You can try thousands or millions of potential targets and see if any of them have security weaknesses that you can exploit. Automation enables attacks on a large scale. It makes attacks that have tiny rates of return or small chances of success profitable. This includes email scams, transferring fractional cents from bank accounts, and attempts to exploit known weaknesses. (A small sketch of such automated scanning appears after this list.)
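
As a rough sketch of how cheaply automation casts that wide net, the loop below probes a list of hosts for a listening web service. The hostnames are placeholders and the example is illustrative only.

    # Automation sketch: probe many hosts for a listening service.
    import socket

    targets = ["host1.example.org", "host2.example.org", "host3.example.org"]

    for host in targets:
        try:
            with socket.create_connection((host, 80), timeout=2):
                print(f"{host}: port 80 open, worth a closer look")
        except OSError:
            print(f"{host}: no answer")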

Criminal attacks

We have looked at the consequences of threats. Let us consider some of the motivations of attackers. These mirror real-world threats.

Fraud
Attacks that involve deception or a breach of confidence, usually for monetary gain, although fraud may also be used to discredit an opponent.
Scams
A scam is a type of fraud that usually involves money and a business transaction. The victim often pays for something and gets little or nothing in return. Pyramid schemes and fake auctions are examples of scams.
Destruction
Destruction is often instigated as revenge by [ex-] employees, disgruntled customers, or political adversaries. It can include disruption rather than destruction, such as Distributed Denial of Service (DDoS) attacks to make services unreachable or outright destruction of files, computers, or websites.
Intellectual property theft
This includes theft of an owner’s intellectual property, which can include software, product designs, unlicensed use of patents, marketing plans, business plans, etc. This includes not just theft through hacking but also the distribution of copies and unauthorized use of software, music, movies, photos, and books. A goal in security is to keep private data private. However, we sometimes want to make data publicly accessible while keeping control of how it is redistributed.
Identity theft
If someone can impersonate you, they can access whatever you are allowed to access: withdraw money, sell your car, log into your work account, etc.
Brand theft
Companies spend a lot of money building up their brand and reputation. Someone can steal the brand outright (e.g., sell an Apple phone that is not made by Apple, or a counterfeit Rolex watch) or make minor modifications to product design, packaging, or presentation [7]. The goal may be to confuse customers (e.g., the fake Apple store in Kunming) or to boost the perception of the adversary’s reputation. For example, a web site can advertise that it is “Norton secured” when it has not obtained a Symantec SSL certificate, or advertise itself as a Microsoft Partner (or even a Gold Certified Partner) when it is not.

Privacy Violations

Some attacks result in privacy violations. They may not be destructive or even detected by the entity that is attacked but can provide valuable data for future scams or deceptions.

Surveillance
Surveillance attacks include various forms of snooping on users. They can include the use of directional microphones, lasers to detect glass vibration, hidden cameras and microphones, keyloggers, GPS trackers, wide-area video surveillance, and toll collection systems.
Databases
Mining data is easy if it is computer-accessible. The data is useful for social engineering attacks and for determining where users have a presence. Free and paid databases include credit databases, health databases, political donor lists, and real estate transfer, death, birth, and marriage records. Additional corporate databases contain Amazon shopping history, Netflix viewing, and Facebook posts. The company Cambridge Analytica built a vast database of psychological profiles of 230 million adult Americans, in part via Facebook quizzes.
Traffic analysis
You may not see the message or be able to decrypt its contents, but you can see communication patterns: who is talking to whom. This allows you to identify social circles. Governments have used this to identify groups of subversives. If you were tagged as a subversive, the people with whom you communicate would, at the least, be viewed with suspicion. In Secrets and Lies[8], Bruce Schneier points out that “in the hours preceding the U.S. bombing of Iraq in 1991, pizza deliveries to the Pentagon increased 100-fold.” (A small sketch of building such a communication tally appears after this list.)
Large-scale surveillance
ECHELON is a massive surveillance program led by the NSA and operated by intelligence agencies in the U.S., UK, Canada, Australia, and New Zealand. It intercepts phone calls, email, Internet downloads, and satellite transmissions. The wide scope of communication mechanisms it monitors gives it an incredible ability to identify social networks. However, the challenge with massive data gathering operations is information overload. How do you identify critical events from noise? For instance, how do you distinguish a real threat to kill the president or detonate a bomb from someone’s wishful thinking?
Publicity attacks
Publicity attacks were popular in the early days of the web. Hackers took pride in showing off their exploits: “Look, I ‘hacked’ into the CIA[9]!” These attacks still exist but have lost much of their luster.
Service attacks
Denial of Service (DoS) attacks take systems out of service, often by saturating them with requests to render them effectively inoperative. Since it can be difficult for one machine on one network to do this, this is often done on a large scale by many widely distributed computers, leading to something known as a Distributed Denial of Service (DDoS) attack. Denial of service can also be the process of getting the target to disable the defenses they have set up. For example, if your car alarm goes off every night for apparently no good reason, you are likely to disable it, giving the attacker unfettered access.
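
As a small illustration of the traffic-analysis idea described above, the sketch below builds a who-talks-to-whom tally purely from message metadata, without ever looking at message contents. The records are fabricated placeholders.

    # Traffic-analysis sketch: (sender, receiver) metadata alone reveals
    # communication patterns and social circles.
    from collections import Counter

    metadata = [
        ("alice", "bob"), ("alice", "bob"), ("bob", "carol"),
        ("alice", "dave"), ("alice", "bob"), ("carol", "dave"),
    ]

    link_counts = Counter(frozenset(pair) for pair in metadata)

    for link, count in link_counts.most_common():
        a, b = sorted(link)
        print(f"{a} <-> {b}: {count} messages")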

Adversaries: Know Thine Enemy

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”
— Sun Tzu, The Art of War

Characteristics

We have seen that motives in computer attacks are generally similar to those in the physical world. Adversaries in the digital world also mimic those of the physical world. People are people, after all. Various factors determine what kind of person may attack a specific system. We need to know our attackers and their skill level to determine what kind of defense we need to mount.

Goals

There can be a wide variety of goals that cause an attacker to target a system. These mirror the attack types we just examined. The goal of an attack may be to inflict damage, achieve financial gain, or simply gather information on someone for future attacks. It is important to understand goals to know what countermeasures will be effective. Inflicting damage on a system (e.g., throwing a grenade at it) is a totally different goal from trying to steal someone’s credit card number.

Levels of access

Insiders have access and tend to be trusted. They also have an intimate knowledge of the systems and software and tend to have a good level of expertise in navigating the systems, although generally not in attacking those systems.

Risk tolerance

Terrorists are willing to die for their cause. Career criminals will risk jail time. Publicity seekers, on the other hand, do not want to risk getting jailed.

Resources

Some adversaries operate on a tiny budget while others are extremely well-funded. With funding, one can buy computing resources, social resources (e.g., bribe someone), and expertise. Time is also a resource and some adversaries might have all the time in the world to attack a system while others either have a deadline or a time limit where it no longer makes economic sense to attack.

Expertise

Some adversaries are highly skilled while others are just poking around or using somebody else’s software to see if they can attack a system.

Economics

All of these factors distill into economics: a rational adversary will balance time, money, skills, risk, and the likelihood of success to decide whether it is worthwhile to attack a target.

Who are the adversaries?

Hackers

Hackers experiment with limitations of systems and may be good or evil. Their ultimate goal is to get to know a system better than the designers. Classic examples include phone hackers whose goal was to get free phone calls and lock pickers[10], who work on developing skills in picking locks even if they never plan to steal anything locked by one of those locks.

Only a small percentage of hackers are truly smart and innovative. Most just follow instructions or use programs created by those smart hackers. This broader set of hackers is often referred to as “script kiddies”. The trading of hacking tools and techniques gives a large community a powerful arsenal for hacking. Hackers often organize into groups, with distinct cultures among the various groups. Typical hackers have a lot of time but not much money. Personalities vary, of course: some hackers are extremely risk averse while others are not. Regardless of intent, the activity is generally criminal. Hackers, through their discovery and sharing of techniques, often enable other adversaries to get their job done.

To defend from hackers, one must look at the system from the outside as an attacker, not from the inside as a designer.

Lone criminals

Individuals or small groups with criminal intent form the largest group of adversaries. They are the ones who set up ATM skimmers and cameras to clone ATM cards … or run any number of other schemes. They often do not reap huge sums of money but can be creative.

Malicious insiders

Insiders are among the most insidious of adversaries because they are indistinguishable from honest, trusted employees. Perimeter defenses don’t work on them: they have clearance to get into the physical premises and can authenticate themselves to servers. They often have a high level of access and are trusted by the very systems that they are attacking. For example, an insider may program a payroll system to give herself a raise, turn off an alarm at a specific time, create back-door access to bypass authentication, implement a key generator for software installations, or delete log files.

Insiders are difficult to identify and stop since they have been cleared for access and, effectively, belong there. Most security defenses are designed to deal with external attackers, not internal ones. Their goals are varied. They might be seeking revenge, exposing corruption, or trying to get money or services.

Industrial spies

Industrial espionage involves getting confidential information about a company. This can include finding out about new product designs, trade secrets, bids made on a project, or corporate finances. Industrial espionage may include hiring and bribing employees to reveal trade secrets, eavesdropping, or dumpster diving.

If the adversary is a competing company or a country, it may be extremely well-funded. For example, a country may want to ensure that its aerospace company wins a bid on a large aircraft contract and will try to find information about the competition. While well-funded, industrial spies are often risk-averse since the attacker’s company’s (or country’s) reputation can be damaged if it is caught spying.

Press

Reporters have a lot of incentive to be the first to get the scoop on a news item: it can drive up circulation, boost their credibility, and boost their salary. To do this may involve identifying several targets (government agencies, celebrities, companies) and spying on them in a variety of ways: social engineering, bribing, dumpster diving[11], tracking movements, eavesdropping, or breaking in. As with industrial spies, the press is generally risk averse as well for fear of losing one’s reputation.

Organized crime

The digital landscape has offered organized crime more opportunities to make money. One can steal cell phone IDs, credit card numbers, ATM information, or bank account credentials to get cash. Money laundering is a lot easier with electronic funds transfers (EFTs) and cryptocurrencies such as Bitcoin.

Organized crime can spend good money, if the rewards are worth it, and can purchase expertise and access. They also have a higher risk tolerance than most individuals.

Police

Police are risk averse but have the law on their side. For example, they can get search warrants and take evidence. They also are not above breaking the law via illegal wiretaps, destruction of evidence, disabling body cameras, or illegal search and seizure.

Terrorists (freedom fighters[12])

Terrorists, and terrorist-like organizations, are motivated by geopolitics, religion, or their set of ethics. Example organizations include Earth First, Hezbollah, ISIS, Aryan Nations, Greenpeace, and PETA. Terrorists are usually more concerned with causing harm than with getting specific information. Although highly publicized, there are very few terrorists in the world and, statistically, terrorist activity is scant. Terrorists usually (not always) have relatively low budgets and low skill levels.

National intelligence organizations

This includes groups such as the Canadian Security Intelligence Service, the Federal Security Service of the Russian Federation (FSB), the UK Security Service (MI5), and Mossad. Within the U.S., there are 17 intelligence agencies, including the Central Intelligence Agency (CIA), the National Security Agency (NSA), the National Reconnaissance Office (NRO), the Defense Intelligence Agency (DIA), and the Federal Bureau of Investigation (FBI).

These groups have huge budgets and long-term goals: maintaining a watch on activities essentially forever. They are somewhat risk averse for fear of bad public relations. More importantly, they do not want any leaks or government actions to reveal their attack techniques. For example, as soon as the Soviet Union deduced that the U.S. press was reporting information that could only have been obtained by eavesdropping on cell phone calls, all government cell phone communication became encrypted. Even though the Allies were able to break German cryptography during World War II, using too much information from the decrypted communications would have told the Germans that their messages were no longer secret. Intelligence agencies may also work with companies to ensure that the nation’s companies can underbid competitors or have every competitive technology advantage. Due to their wealth and influence, they often have the ability to influence standards. In the U.S., for example, the NSA was instrumental in the adoption of 56-bit keys for DES and of Dual_EC_DRBG (the Dual Elliptic Curve Deterministic Random Bit Generator). Lenovo computers, made by a company partially owned by the Chinese government’s Academy of Sciences, have reportedly been banned by U.S., British, Canadian, Australian, and New Zealand intelligence agencies because of “malicious circuits” built into the computers. Edward Snowden revealed that the NSA planted backdoors into Cisco routers built for export that allow the agency to intercept communications passing through those routers.

Infowarriors - cyber warfare

In time of war, nothing is safe, and it has long been speculated that warring nations will launch electronic attacks together with conventional military ones. These attacks can aim to disrupt a nation’s power grid or wreak havoc on its transportation systems. They can also aim to do large-scale damage to electronic equipment in general with EMP (electromagnetic pulse) weapons. Like national intelligence organizations, these attackers will have access to vast amounts of money and resources. Unlike intelligence agencies, their goals are short-term ones. There is no risk of public relations embarrassment and no need for continued surveillance.

The recent (2016) U.S. presidential election added a new twist to this arena. Disruptions need not be strikes that wipe out targets such as power grids but can instead be a steady stream of misinformation, amplification of information, and possibly blackmail that can steer public opinion. Tampering with electronic voting machines can further manipulate the political process.

In the past, there was a distinction between military and civilian systems: military systems used custom hardware, operating systems, application software, and even custom networks and protocols. That distinction has largely evaporated. Commercial technology advances too quickly, and an increasing number of military and government systems use essentially the same hardware and software as civilian companies. In 2012, for example, U.S. soldiers received Android tablets and phones running a modified version of Android. In 2016, U.S. Army Special Operations Command reported that it is replacing its Android tactical smartphone with an iPhone 6s. The report stated that the iPhone is “faster; smoother; Android freezes up and has to be restarted too often”. Bugs with security ramifications have become de rigueur for the military. Vulnerabilities present in commercial systems now apply to military and government systems. Moreover, governments tend to have slower refresh cycles, so there is a higher likelihood of older systems with more documented vulnerabilities being present. These systems include routers, Wi-Fi access points, and embedded systems, all of which are likely not to be treated as “computers” and patched or upgraded on a regular basis.

References