First, a Dive into Cryptographic Keys Theory
In cryptography, the encryption and/or decryption of sensitive and classified information is achieved through the combined use of cryptographic algorithms and keys. Keys are characterized by their key size or key length, which is the number of bits in the key used in the cryptographic algorithm. NIST SP 800-57 Part 1, rev. 4 defines a cryptographic key as “A parameter used in conjunction with a cryptographic algorithm that determines its operation in such a way that an entity with knowledge of the key can reproduce, reverse or verify the operation, while an entity without knowledge of the key cannot.”
Key length defines an algorithm's security, since it is “associated with the amount of work (that is, the number of operations) that is required to break a cryptographic algorithm or system.” Ideally, key length would coincide with the lower bound of an algorithm's security, and most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e., Triple DES has 112 bits of security).
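“Security bits” translate directly into brute-force work. A back-of-the-envelope sketch makes the scale concrete (the guessing rate of 10^12 keys per second is an assumption for illustration, not a benchmark of real hardware):

```python
# Back-of-the-envelope brute-force cost: how long would an exhaustive
# key search take at a given guessing rate? (The rate below is an
# assumption for illustration, not a measured figure.)

def brute_force_years(security_bits: int, guesses_per_second: float = 1e12) -> float:
    """Expected years to search half of a 2**security_bits keyspace."""
    expected_guesses = 2 ** (security_bits - 1)   # on average, half the space
    seconds = expected_guesses / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# 56-bit DES falls within hours at this rate; 112 bits is far out of reach.
print(f"DES (56-bit):         {brute_force_years(56):.4f} years")
print(f"Triple DES (112-bit): {brute_force_years(112):.3e} years")
```

Each additional security bit doubles the expected work, which is why the gap between 56 and 112 bits is astronomical rather than merely double.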
Therefore, a key should be large enough so that a brute-force attack is not feasible, meaning it would take too long to execute and the result would be of no use. Shannon's work on information theory showed that to achieve so called perfect secrecy, the key length must be at least as large as the message and only used once (One-time pad algorithm). Because of the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must not be feasible for an attacker.
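Shannon's requirement is easy to see in code. A minimal one-time pad sketch, where the key is exactly as long as the message and must never be reused:

```python
import secrets

# One-time pad sketch: the key is as long as the message and used once.
def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "OTP key must match the message length"
    return bytes(m ^ k for m, k in zip(message, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh random key, never reused
ciphertext = otp_encrypt(message, key)

# XOR is its own inverse, so the same function decrypts.
assert otp_encrypt(ciphertext, key) == message
```

Note that reusing `key` for a second message would let an eavesdropper XOR the two ciphertexts together and cancel the key out entirely, which is exactly why the pad must be used only once.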
NIST calls “the time span during which a specific key is authorized for use by legitimate entities” a cryptoperiod. According to NIST SP 800-57 part 1 rev. 4, “A suitably defined cryptoperiod limits the amount of exposure if a single key is compromised, limits the time available for attempts to penetrate physical, procedural, and logical access mechanisms that protect a key from unauthorized disclosure, limits the period within which information may be compromised by inadvertent disclosure of keying material to unauthorized entities, and limits the time available for computationally intensive cryptanalytic attacks.” The length of a cryptoperiod is defined by various factors, such as the operating environment, the classification and volume of protected data, the personnel rotation, etc.
Based on the above criteria, NIST recommends that the maximum cryptoperiod of private keys associated with certificates should be between one and three years and should be shorter than the cryptoperiod of the corresponding public key. Scott Helme says that “you should rotate your private keys at least every year” and that doing otherwise “is bad hygiene and the longer a given cryptographic key is in use the more likely it is to face compromise.”
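In practice, enforcing a cryptoperiod comes down to tracking when a key was activated and forcing rotation once its allowed lifetime is exceeded. A minimal sketch, assuming the one-year maximum suggested above (the function and constant names here are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical rotation check, assuming a one-year maximum cryptoperiod.
MAX_CRYPTOPERIOD = timedelta(days=365)

def key_expired(activation_date: datetime, now: datetime) -> bool:
    """True once the key has been in use longer than its cryptoperiod."""
    return now - activation_date > MAX_CRYPTOPERIOD

activated = datetime(2017, 1, 1)
print(key_expired(activated, datetime(2017, 6, 1)))   # False: within cryptoperiod
print(key_expired(activated, datetime(2018, 6, 1)))   # True: overdue for rotation
```

A real key-management system would also track the factors NIST lists (environment, data classification, personnel rotation) and shorten the cryptoperiod accordingly.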
The Key Reuse Problem
Despite recommendations and the inherent security risks, many vendors are motivated to reuse cryptographic keys, because key reuse can reduce:
- storage requirements for certificates and keys,
- the costs of key certification,
- the certificate verification time, and
- the footprint of cryptographic code and development effort.
EMV Protocol for Credit Card Transactions
One example of a successful key reuse is debit/credit cards with chip and PIN. Credit/debit cards use the EMV standard, which specifies the interoperation of the cards with Point-of-Sale (POS) terminals and Automated Teller Machines (ATMs). An EMV card contains a chip which allows it to perform cryptographic computations, and it contains a symmetric key which it shares with the Issuing Bank. Most cards are also equipped with RSA keys to compute signatures for card authentication and transaction authorization, and to encrypt the PIN between the terminal and the card.
An EMV transaction progresses over three stages:
- Card Authentication: Assures the terminal which bank issued the card, and that the card data have not been altered
- Cardholder Verification: Assures the terminal that the PIN entered by the customer matches the one for this card
- Transaction Authorization: Assures the terminal that the bank which issued the card authorizes the transaction

A successful transaction ends with the card producing a Transaction Certificate (TC), which is a MAC computed over the transaction details.
Given the limited on-card processing environment, reducing the storage and computation consumed by the cryptographic functions in EMV is very important. The EMV standard allows the same RSA key-pair to be used for both PIN encryption and card authentication signature generation.
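The reuse pattern itself is easy to illustrate: one private key serving both decryption and signature generation. The sketch below uses textbook RSA with tiny primes, which is utterly insecure and purely illustrative; real EMV cards use padded RSA on dedicated hardware, and the values below are made up:

```python
# Toy illustration of the EMV reuse pattern: one RSA key pair serving
# both PIN decryption and signature generation. Textbook RSA with tiny
# primes -- insecure, for illustration only.

p, q = 61, 53
n, e = p * q, 17                     # public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

pin_block = 1234                     # stand-in for a PIN block (made up)
challenge = 42                       # stand-in for data the card must sign

# The same private exponent d is used for two distinct operations:
decrypted = pow(pow(pin_block, e, n), d, n)   # decrypt with d
signature = pow(challenge, d, n)              # sign with d

assert decrypted == pin_block
assert pow(signature, e, n) == challenge      # terminal verifies with e
```

It is exactly this "one key, two operations" shortcut that the wedge attacks discussed next take advantage of.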
Although no formal analysis exists to determine whether this key reuse is detrimental to the security of EMV, scientific research has demonstrated that it is possible to launch successful man-in-the-middle (MitM) attacks on the EMV protocol by taking advantage of the RSA key reuse (wedge attacks).
In a typical wedge attack, the wedge manipulates the communication between the card and the terminal so that the terminal believes PIN verification was successful. Such attacks have been known for some time, but received a lot of publicity because of a paper by Murdoch et al., known as “The Cambridge Attack”.
Attacks on IPsec Key Establishment
Dennis Felsch and other security researchers from the Universities of Bochum, Germany, and Opole, Poland, demonstrated in 2018 that key reuse across protocols as implemented in certain network equipment carries high security risks.
IPsec enables cryptographic protection of IP packets. It is commonly used to build VPNs (Virtual Private Networks). For key establishment, the IKE (Internet Key Exchange) protocol is used. IKE exists in two versions, each with different modes, different phases, several authentication methods, and configuration options. In their paper, the researchers demonstrated that reusing a key pair across different versions and modes of IKE can lead to cross-protocol authentication bypasses, enabling the impersonation of a victim host or network by attackers.
The researchers exploited a Bleichenbacher oracle in an IKEv1 mode where RSA-encrypted nonces are used for authentication. Using this exploit, they broke these RSA encryption-based modes in the equipment of four large network manufacturers: Cisco, Huawei, Clavister, and ZyXEL. In addition, using the same oracle, they broke RSA signature-based authentication in both IKEv1 and IKEv2.
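At the heart of such an attack is a padding oracle: the victim reveals, directly or via error messages or timing, whether a decrypted RSA block is PKCS#1 v1.5 conformant, and repeated adaptive queries against that single bit of information let an attacker recover plaintexts. A minimal sketch of the conformance check being probed (the byte layout follows PKCS#1 v1.5 encryption padding; the sample blocks are made up):

```python
# Minimal sketch of the PKCS#1 v1.5 conformance check behind a
# Bleichenbacher oracle. A host that leaks whether this check passed
# becomes an oracle for decrypting RSA ciphertexts.

def pkcs1v15_conformant(block: bytes) -> bool:
    """True iff the decrypted block starts 0x00 0x02 with room for padding."""
    return len(block) >= 11 and block[0] == 0x00 and block[1] == 0x02

# Made-up example blocks: padding type byte 0x02 vs. 0x01.
good = b"\x00\x02" + b"\xff" * 8 + b"\x00" + b"nonce"
bad  = b"\x00\x01" + b"\xff" * 8 + b"\x00" + b"nonce"

assert pkcs1v15_conformant(good)
assert not pkcs1v15_conformant(bad)
```

The defense is to never leak the result of this check, or better, to avoid PKCS#1 v1.5 encryption entirely.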
Key Reuse in Embedded Products
A widespread problem is that vendors of embedded products, such as Internet routers, gateways, and modems, often leave hardcoded SSH keys and HTTPS server certificates in their devices to enable web access to the devices and for use by other protocols such as EAP/802.1X or FTPS. These keys are essentially “baked into” the firmware image (operating system), mostly to provide HTTPS and SSH access, so every device running that firmware uses the exact same keys. Because the keys and certificates are identical across many products, they are relatively easy to exploit.
There are many reasons why this large number of devices are accessible from the Internet via HTTPS and SSH. These include:
- Insecure default configurations by vendors
- Automatic port forwarding via UPnP
- Provisioning by ISPs that configure their subscribers' devices for remote management
With the proliferation of IoT devices in sectors such as healthcare, energy, oil and refining, and the water grid, the problem is more serious than we might think. Back in 2015, SEC Consult analyzed the firmware of more than 4,000 embedded devices from over 70 vendors. The company examined the use of cryptographic public keys, private keys, and certificates in the firmware images of products like routers, modems, and IP cameras, and found more than 580 unique private keys distributed across the devices.
In 2016, SEC Consult reported that the number of web-accessible devices with known private keys for their HTTPS server certificates had increased by 40% since the previous year: from 3.2 million devices in November 2015, the number of IPv4 hosts using a known private key had risen to 4.7 million.
The security implications of cryptographic key reuse are substantial. Impersonation, man-in-the-middle or passive decryption attacks are possible. These attacks allow an attacker to gain access to sensitive information like administrator credentials which can be used in further attacks. In order to exploit this vulnerability, an attacker must be in the position to monitor/intercept communication. This is easily feasible when the attacker is located within the same network segment (local network). Exploiting this vulnerability via the Internet is significantly more difficult, as an attacker must be able to get access to the data that is exchanged.
Searching for key fingerprints in data from Internet-wide scans is a low-cost way of finding the IP addresses of specific products/product groups. This enables researchers to measure the extent of the problem, but attackers can use this approach as well to exploit vulnerabilities (e.g. weak passwords or vulnerabilities in firmware) at scale.
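A sketch of the basic technique: compute an OpenSSH-style SHA-256 fingerprint for each scanned public key and group hosts that share one (the hosts and key blobs below are made up for illustration):

```python
import base64
import hashlib
from collections import Counter

# Sketch: group scanned hosts by public-key fingerprint to spot
# shared, baked-in keys. The scan data below is made up.

def fingerprint(pubkey: bytes) -> str:
    """OpenSSH-style SHA-256 fingerprint of a raw public-key blob."""
    digest = hashlib.sha256(pubkey).digest()
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

scan_results = {
    "198.51.100.1": b"key-A",   # placeholder key blobs
    "198.51.100.2": b"key-A",   # same firmware image, same baked-in key
    "203.0.113.9":  b"key-B",
}

counts = Counter(fingerprint(k) for k in scan_results.values())
shared = {fp: n for fp, n in counts.items() if n > 1}
print(shared)  # fingerprints observed on more than one host
```

Run against real Internet-wide scan data, the same few dozen lines surface entire product families sharing one key, which is how researchers arrive at figures like those above.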
John Green of Aruba Networks, which builds firmware for various network devices, said: “In the past we were persuaded by the ‘but certificates are too complicated—just leave the factory default cert as-is and customers who care about security can update it’ argument, but I now think we're doing a disservice to customers by giving them too much rope with which to hang themselves.”
Key reuse is “bad hygiene”
Key reuse is also a problem for DevOps environments. A 2017 Venafi study indicated that 68 percent of mature DevOps respondents and 79 percent of adopting respondents allow key reuse. This is a “bad hygiene” practice, as the security of keys and certificates requires more attention. If the keys and certificates used by DevOps teams are not properly protected, cyber criminals will be able to exploit SSL/TLS keys and certificates to create their own encrypted tunnels. Attackers can use misappropriated SSH keys to pivot inside the network, elevate their own privileged access, install malware or exfiltrate large quantities of sensitive corporate data and intellectual property, while remaining undetected.
Mitigating key reuse is twofold
Mitigating this threat is a twofold activity: users and vendors both have a part to play.
First, users of IoT devices should not rely on the factory-default certificate to protect HTTPS communication. Such a certificate provides very little security: with a known private key, an attacker can mount a man-in-the-middle attack without the user noticing. What can be done? Replace it with a certificate issued by a public CA. On the other side of the equation, vendors and app developers should ensure that each device or application uses random, unique cryptographic keys, in accordance with the established certificate and key management practices described in NIST publications.
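On the vendor side, one common remedy is to provision keys on the device at first boot instead of baking them into the firmware image. A minimal provisioning sketch that shells out to the openssl CLI (the output directory, filenames, and CN format are placeholders, not any vendor's actual scheme):

```python
import os
import subprocess
import tempfile
import uuid

# Sketch of first-boot provisioning: generate a fresh key pair and
# self-signed HTTPS certificate per device, so no two devices share
# a key. Paths and the CN format are placeholders.

def provision_device_cert(out_dir: str) -> None:
    cn = f"device-{uuid.uuid4()}"   # unique per device
    subprocess.run([
        "openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
        "-keyout", os.path.join(out_dir, "https.key"),
        "-out", os.path.join(out_dir, "https.crt"),
        "-days", "365", "-subj", f"/CN={cn}",
    ], check=True)

out_dir = tempfile.mkdtemp()        # stand-in for persistent device storage
provision_device_cert(out_dir)
print(sorted(os.listdir(out_dir)))  # key and certificate unique to this device
```

Because the key is generated on the device and never leaves it, a firmware dump of one unit no longer compromises every other unit running the same image.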