This week it matters. The open and fertile terrain that gave rise to innovations like Facebook and Amazon, WikiLeaks and countless open source projects may soon have to answer to the Department of Justice before deploying anything new. Ever. The debate over encryption backdoors and government data access is coming to a head: a bill drafted earlier this year by Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-Conn.) could place internet companies, any and all of them, under the auspices of the Attorney General of the United States. This time, the legislation targets Section 230, a landmark law protecting free speech online. Under the proposed EARN IT Act, many of those protections would be withheld unless internet companies comply with "best practices," a phrase widely read to mean encryption backdoors. In the name of child protection, we arrive at a summit moment where the fate of personal data privacy, free speech and internet innovation hangs in the balance, with full government oversight and individual privacy rights at an impasse. If we had to choose just one, the Graham-Blumenthal bill makes us do so. At this point, it will be up to Congress to decide.
In America, the encryption battle may finally be coming to a head. Are we weeks away from adopting practices similar to Australia's, à la its lawful-access backdoor law of 2018? Will we have our own industry insights to report two years later, as heavyweights like Google and Facebook fall in line behind the Attorney General and allow their cryptographic safeguards to be weakened?
First, let’s find out what’s at stake.
- Section 230 puts the onus for free speech on the person who said it, not on the internet company that hosted the comment. The new EARN IT Act could change that, stripping away those protections unless companies comply with security "best practices" outlined by a 15-member commission. The Electronic Frontier Foundation suggests backdoors would be high on the list of "best practices." Does that alter the principle of who is ultimately accountable for their free speech? Just a question.
- The bill centers on preventing online child exploitation, a noble end, and the profiles of the 15 commission members were selected to that end. Nine work in or closely with law enforcement, four work in online services with "experience in child safety," and two are technologists with "experience in computer science or software engineering."
- Wait a minute. If this is really about online child exploitation, something doesn't sit right. The penalty for failing to comply with all "best practices" is unlimited liability for crimes against children committed on fully encrypted platforms (Apple's and Facebook's, for example). Crimes that will somehow be caught regardless of the full end-to-end encryption on those platforms.
Crimes, in other words, that didn't need a backdoor to catch. This raises more questions than answers.
Technologists have been speaking out against the bill over the very value it claims to defend: safety. Alongside child safety are the possibly unforeseen effects on national safety. In keeping with this concern, the EFF even added to its article a direct-email tool that sends your local Congressman a pre-written letter expressing support for end-to-end encryption. I filled one out.
Congress has some weighty decisions to make, and how this vote falls could determine more than political rivalries and WhatsApp chats. It sets a fundamental precedent for free speech online, for who is responsible for it, and why. And, considering some of the nuances of the bill, it raises questions of practicality and motive.
Weakening encryption across all platforms leaves every bit of infrastructure that touches the internet vulnerable to cyberattack. Weak for one means weak for all. Military, law enforcement and national defense administrators will have to deal with the repercussions for our national security posture, facing cyberthreats from what could be an increasingly compromised position.
As the EFF states, “International diplomats from many countries, including the U.S. State Department, rely heavily on encrypted services to get their work done. The U.S. military also relies on encryption, and Congressman Ro Khanna has spoken up about the importance of encryption to national defense.”
- Battle of the Backdoors in Networking Infrastructure: Intentional vs. Incidental
- Venafi Survey: The Negative Impact of Government Mandated Encryption Backdoors
- Why are Government Officials Who Know Next to Nothing About Encryption So Eager to Mandate Encryption Backdoors?
If you haven't done so already, Android app developer, Google suggests you encrypt all your app data. Now.
Aside from the headline reason, here's why. There are two ways app data is stored: in a sandbox on the device itself, where app information can't be shared with other apps, and externally, where security is often weak or nonexistent. Google suggests we cut down on the risk to personally identifiable information (which we all give out so readily with an Accept and Continue button) by sandboxing our externally housed data as well.
In addition, Google suggests that app developers encrypt that external sandboxed information to add an extra layer of protection. That sounds like security best practice to me.
To help, Google points to its security library Jetpack Security (JetSec), which enables developers to encrypt and decrypt files containing the sensitive information their apps collect.
Is there anything to gain by Google simultaneously issuing a strong suggestion to encrypt all Android app data and offering a product that can do so? Maybe, but it might be beside the point. The bigger issue might be that we simply can’t afford to leave personal data exposed.
Additionally, Google suggests adding biometrics as a further security feature. Seeing as Android holds roughly a 75% share of the mobile OS market, Google owns Android, and the big players have lately been the targets of data breaches, I'm going to have to think about that one. For better or worse, that biometric information is being stored somewhere, and we know biometrics databases get hacked.
Here's to preventing what we can, while we can; one fully encrypted sandbox at a time.
- Is Mobile Encryption Really an Urgent Public Safety Issue?
- Why True End-To-End Encryption is Important for Distributed Apps
- Securing the Supply Chain: Machine Identity Management in IoT Applications
We all want faster internet. But at what cost? It may not make a difference.
So, you can download a full two-hour movie in 3.6 seconds or scroll rapidly through Feedly with zero latency. Cool. 5G is purportedly up to 100 times faster than its 4G predecessor at top speeds. Very cool. What's not so cool is that 5G rolled out with sub-par cryptographic protections. And that leaves us with a sad bit of irony: you could get hacked even faster.
This seems to be a pressing issue these days. DevOps teams (or simply "developers" to the up-and-coming) have directives to roll new things out, fast. InfoSec is struggling to herd all the cats, with developers often taking what seems the sensible route at the time and relying on the built-in security options of the software they're using, or (even worse) just rolling out packages with less-than-thorough security oversight. That could be the issue. Or there could be others.
That’s up to you to decide.
The (Lingering) Issue of Unencrypted Pre-Authentication Messages
- Part of the issue stems from the fact that we still don't encrypt the messages cell towers broadcast to nearby mobile phones saying "I'm your tower." In other words, vulnerability is inherent in the pre-authentication messages between user and carrier tower. That's a huge opportunity for bad actors to swoop in and steal signal, as we've highlighted here.
- (Pro tip: there's actually an opt-in 5G Subscription Permanent Identifier (SUPI) protection feature that will secure you against these attacks. It's worth looking into, but as common sense and researchers agree, until it's no longer optional, widespread implementation is dubious.)
- “You could, and probably should, use digital certificates to provide these devices with a way to cryptographically verify that they are indeed talking with a base station... If you use digital certificates, you can very easily decide which certificate authorities you trust,” offered technology expert Roger Piqueras Jover at this year’s Shmoocon conference. It’s a fix.
- All things considered, according to Bruce Schneier, cryptographer and fellow at the Harvard Kennedy School, “There are 20 problems with 5G, and [vulnerability in pre-authentication messages] might be problem number 17.”
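What Jover describes is standard PKI: the handset ships with a set of trusted certificate authorities and rejects any "tower" whose certificate doesn't chain back to one of them. The Go sketch below illustrates that verification step with a toy CA and a made-up base-station name (`gnb-001.example-carrier.net`); the helper names and identities are my own, and this is the general X.509 idea rather than any actual 5G protocol mechanism.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newCA creates a self-signed root: the trust anchor a handset would ship with.
func newCA() (*x509.Certificate, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Example Carrier Root CA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}

// issueTowerCert has the CA sign a certificate for one base-station identity.
func issueTowerCert(ca *x509.Certificate, caKey *ecdsa.PrivateKey, name string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: name},
		DNSNames:     []string{name},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

// verifyTower is the handset's side: check the presented certificate
// against the CAs it has chosen to trust.
func verifyTower(tower, ca *x509.Certificate, name string) error {
	roots := x509.NewCertPool()
	roots.AddCert(ca)
	_, err := tower.Verify(x509.VerifyOptions{Roots: roots, DNSName: name})
	return err
}

func main() {
	ca, caKey, err := newCA()
	if err != nil {
		panic(err)
	}
	tower, err := issueTowerCert(ca, caKey, "gnb-001.example-carrier.net")
	if err != nil {
		panic(err)
	}
	if err := verifyTower(tower, ca, "gnb-001.example-carrier.net"); err != nil {
		panic(err)
	}
	fmt.Println("base station certificate verified")
}
```

A rogue tower presenting a certificate signed by anyone outside the trusted pool fails `verifyTower`, which is exactly the "decide which certificate authorities you trust" property Jover points to.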
The Huawei and Government Backdoor Issue
- The US is still wary of Huawei, and a lot of Huawei 5G equipment is, and will be, under suspicion of ferrying data back to the Chinese government.
- Not only the Chinese government: “A lot of those vulnerabilities that carried over from 4G were put there by the government, or at least were not fixed in the [standards-setting body] ITU,” said Schneier. This leads to some overwhelming questions.
Not meant to be safe?
Ultimately, it looks like mobile phones, their networks and associated apps were never intended to be places of high-walled security and impenetrable privacy. Conscientious researchers hope to ameliorate that with industry-standard fixes, and "the greatest thing that has happened in cellular," the 5GReasoner proposal, lays out a potential framework for making 5G cryptographically safe.
According to Schneier, however, the problem goes back farther than Jover, the proposal or any after-market fixes can reach: it is systemic, a carryover of the encryption debate between backdoors and privacy and the motivations that underlie it.
In a comment to CSO, Schneier put it bluntly: “Nobody wants 5G security...Governments like spying on 5G. Carriers don’t care very much. They’ll do what the law says.”