
Interview with Julian Durand: The evolving role of PKI


By Team Intertrust



The second installment of our thought leadership series features Julian Durand, Intertrust’s Vice President of Product Management, as we discuss the evolving role of PKI, especially how it relates to the IoT.

—Before we get to how and why PKI has evolved, can you share why we need it in the first place?

Long before the invention of public key cryptography, critical communications were protected with ciphers. While they worked for thousands of years, key management and distribution were always a problem.

One of the most famous and simplest ciphers is the Caesar cipher, named after Julius Caesar, who used it to send military messages. It uses a secret key that shifts the letters in a document: each letter is replaced by the letter a fixed number of positions down the alphabet. For example, with a shift of four, the letter A becomes the letter E, and so on. The key in this case is held by the message’s receiver, who knows to shift all the letters in the document back by four.
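As a concrete sketch, here is a minimal Caesar shift in Python (the function name and messages are illustrative, not from the interview):

```python
def caesar_shift(text: str, key: int) -> str:
    """Shift each letter `key` positions down the alphabet, wrapping at Z."""
    result = []
    for ch in text.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord("A") + key) % 26 + ord("A")))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

ciphertext = caesar_shift("ATTACK AT DAWN", 4)   # a shift of 4 turns A into E
plaintext = caesar_shift(ciphertext, -4)         # the receiver shifts back with the same key
print(ciphertext, "->", plaintext)
```

Both sides need the same number, 4, and that shared number is exactly the secret that has to be distributed.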

If you’re sending a Caesar-shifted document, I can’t decipher it unless I get the secret somehow, right? And so that’s the key distribution problem: how do you send that secret to me, and how do you ensure it arrives safely?

And it gets even worse as you scale it because, if I were a double agent, I could take that secret key, the letter shift, and sell it or give it to somebody else. It’s also easy to crack because the encryption is so simple. Ciphers got more sophisticated, however, culminating in the Enigma machine, originally developed in Germany at the end of the First World War. It used a series of electromechanical rotors that scrambled the alphabet into billions of combinations. But the machines on the sending and receiving ends had to use the same settings for the cipher to work. So, as a result, key tables and secrets were being sent all over the place. Despite the sophistication, you run into the same key distribution problem.

—So this key distribution problem led to the development of public key infrastructure?

Yes, although it didn’t happen until the 1970s, with the invention of the Diffie-Hellman-Merkle key exchange.

PKI allows a message to be encrypted without the sender and recipient ever sharing a secret key, and it is based on the idea of paired secrets. There is a private key and a public key. The private key authenticates the sender, and encrypting with the recipient’s public key ensures that only the recipient, who holds the matching private key, can open and read the message.
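To make the paired-secrets idea concrete, here is a minimal sketch using RSA via the open-source `cryptography` Python package (the choice of library is ours; any public key implementation works the same way):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The keypair: the private half never leaves its owner; the public half is shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone with the public key can encrypt; only the private key holder can decrypt.
ciphertext = public_key.encrypt(b"meet at dawn", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"meet at dawn"

# Conversely, the private key signs and the public key verifies, authenticating the sender.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"meet at dawn", pss, hashes.SHA256())
public_key.verify(signature, b"meet at dawn", pss, hashes.SHA256())  # raises if tampered
```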

It works because of the difficulty of factoring the product of two large prime numbers. When you multiply two large primes together, you get a new number so large that no one can feasibly factor it. So you end up with this big encryption key, but if you know one of the two primes, you can figure out the other with a trivial calculation; that trapdoor is what the sender and receiver rely on. If you try to brute force it instead, you run out of compute capacity.
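A toy illustration of that asymmetry, using two well-known primes that are far smaller than real RSA moduli:

```python
p = 999_999_937          # both are well-known primes
q = 2_147_483_647        # 2**31 - 1, a Mersenne prime

n = p * q                # the easy direction: one cheap multiplication

# Knowing either factor makes the "hard" direction a single division:
assert n // p == q

# Without that trapdoor you are back to trial division over the whole range,
# which at real RSA sizes (moduli of 600+ decimal digits) exhausts any
# available compute capacity.
```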

So PKI solves the key management and distribution problem. You still generate and distribute symmetric keys, but now you have a secure mechanism for doing so. In fact, many protocols establish a symmetric session key as part of their handshake; Transport Layer Security, or TLS, is probably the most famous.
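You can watch that negotiation happen with nothing but Python’s standard library (the host below is just an example):

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # Public key cryptography has authenticated the server and exchanged
        # a session key; bulk traffic now uses the negotiated symmetric cipher.
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
```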

—What about managing devices and certificates in an IoT infrastructure? Does it strain the traditional PKI approach?

Well, the trouble is, when we get to managing Internet of Things devices, PKI becomes very difficult to scale, revocation gets hard, and you can also run into performance problems.

The devices are often constrained. There are performance penalties to running the big modular-arithmetic operations that public key cryptography requires, so ideally you would want to run everything in hardware. That is possible with symmetric ciphers, so not only do you avoid the performance hit, you’re not draining energy from the devices, and that’s important for battery life.
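A rough way to see the gap for yourself, assuming the `cryptography` package is installed (the exact numbers depend entirely on hardware, but the ratio is typically large):

```python
import os
import time
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives import hashes

msg = os.urandom(64)

# 1000 symmetric (AES-GCM) encryptions.
aes = AESGCM(AESGCM.generate_key(bit_length=128))
start = time.perf_counter()
for _ in range(1000):
    aes.encrypt(os.urandom(12), msg, None)          # nonce, plaintext, no AAD
aes_time = time.perf_counter() - start

# 1000 public key (RSA-2048) signatures: one big modular exponentiation each.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
start = time.perf_counter()
for _ in range(1000):
    key.sign(msg, pss, hashes.SHA256())
rsa_time = time.perf_counter() - start

print(f"1000 AES-GCM encryptions: {aes_time:.3f}s; 1000 RSA signs: {rsa_time:.3f}s")
```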

Scale is another factor to consider. A big site might have tens of millions of identities to manage; with IoT you are talking about hundreds of millions to billions of devices. So how do you scale this enormous trust model with the computationally intensive elements of public key cryptography?

And with the scale of devices we’re talking about here, revocation is also an issue. When you have so many devices and one of them is compromised, how do you handle the revocation? Send around a 10-20 megabyte certificate revocation list every day? Not only is that unwieldy, but when you’re paying expensive data costs on a constrained device, it’s not a good solution.

So what about doing an OCSP, or Online Certificate Status Protocol, lookup to determine the status of the device? The trouble with that is that not all devices have internet access to perform the lookup. And there is more and more autonomy at the edge; in many cases, as much as 70 or 80 percent of data will never go back to the cloud or a data center. Instead, information is processed at the edge itself.
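For reference, this is roughly what such a lookup involves. The sketch below builds an OCSP request with the `cryptography` package; the certificate file names are assumed for illustration:

```python
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

# Hypothetical inputs: the device's certificate and the CA certificate that issued it.
with open("device_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer_cert.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request_der = builder.build().public_bytes(serialization.Encoding.DER)

# The device would now POST request_der over HTTP to the responder URL (taken from
# the certificate's Authority Information Access extension) and parse the signed
# reply with ocsp.load_der_ocsp_response() -- exactly the network round trip an
# offline edge device cannot make.
```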

—Are there special considerations for remote devices processing information autonomously, especially in IoT ecosystems?  

The big question here is: how do you determine the status of trust at the edge?

This is where device attestation comes into play. That’s a newer development, largely a response to the Pegasus spyware attacks.

Pegasus revealed a major vulnerability that was being overlooked. If a device sending you a message is doing so with malicious intent, it can succeed because of a flawed trust model, where the receiving system naively trusts that message and starts to process it.

Pegasus targets were sent a malformed GIF, and when a device’s operating system started parsing it, it blew up the messaging stack, causing a buffer overflow and enabling Pegasus to deploy an exploit kit.

What makes IoT devices uniquely vulnerable is not only their autonomy but also their situation in the field. They are in physical environments where anyone can pick up the device and start tampering with it. 

—Can you explain how Intertrust offers ways to navigate zero trust and protect IoT devices at the edge?

Sure. We saw the problem of not being able to trust the network, and even the data coming from the device. That’s why we built an edge-to-cloud security solution called XPN, or Intertrust Explicit Private Networking. XPN ensures that the state of a device is good and that it is persistently protecting itself, even in zero-trust environments. It is a lightweight mechanism that is hyper scalable for IoT. 

An XPN client establishes trust in the security of the endpoint through techniques like signed software and entity attestation tokens.
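The general shape of entity attestation looks something like the sketch below; this is not Intertrust’s token format, and JSON with Ed25519 stands in for the CBOR/COSE encoding real entity attestation tokens use, with all identifiers hypothetical:

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice this key is provisioned into secure hardware at manufacture.
device_key = Ed25519PrivateKey.generate()

# The device signs a snapshot of its own state...
claims = {
    "device_id": "sensor-0042",
    "firmware_hash": "sha256:ab12...",
    "secure_boot": True,
    "issued_at": int(time.time()),
}
payload = json.dumps(claims, sort_keys=True).encode()
token = (payload, device_key.sign(payload))

# ...and the service verifies it against the device's registered public key
# before trusting anything else the device says. Tampering raises an exception.
device_key.public_key().verify(token[1], token[0])
```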

XPN extends beyond today’s connection-oriented protocols, like VPNs and TLS, which are all about securing the pipe. Data gets wrapped in an XPN packet within a secure enclave, and it only gets unwrapped and verified when it arrives at the XPN service, where it’s then fully governed and managed.
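Conceptually, protecting the data rather than the pipe looks like the following sketch (our illustration, not Intertrust’s actual packet format): the payload is sealed with an authenticated-encryption key at the edge and stays opaque across any number of untrusted hops until the receiving service unwraps it.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

wrap_key = AESGCM.generate_key(bit_length=256)   # shared with the service out of band
aead = AESGCM(wrap_key)

nonce = os.urandom(12)
header = b"device=sensor-0042"                   # sent in clear but bound to the packet
sealed = nonce + aead.encrypt(nonce, b'{"temp": 21.4}', header)

# ...sealed can traverse brokers, gateways, and caches unmodified...

received_nonce, body = sealed[:12], sealed[12:]
plaintext = AESGCM(wrap_key).decrypt(received_nonce, body, header)  # raises if tampered
```

Because verification happens on the data itself, nothing on the path between device and service has to be trusted.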

We believe our trust model, focused on protecting the data, not the wires, is the future for IoT device security. Even in a sea of untrusted networks, rife with malicious actors, you can use XPN to keep billions and billions of devices secure, predictable, and operating in a controlled, efficient fashion.
