Open-source SDKs improve confidential computing

If you’ve been following the confidential computing space, you may have noticed that many software development kits (SDKs) that use Trusted Execution Environments (TEEs) are following the open-source route.

It may seem counter-intuitive that SDKs promoting privacy-first computing allow their source code to be visible to all. However, it is necessary to open-source the SDK in order to improve the trust and transparency of the privacy-enhancing technology.

What is confidential computing, and how can developers get started?

Confidential computing is a rapidly growing technology that keeps data secure at rest, in transit, and during processing. Whereas traditional encryption techniques protect only data at rest or in transit, confidential computing goes the extra mile to protect data while it is being processed. One way to implement confidential computing is to use a Trusted Execution Environment (TEE). TEEs use hardware-based technologies to secure an area of the CPU in which code and data are physically isolated and cannot be tampered with, even by cloud administrators or other systems. Such protected areas are called enclaves.
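To see the gap that confidential computing closes, consider this minimal sketch using standard Java cryptography. It protects data at rest with AES-GCM, but a conventional server must decrypt the data before it can process it, exposing plaintext to the host. (All names here are illustrative, not from any particular SDK.)

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class DataStates {
    public static void main(String[] args) throws Exception {
        // Classic encryption protects data at rest and in transit.
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey key = gen.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] atRest = enc.doFinal("customer record".getBytes(StandardCharsets.UTF_8));

        // ...but to *process* the data, a conventional server must first
        // decrypt it, exposing plaintext to the OS, hypervisor, and admins.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        String inUse = new String(dec.doFinal(atRest), StandardCharsets.UTF_8);
        System.out.println("Plaintext visible during processing: " + inUse);
        // A TEE closes this gap: the decryption and processing above happen
        // inside a hardware-isolated enclave instead of on the open host.
    }
}
```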

With enclaves, use cases that were not possible before become possible. For example, insurance companies can detect duplicate claims for the same bill without disclosing customer data to their competitors. There are many other use cases in which an organization can develop internal applications that protect customer data.

Those driving the confidential computing movement are now focusing on simplicity of development and deployment. For example, some SDKs provide powerful high-level APIs that hide the low-level complexities of the TEEs from manufacturers such as Intel and AMD. Thankfully, you can now develop enclaves using high-level languages such as Java, Kotlin, and JavaScript, making it easier to code confidential applications.

With such an easy-to-use SDK, you can focus more on your enclave’s business logic and applications. Some of these SDKs also offer cloud solutions that provide privacy-preserving SaaS platforms for deploying confidential, event-driven workloads.
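As a sketch of what "focus on business logic" can look like, here is a toy model of such a high-level API. The `Enclave` base class below is hypothetical, defined inline to stand in for whatever a real SDK provides; the subclass contains only the insurance duplicate-claim logic from the earlier example.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical base class standing in for a high-level SDK's enclave API.
// A real SDK would supply this and route messages through the hardware TEE.
abstract class Enclave {
    abstract String receiveMail(String mail);
}

// Pure business logic: the duplicate-claim check, with bill IDs kept
// inside the enclave; callers only ever learn a yes/no answer.
class ClaimCheckEnclave extends Enclave {
    private final Set<String> seenBills = new HashSet<>();

    @Override
    String receiveMail(String billId) {
        return seenBills.add(billId) ? "NEW" : "DUPLICATE";
    }
}

public class Host {
    public static void main(String[] args) {
        // In a real deployment, the host relays encrypted mail into the TEE.
        Enclave enclave = new ClaimCheckEnclave();
        System.out.println(enclave.receiveMail("bill-42")); // NEW
        System.out.println(enclave.receiveMail("bill-42")); // DUPLICATE
    }
}
```

The point of the design is that the developer writes only `ClaimCheckEnclave`; attestation, isolation, and message routing stay behind the SDK's abstraction.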

The decision to go open source

So, why would an SDK promoting privacy-first computing reveal its source code to the world?

Confidential computing aims to remove the need to trust the various parties involved in running an application. By using it, you no longer need to rely on your cloud service provider, application service provider, or the software stack in between. So, if you can’t inspect the source code of an SDK that simplifies confidential computing, how can you trust that service? Open sourcing is therefore important: it allows users to verify and audit the source code of the service, removing even the SDK provider from your trust model. If you use an open-source SDK to build your privacy-first application, in many cases the only party you must trust is the CPU manufacturer.

Building and developing a community around any trust technology is also essential. Open sourcing builds an active community of developers invested in the nuances of the technology. It also allows for easier access, better transparency, collaborative development, innovation, and faster adoption of confidential computing technologies.

Making confidential computing accessible to all developers

Some confidential computing techniques, such as zero-knowledge proofs and homomorphic encryption, require developers with advanced cryptography skills to build secure applications. However, if we intend to make data privacy the default for any business, it should be easy for most developers to work on a privacy-first application. SDKs that build on TEEs are more developer-friendly than these other confidential computing technologies.

The following are the key factors an SDK should take into account to improve its usability:

  • Developer friendliness: Provide high-level, intuitive APIs that help developers write secure applications.
  • Platform independence: Make it easy to work on different operating systems.
  • Multi-language support: Developers should be able to write code in Java, Kotlin, JavaScript, Python, or any other language.
  • Simple messaging: Implement easy-to-use, end-to-end encrypted messaging so clients can communicate securely with the enclave.
  • Cloud support: The SDK should integrate tightly with a complementary cloud offering.
  • Stability: Create stable applications that maintain their security guarantees even when the enclave runs on multiple physical machines or is restarted.
  • Enhanced security and performance: Use technologies such as GraalVM Native Image for lower memory usage and faster start-up times.
  • Cloud deployment: Simplify deploying secure applications to the cloud.
  • Auditability: Application developers or auditors should be able to remotely verify the source code of the enclave.
  • Automation and abstraction: Use plugins to automate and abstract computing and signing the enclave’s code hash.
  • Robust testing framework: Implement a comprehensive, easy-to-use framework for testing and remotely validating applications.
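The "simple messaging" item above can be illustrated with standard Java cryptography: an ECDH key agreement between a client and an enclave key (which, in a real system, would be published and verified via remote attestation), followed by AES-GCM encryption of the mail. This is a simplified sketch, not any SDK's actual protocol; the SHA-256-based key derivation in particular is a stand-in for a proper KDF.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyAgreement;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class EnclaveMail {
    // Derive a shared AES key from an ECDH agreement (simplified KDF: SHA-256).
    static byte[] sharedAesKey(java.security.PrivateKey mine,
                               java.security.PublicKey theirs) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("ECDH");
        ka.init(mine);
        ka.doPhase(theirs, true);
        return MessageDigest.getInstance("SHA-256").digest(ka.generateSecret());
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("EC");
        KeyPair client = g.generateKeyPair();
        KeyPair enclave = g.generateKeyPair(); // published via remote attestation

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Client encrypts a message only the enclave can read.
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(sharedAesKey(client.getPrivate(), enclave.getPublic()), "AES"),
                new GCMParameterSpec(128, iv));
        byte[] mail = enc.doFinal("claim: bill-42".getBytes(StandardCharsets.UTF_8));

        // The host relays the ciphertext but cannot open it; the enclave can.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(sharedAesKey(enclave.getPrivate(), client.getPublic()), "AES"),
                new GCMParameterSpec(128, iv));
        System.out.println(new String(dec.doFinal(mail), StandardCharsets.UTF_8));
    }
}
```

An SDK that hides this handshake behind a one-line "send mail to enclave" call is what the usability checklist is asking for.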

With a small TCB come big responsibilities

Intel’s SGX (Software Guard Extensions) based CPUs have a very small Trusted Computing Base (TCB). If you implement a privacy-first application that runs on SGX, you only need to trust the CPU and a small amount of supporting software. An SGX-based secure application assumes that everything in the computer other than the CPU is malicious, and defends against those components using encryption and authentication.

While the smaller TCB strengthens the security model of the SGX-based enclave, it has some less-obvious implications.

An enclave cannot directly access the hardware. Like any other program, it must ask the operating system’s kernel to handle requests on its behalf, since the kernel is the only program that controls the hardware. However, there is no direct access to the OS from inside an enclave; enclaves can communicate with the OS only by exchanging messages through the host code.

For example, the system clock falls outside the TCB. Developers often use the current time in application logic. Because the real-time clock chip that maintains the current time sits outside the CPU, developers must consider the risk of an untrustworthy machine owner tampering with it. Therefore, you should use the current time inside an enclave only when an error in it cannot harm the security of the application.
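A defensive pattern for this, sketched below under the assumption that the enclave only ever sees host-supplied timestamps: the enclave cannot know the true time, but it can at least refuse to let the clock run backwards, and it must never make a security decision depend on the value. The class and method names are illustrative.

```java
// Sketch of defensive handling of host-supplied time inside an enclave.
// The host's clock is outside the TCB, so the enclave can at best enforce
// weak properties (e.g. monotonicity) and must never let a bad timestamp
// weaken its security guarantees.
public class UntrustedClock {
    private long lastSeen = 0;

    // Returns the host's claimed time, clamped so it never goes backwards.
    long observe(long hostClaimedMillis) {
        lastSeen = Math.max(lastSeen, hostClaimedMillis);
        return lastSeen;
    }

    public static void main(String[] args) {
        UntrustedClock clock = new UntrustedClock();
        System.out.println(clock.observe(1000)); // 1000
        System.out.println(clock.observe(2000)); // 2000
        // A tampered host rolls the clock back; the enclave ignores it.
        System.out.println(clock.observe(500));  // still 2000
    }
}
```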

If any of these components (the host software, the kernel, or the hardware itself) is malicious, it can cause security issues when enclaves access the hardware by exchanging messages with the host. Developers who want to deploy privacy-enhancing applications should remember that enclave security guarantees do not apply to the entire computer; they are a feature of the CPU.

Next steps

If you’re hearing about confidential computing for the first time, it’s an excellent idea to start with a Hello World sample to see how easy it is to create enclaves in languages like Kotlin and Java.

Happy coding!
