TEE Computation Nodes
Trusted Execution Environments (TEEs) are hardware technologies that enable confidential computing. A TEE executes isolated, verifiable code within protected memory regions known as enclaves or secure worlds, keeping both code and data safeguarded from external threats such as malicious software and unauthorized access. This isolation is crucial for preserving the integrity of machine learning models, particularly in potentially insecure environments like edge devices or cloud platforms. Developers use TEEs to protect the intellectual property embedded in their models and to ensure that model outputs are reliable and resistant to tampering.
The Glacier TEE Node runs on hosts that support TEE technology, contributing both the TEE environment and computational resources to the network in exchange for rewards. The currently supported TEE technologies are Intel SGX and NVIDIA Confidential Computing.
Each TEE Node registers its unique Enclave ID on a blockchain contract during operation. This ID, which the hardware guarantees to be immutable, uniquely identifies a TEE Node.
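The following is a minimal sketch of what such a registration call could look like with web3.py. The registry contract, its `registerEnclave(bytes32)` function, the RPC endpoint, the contract address, and the key handling are all assumptions for illustration; the actual Glacier contract interface may differ.

```python
# Hypothetical Enclave ID registration via web3.py. The registry contract,
# its registerEnclave(bytes32) function, the RPC endpoint, the address, and
# the key handling are placeholders, not the real Glacier interface.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder endpoint

REGISTRY_ABI = [{
    "name": "registerEnclave", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "enclaveId", "type": "bytes32"}], "outputs": [],
}]
REGISTRY_ADDRESS = Web3.to_checksum_address("0x" + "00" * 20)  # placeholder
registry = w3.eth.contract(address=REGISTRY_ADDRESS, abi=REGISTRY_ABI)

def register_enclave(enclave_id: bytes, private_key: str) -> bytes:
    """Submit the hardware-derived, immutable Enclave ID to the registry."""
    account = w3.eth.account.from_key(private_key)
    tx = registry.functions.registerEnclave(enclave_id).build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = w3.eth.account.sign_transaction(tx, private_key)
    # Note: the attribute is named raw_transaction in web3.py >= 7.
    return w3.eth.send_raw_transaction(signed.rawTransaction)
```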
Developers can rapidly build TEE applications using the Glacier TEE SDK. The SDK encrypts all communications and data flows between the TEE and the outside world: only the user and the chip-internal environment of the TEE can decrypt the data, so no one else, including the TEE host, can access private information. Before using an enclave, a user must first establish a secure connection through remote attestation, using a key generated by the secure processor; the manufacturer-certified attestation process lets the user verify the integrity of the enclave.
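As a simplified sketch of the key-exchange step behind such an encrypted channel, the snippet below performs an X25519 Diffie-Hellman exchange and derives a shared AES-GCM session key. Real attestation additionally binds the enclave's key to a manufacturer-signed quote, which is elided here; the key names and `info` label are illustrative.

```python
# Simplified key exchange behind an attested, encrypted channel: both sides
# derive the same session key, which then seals all traffic with AES-GCM.
# The manufacturer-signed quote covering the enclave's key is omitted.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral keypair; public keys are exchanged during
# attestation (the enclave's would be covered by the attestation quote).
user_priv = X25519PrivateKey.generate()
enclave_priv = X25519PrivateKey.generate()

def session_key(own_priv, peer_pub) -> bytes:
    """Derive a 256-bit session key from the Diffie-Hellman shared secret."""
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"glacier-tee-channel").derive(shared)

key = session_key(user_priv, enclave_priv.public_key())
assert key == session_key(enclave_priv, user_priv.public_key())

# All traffic into and out of the enclave is sealed with the session key.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"private model input", None)
```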
Applications developed with the Glacier TEE SDK generate a Work Proof from within the TEE on the host. Because of the chip's protective measures, a Work Proof can only be generated inside the enclave, while its validity can be verified by anyone outside.
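The generate-inside, verify-outside property can be illustrated as a signature over a work record by a key that never leaves the enclave. The record layout and hashing below are assumptions; the actual Glacier proof format is defined by the SDK.

```python
# Illustrative Work Proof: signed inside the enclave, verifiable outside.
# The record fields are assumptions, not the SDK's actual proof format.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

enclave_key = Ed25519PrivateKey.generate()  # stands in for a hardware-bound key

def make_work_proof(task_id: str, output: bytes) -> dict:
    """Inside the enclave: bind the task and its output hash to a signature."""
    record = {"task_id": task_id, "output_hash": hashlib.sha256(output).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": enclave_key.sign(payload)}

def verify_work_proof(proof: dict, enclave_pub) -> None:
    """Outside the enclave: raises InvalidSignature if the proof was tampered with."""
    payload = json.dumps(proof["record"], sort_keys=True).encode()
    enclave_pub.verify(proof["signature"], payload)

proof = make_work_proof("task-42", b"inference result")
verify_work_proof(proof, enclave_key.public_key())
```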
Hardware attestation roots trust directly in the device itself, without requiring heavy computation or large capital investment. Each device acts as a proxy for a human user and provides a default proof of stake based on the device’s intrinsic value. The manufacturer’s validation processes reinforce this trust in two ways (see the sketch after this list):

- The device can sign the data it produces and share the signed data on the blockchain.
- It can attest to the authenticity of the code it runs, increasing trust in the operational software.
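A compact sketch of both claims, assuming a manufacturer-certified device key: the device signs the data it produces together with a measurement (hash) of the code it runs, in the spirit of SGX's MRENCLAVE. The field names and structure are illustrative, not the Glacier SDK's format.

```python
# The device binds produced data to a measurement of its running code in one
# signed statement; the statement (or its hash) can then be posted on-chain.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # certified by the manufacturer in practice

def attest(data: bytes, code: bytes) -> dict:
    """Sign the produced data together with the code measurement."""
    measurement = hashlib.sha256(code).digest()  # code identity (cf. MRENCLAVE)
    data_hash = hashlib.sha256(data).digest()    # integrity of the produced data
    return {
        "measurement": measurement,
        "data_hash": data_hash,
        "signature": device_key.sign(measurement + data_hash),
    }

claim = attest(b"sensor reading", b"runtime binary bytes")
```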
To perform inference within a TEE, data providers and model providers first encrypt their datasets and models so they remain secure in transit. A remote attestation ceremony then takes place, in which identities and keys are exchanged and a secure communication channel is established. Once trust is established, the encrypted data and model are transferred into the TEE, where they are decrypted and processed. The inference results are encrypted before being returned to the data provider, keeping the data protected end to end.
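The snippet below sketches the shape of this flow, with the attested channel reduced to pre-shared AES-GCM keys (the key exchange itself is sketched in the attestation example earlier). The model is a stand-in callable; the real serialization and transfer are handled by the SDK.

```python
# Confidential inference, simplified: inputs are sealed before leaving the
# provider, unsealed only inside the enclave, and results leave re-encrypted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data_key, result_key = AESGCM.generate_key(256), AESGCM.generate_key(256)

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)  # fresh nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

# Data provider: encrypt inputs before they leave trusted custody.
encrypted_input = seal(data_key, b"42.0,17.3")

# Enclave: decrypt, run the model, and re-encrypt before leaving protected memory.
def enclave_infer(blob: bytes) -> bytes:
    features = [float(x) for x in unseal(data_key, blob).decode().split(",")]
    prediction = sum(features)  # placeholder for the real model
    return seal(result_key, str(prediction).encode())

# Data provider: decrypt the returned result.
result = float(unseal(result_key, enclave_infer(encrypted_input)).decode())
```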
Training a model in a TEE follows a similar protocol to inference. After secure key exchanges and attestation, a data provider loads an encrypted dataset and an initial model into the TEE. Training proceeds in isolation, free from external interference. After several training epochs, the refined model weights are encrypted and securely transmitted back to the data provider.
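The same sealed-transfer pattern applies to training, as in the sketch below: the enclave decrypts the dataset and initial weights, runs a few epochs in isolation, and returns encrypted weights. A one-parameter least-squares model keeps the example self-contained; the helpers mirror those in the inference sketch, and all names are illustrative.

```python
# Confidential training, simplified: dataset and weights enter sealed, the
# gradient-descent epochs run in isolation, and refined weights leave sealed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data_key, result_key = AESGCM.generate_key(256), AESGCM.generate_key(256)

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def enclave_train(sealed_data: bytes, sealed_weight: bytes, epochs: int = 5) -> bytes:
    """Decrypt dataset and weight inside the enclave, train, re-encrypt."""
    pairs = [tuple(map(float, row.split(",")))
             for row in unseal(data_key, sealed_data).decode().splitlines()]
    w = float(unseal(data_key, sealed_weight))
    for _ in range(epochs):  # isolated training epochs
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= 0.01 * grad     # gradient-descent step on mean squared error
    return seal(result_key, str(w).encode())  # weights leave encrypted

sealed_out = enclave_train(seal(data_key, b"1.0,2.0\n2.0,4.1"), seal(data_key, b"0.0"))
trained_w = float(unseal(result_key, sealed_out).decode())
```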