Glacier Network
Trusted Execution Environments (TEE)



Trusted Execution Environments (TEEs) represent a pivotal advancement in hardware-assisted confidential computing. These environments facilitate the execution of code within secure, isolated compartments in memory, commonly referred to as enclaves or secure worlds. The primary function of TEEs is to shield the code and data within these enclaves from external threats, including malicious software and unauthorized access. This level of isolation is essential for preserving the integrity of machine learning models, particularly when they are operational in potentially compromised settings such as edge devices or cloud platforms. Utilizing TEEs enables developers to protect the intellectual property inherent in their models while ensuring the production of dependable and tamper-proof results.

Conducting inference within a TEE involves several critical steps. First, the data providers and the model provider prepare their respective dataset and model by encrypting them, securing both during transmission. Next, a remote attestation ceremony is performed in which the data provider, the model provider, and the TEE verify one another's identities, exchange cryptographic keys, and establish a secure communication channel.
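The key-exchange step above can be sketched as follows. This is a demonstration-only model of the key agreement, not the actual attestation protocol: real TEEs (e.g. Intel SGX) bind the exchange to a hardware-signed attestation quote, and production systems use vetted groups or X25519 rather than the toy Diffie-Hellman group chosen here for a stdlib-only example. All names (`dh_keypair`, `hkdf_sha256`, the `info` label) are illustrative.

```python
import hashlib
import hmac
import secrets

# Demonstration-only Diffie-Hellman group: a Mersenne prime with a small
# generator. Real deployments use vetted groups (RFC 3526) or X25519.
P = 2**127 - 1
G = 5

def dh_keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def hkdf_sha256(shared: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal single-block HKDF (RFC 5869) over HMAC-SHA256."""
    prk = hmac.new(b"\x00" * 32, shared, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# Data provider and the enclave each generate a keypair and swap publics;
# in a real attestation flow the enclave's public value would arrive
# inside a hardware-signed quote that the provider verifies first.
dp_priv, dp_pub = dh_keypair()
tee_priv, tee_pub = dh_keypair()

dp_secret = pow(tee_pub, dp_priv, P).to_bytes(16, "big")
tee_secret = pow(dp_pub, tee_priv, P).to_bytes(16, "big")

# Both sides derive the same session key for the secure channel.
dp_key = hkdf_sha256(dp_secret, b"tee-session")
tee_key = hkdf_sha256(tee_secret, b"tee-session")
assert dp_key == tee_key
```

Once this shared session key exists, the dataset and model ciphertexts can be transferred over the resulting channel, which is where the next stage picks up.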

Once trust has been established, the encrypted dataset and model are securely transferred into the TEE. Inside the TEE, these elements are decrypted, setting the stage for the inference process. During inference, the TEE executes the model’s analysis on the data while maintaining a secure and isolated environment. After the inference is complete, the results, such as computed labels, are encrypted prior to being sent back to the data provider. At this point, the model, having fulfilled its function, can be safely discarded.
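The enclave-side sequence described above (decrypt, infer, re-encrypt) can be sketched end to end. This is a toy model under loud assumptions: the XOR keystream cipher stands in for the authenticated encryption (e.g. AES-GCM) a real enclave would use, and the "model" is a hypothetical one-parameter threshold classifier; `enclave_inference` and every name in it are illustrative, not part of any real TEE SDK.

```python
import hashlib
import json
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    # Real enclaves use authenticated encryption such as AES-GCM.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def enclave_inference(session_key, nonce, enc_data, model):
    # 1. Decrypt the dataset inside the enclave.
    samples = json.loads(keystream_xor(session_key, nonce, enc_data))
    # 2. Run the model (here a hypothetical threshold classifier).
    labels = [1 if x >= model["threshold"] else 0 for x in samples]
    # 3. Encrypt the labels before they leave the enclave.
    out_nonce = secrets.token_bytes(12)
    enc_labels = keystream_xor(session_key, out_nonce, json.dumps(labels).encode())
    return out_nonce, enc_labels

# Data provider encrypts samples under the attested session key...
key = secrets.token_bytes(32)
nonce = secrets.token_bytes(12)
enc = keystream_xor(key, nonce, json.dumps([0.2, 0.9, 0.5]).encode())

# ...the enclave decrypts, infers, and returns encrypted labels,
# which only the session-key holder can read.
out_nonce, enc_labels = enclave_inference(key, nonce, enc, {"threshold": 0.5})
print(json.loads(keystream_xor(key, out_nonce, enc_labels)))  # → [0, 1, 1]
```

Note that the plaintext model and data exist only inside `enclave_inference`, mirroring how a real enclave confines decrypted material to protected memory before the model is discarded.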

The accompanying diagram illustrates the complete process of performing inference inside a TEE, from initial data and model preparation to the final secure delivery of inference results. This visual aid helps clarify the sequence of operations and the robust security measures in place to protect sensitive data and intellectual property throughout the process.
