▪️Glacier Nodes
In a decentralized, data-centric blockchain such as Glacier Network, a node is a device or computer (representing a participant in the network) that plays a critical role in maintaining the network's functionality, security, and data integrity.
A useful analogy is a marketplace: the more people trading in a market, the more prosperous it becomes. In the same way, nodes sustain the overall functionality, security, and growth of Glacier Network's AI-powered blockchain, and each node can be seen as a vote of confidence in the project.
The Glacier Network consists of two types of nodes: Validator Nodes and Worker Nodes.
The Worker Nodes are subdivided into Data Availability (DA) Nodes, TEE Computation Nodes, and Indexer Nodes. Each node type plays a crucial role in maintaining the network's functionality, from transaction validation and data storage to data availability, retrieval, and privacy-preserving computation. The sections below introduce each type in turn.
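As a quick orientation, here is a minimal sketch of this taxonomy in plain Python; the names are illustrative only and do not come from an official Glacier SDK.

```python
from enum import Enum

# Illustrative taxonomy only; these identifiers are assumptions,
# not part of any official Glacier SDK.
class NodeRole(Enum):
    VALIDATOR = "validator"          # verifies Worker Node work proofs
    DA = "data_availability"         # Worker Node: block generation and storage
    TEE_COMPUTE = "tee_computation"  # Worker Node: confidential computation
    INDEXER = "indexer"              # Worker Node: indexing and query processing

WORKER_ROLES = {NodeRole.DA, NodeRole.TEE_COMPUTE, NodeRole.INDEXER}

def is_worker(role: NodeRole) -> bool:
    """Validator Nodes verify; every other role is a Worker Node subtype."""
    return role in WORKER_ROLES
```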
Validator Nodes
Responsibility: Validator Nodes act as the guardians of the Glacier network, responsible for verifying the work proofs submitted by Worker Nodes.
Technical Principle: Validator Nodes run consensus algorithms to confirm that transactions comply with the network's rules. They check cryptographic proofs and validate the legitimacy of transactions, ensuring they are accurate and unaltered.
Node Task: Validator Nodes secure the network, monitor Worker Nodes for misbehavior, and keep operations running smoothly. If misbehavior is detected, the tokens staked by the offending Worker Node may be slashed.
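The following is a hedged sketch of this verify-then-slash flow, assuming a simple hash-based work proof and a hypothetical 10% slash rate; neither detail reflects Glacier's actual consensus rules.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch only: WorkProof, verify_proof, and SLASH_RATE are
# illustrative assumptions, not Glacier's real proof format or penalty.

@dataclass
class WorkProof:
    worker_id: str
    payload: bytes       # e.g. a data chunk the worker claims to store
    claimed_digest: str  # digest the worker committed to

SLASH_RATE = 0.10  # assumed penalty: 10% of stake per invalid proof

def verify_proof(proof: WorkProof) -> bool:
    """Recompute the cryptographic commitment and compare it to the claim."""
    return hashlib.sha256(proof.payload).hexdigest() == proof.claimed_digest

def settle(proof: WorkProof, stakes: dict[str, float]) -> None:
    if not verify_proof(proof):
        # Misbehavior detected: slash part of the worker's staked tokens.
        stakes[proof.worker_id] *= (1 - SLASH_RATE)

stakes = {"worker-7": 1_000.0}
bad = WorkProof("worker-7", b"chunk", claimed_digest="deadbeef")
settle(bad, stakes)
print(stakes["worker-7"])  # 900.0 after the 10% slash
```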
Data Availability (DA) Nodes
Responsibility: Data Availability (DA) Nodes are responsible for generating blocks and guaranteeing that data is stored immutably and remains retrievable and intact. They ensure the persistence and long-term availability of large volumes of data, such as training datasets and large AI models.
Technical Principle: The technical lifecycle within a DA node includes transaction submission, erasure coding, commitment creation, block propagation, light-client verification via Data Availability Sampling (DAS), proof verification, and storage in permanent storage layers.
Node Task: DA Nodes generate and store block data on decentralized storage layers. They randomly pick and download small portions of a data block, rather than the entire block, and verify these portions using Merkle proofs. If the randomly sampled portions verify, that is a strong probabilistic indication that the entire block is available and correct.
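Below is a minimal, self-contained illustration of the sampling idea: a block is split into chunks, committed to with a Merkle root, and a verifier checks a few random chunks against that root. Real DA layers pair this with erasure coding (e.g. Reed-Solomon) so that sampling also proves reconstructability; the chunk size and proof format here are simplified assumptions.

```python
import hashlib
import random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_layers(chunks):
    """Build all levels of a Merkle tree, duplicating the last node on odd levels."""
    layers = [[h(c) for c in chunks]]
    while len(layers[-1]) > 1:
        lvl = layers[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        layers.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return layers

def merkle_proof(layers, idx):
    """Collect the sibling hash (and its side) at every level for leaf idx."""
    proof = []
    for lvl in layers[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        sib = idx ^ 1
        proof.append((lvl[sib], sib < idx))  # (sibling hash, sibling-is-left)
        idx //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from a sampled chunk up to the block commitment."""
    node = h(leaf)
    for sibling, sib_is_left in proof:
        node = h(sibling + node) if sib_is_left else h(node + sibling)
    return node == root

# Light-client style check: sample a few random chunks instead of the whole block.
block = b"large AI training dataset ..." * 100
chunks = [block[i:i + 64] for i in range(0, len(block), 64)]
layers = merkle_layers(chunks)
root = layers[-1][0]  # the block commitment
for idx in random.sample(range(len(chunks)), k=4):
    assert verify(chunks[idx], merkle_proof(layers, idx), root)
```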
TEE Computation Nodes
Responsibility: TEE Computation Nodes ensure that all communications and data flows are encrypted and safeguarded from external threats, protecting the intellectual property of advanced AI models and datasets.
Technical Principle: TEE Computation Nodes utilize hardware-assisted confidential computing, including Intel SGX, AMD SEV, and NVIDIA Confidential Computing.
Node Task: Each TEE Computation Node registers its unique Enclave ID in an on-chain contract and runs on a host that supports TEE technology, contributing both the TEE environment and computational resources to the network in exchange for rewards.
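A hedged sketch of the registration step follows; the registry interface and attestation flag are assumptions for illustration, not Glacier's actual contract ABI.

```python
from dataclasses import dataclass, field

# Illustrative stand-in for an on-chain registry of TEE Computation Nodes.
# The class, method names, and attestation check are invented for this sketch.

@dataclass
class EnclaveRegistry:
    enclaves: dict[str, str] = field(default_factory=dict)  # enclave_id -> host

    def register(self, enclave_id: str, host: str, attestation_ok: bool) -> None:
        # A real node would submit a hardware attestation quote (SGX, SEV,
        # or NVIDIA CC) that the contract verifies before registration.
        if not attestation_ok:
            raise ValueError("attestation failed; enclave not registered")
        self.enclaves[enclave_id] = host

registry = EnclaveRegistry()
registry.register(enclave_id="0xabc123", host="tee-host-01", attestation_ok=True)
```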
Indexer Nodes
Responsibility: Indexer Nodes provide indexing and query-processing services for the Glacier Database and Vector Database, ensuring that data conforms structurally to predefined schemas and can be efficiently retrieved for analysis and application use.
Technical Principle: The Glacier Database Engine efficiently stores and manages large volumes of unstructured vector data in a decentralized environment. Indexer Nodes harness decentralized agents (dAgents), large language models (LLMs) such as OpenAI's GPT-4, and blockchain technology to provide a secure and scalable AI data storage solution.
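To make the idea of a vector index concrete, here is a toy nearest-neighbour lookup over a handful of embeddings; a production engine would use an approximate-nearest-neighbour structure (such as HNSW) rather than a linear scan, and the embeddings below are invented stand-ins.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

index = {  # doc id -> embedding (tiny stand-ins for real LLM embeddings)
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.1, 0.8, 0.3],
    "doc-3": [0.0, 0.2, 0.9],
}

query = [0.85, 0.15, 0.05]
best = max(index, key=lambda d: cosine(index[d], query))
print(best)  # doc-1: the embedding closest to the query
```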
Node Task: Indexer Nodes respond to user requests, which specify a schema, to create, update, and manage datasets; they then process data CRUD requests in the Database Engine, where a data index or vector-data index is constructed.
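The sketch below illustrates schema-conformant writes in the spirit of this flow; the schema format, field names, and functions are invented for illustration and do not reflect Glacier's actual API.

```python
# Hedged sketch: SCHEMA, validate, and create are hypothetical helpers,
# not Glacier Database Engine calls.

SCHEMA = {"title": str, "embedding": list}  # user-supplied dataset schema

def validate(record: dict) -> None:
    """Reject writes whose shape does not conform to the dataset schema."""
    for field_name, field_type in SCHEMA.items():
        if not isinstance(record.get(field_name), field_type):
            raise TypeError(f"field '{field_name}' must be {field_type.__name__}")

dataset: dict[str, dict] = {}  # stand-in for the Database Engine's store

def create(record_id: str, record: dict) -> None:
    validate(record)             # schema conformance check
    dataset[record_id] = record  # index construction would happen here

create("rec-1", {"title": "training set v1", "embedding": [0.1, 0.2]})
```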