Data Availability (DA) Nodes
Glacier DA offers a scalable data availability service that integrates decentralized storage with a data availability sampling network. It is designed to meet the specific needs of blockchain and AI applications, ensuring efficient and reliable data verification and storage. Glacier DA redefines blockchain scalability by combining erasure coding, KZG polynomial commitments, data availability sampling, and decentralized storage to deliver world-class data availability guarantees. It functions as a foundational (base) layer, offering scalable data hosting without transaction execution, specifically for rollups and GenAI applications that rely on heavy data workloads.
The Glacier DA sampling network is composed of full sampling nodes and light sampling clients. Data Availability Sampling (DAS) ensures that all the data in a block is available for validation without requiring every light sampling client to download the entire block.
Here's how it works:
Imagine a block of data as a large grid or matrix. In Glacier DA, this matrix is created using a 2-dimensional Reed-Solomon encoding scheme. The original block data is split into smaller parts, called shares, and arranged in this matrix. Additional parity data is added to extend the matrix.
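To make the layout concrete, here is a minimal, illustrative sketch of that extension in Python. It uses a toy prime field and Lagrange interpolation; production encoders work over large cryptographic fields with FFT-based arithmetic, so the field modulus and share values here are assumptions for readability.

```python
P = 2**31 - 1  # a Mersenne prime; toy field modulus (assumption)

def lagrange_eval(ys, x):
    """Evaluate, at position x, the unique degree-(k-1) polynomial
    through the points (0, ys[0]), ..., (k-1, ys[k-1]), mod P."""
    k = len(ys)
    total = 0
    for i in range(k):
        num, den = 1, 1
        for j in range(k):
            if i != j:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + ys[i] * num * pow(den, P - 2, P)) % P
    return total

def extend_row(row):
    """Systematic extension: keep the k original shares, append k parity
    shares (evaluations at x = k .. 2k-1)."""
    k = len(row)
    return row + [lagrange_eval(row, x) for x in range(k, 2 * k)]

def extend_matrix(matrix):
    """Extend a k x k share matrix to 2k x 2k: rows first, then columns."""
    rows_ext = [extend_row(r) for r in matrix]            # k x 2k
    cols_ext = [extend_row([r[c] for r in rows_ext])      # one list per column
                for c in range(len(rows_ext[0]))]         # 2k columns
    return [list(row) for row in zip(*cols_ext)]          # transpose to rows

print(extend_matrix([[1, 2], [3, 4]]))  # a toy 2x2 block -> 4x4 extended matrix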
Now, instead of downloading the entire block, light nodes in the network randomly pick and download small portions of this matrix. They then verify these portions using Merkle proofs. If these randomly picked portions can be verified, it's a good indication that the entire block data is available and correct.
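The sampling step can be sketched as follows. This toy example commits to the shares with a SHA-256 Merkle tree and verifies one randomly chosen share against the root; the hash function and tree layout are illustrative stand-ins, since Glacier DA's block headers carry KZG commitments (described below).

```python
import hashlib, random

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Build a Merkle tree over the leaves; return (root, proof), where
    proof is the list of sibling hashes from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    proof, idx = [], index
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node if odd
            level.append(level[-1])
        proof.append(level[idx ^ 1])        # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

shares = [bytes([i]) * 32 for i in range(8)]   # toy shares
root, _ = merkle_root_and_proof(shares, 0)

# A light node samples one random share and checks its proof:
i = random.randrange(len(shares))
_, proof = merkle_root_and_proof(shares, i)
assert verify(root, shares[i], i, proof)
```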
This method allows for larger blocks (i.e., with more transactions) while still keeping the process manageable for resource-limited light nodes. It's a key part of how Glacier DA achieves scalability and efficiency in its blockchain network.
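A quick back-of-the-envelope calculation shows why a handful of samples suffices. Assuming, illustratively, that a withholding attacker must hide at least a quarter of the extended matrix to prevent reconstruction, the probability that s independent random samples all miss the withheld portion shrinks exponentially:

```python
# Illustrative model, not protocol parameters: if a fraction q of the
# extended matrix is withheld, s independent samples all miss it with
# probability (1 - q) ** s.
q = 0.25   # in a 2-D scheme, roughly this much must be withheld
           # to block reconstruction
for s in (8, 16, 32):
    print(s, "samples -> detection probability", 1 - (1 - q) ** s)
# 8 samples -> ~0.90, 16 -> ~0.99, 32 -> ~0.9999
```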
The lifecycle of a transaction within the Glacier DA network is designed to ensure that block data is accessible, verifiable, and efficiently managed:
Transaction Submission
Erasure Coding
Commitment Creation
Block Propagation
Light Client Network
Proof Verification
Permanent Storage
The lifecycle commences when Glacier Vector DB, one of the primary users of Glacier DA, submits transactions to the network. Each transaction is assigned a unique ID to identify its origin and purpose within the broader ecosystem.
Upon arrival at Glacier DA, transactions undergo erasure coding, a process that injects redundancy to bolster data reliability and integrity. This coding splits blocks into 'n' original segments and expands them to '2n', so that a block can be reconstructed from any subset of 'n' of the '2n' segments. Although Glacier DA includes fraud proof mechanisms, validator consensus, which requires honesty from over 2/3 of validators, is the cornerstone for ensuring the security of the erasure-coded data. Full nodes can generate and circulate fraud proofs, which enables light clients to authenticate the validity of block headers.
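The 'any n of 2n' property can be demonstrated with the same toy field as in the extension sketch above: encode n segments into 2n, discard any n of them, and recover the originals from the survivors. This is an illustrative sketch, not the production encoder.

```python
P = 2**31 - 1  # toy field modulus (assumption)

def interpolate(points, x):
    """Evaluate, at x, the unique polynomial through the given
    (position, value) points, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

n = 4
original = [10, 20, 30, 40]                       # n original segments
coded = [interpolate(list(enumerate(original)), x) for x in range(2 * n)]

# Drop any n segments -- here the first n -- and recover from the rest:
survivors = list(enumerate(coded))[n:]            # any n of the 2n suffice
recovered = [interpolate(survivors, x) for x in range(n)]
assert recovered == original
```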
Glacier DA takes the redundancy-coded data and applies KZG polynomial commitments to each block. These commitments serve as cryptographic proofs of the data's integrity, ensuring that what is stored is accurate and tamper-evident. Validators use the commitments to confirm the data's integrity before it is attested and transmitted to the sampling network via Glacier DA's data bridge.
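For intuition, here is a minimal KZG commitment sketch built on the py_ecc library. The trusted setup below exposes its secret tau, which is insecure and done only to keep the example self-contained; real deployments use a setup ceremony and optimized pairing libraries, and the blob layout here is an assumption.

```python
from py_ecc.bls12_381 import G1, Z1, add, multiply, curve_order

tau = 123456789                                  # toy secret (never do this)
DEGREE = 4
srs = [multiply(G1, pow(tau, i, curve_order)) for i in range(DEGREE)]

def commit(coeffs):
    """C = sum_i coeffs[i] * [tau^i]G1 -- a single group element that
    binds the committer to the whole polynomial (i.e. the whole blob)."""
    acc = Z1
    for coeff, point in zip(coeffs, srs):
        acc = add(acc, multiply(point, coeff % curve_order))
    return acc

blob = [5, 11, 42, 7]           # data shares interpreted as coefficients
commitment = commit(blob)
print(commitment)
```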
Validators play a pivotal role in Glacier DA. They receive the commitment-laden blocks, regenerate the commitments to verify their accuracy, and reach consensus on the block, which requires agreement from at least two-thirds of validators (a supermajority). Validators ensure that only verified and agreed-upon data is propagated through the network. This stage is vital for ensuring that the data, once validated, can be relayed via Glacier DA's data attestation bridge.
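The supermajority rule itself reduces to a simple threshold check, shown here as a toy illustration; real consensus logic also handles stake weighting, equivocation, and vote aggregation, none of which are specified here.

```python
def has_supermajority(agreeing: int, total: int) -> bool:
    """True when at least two-thirds of validators agree."""
    return 3 * agreeing >= 2 * total

print(has_supermajority(66, 100))   # False: 66 * 3 = 198 < 200
print(has_supermajority(67, 100))   # True:  67 * 3 = 201 >= 200
```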
Light clients within Glacier DA's ecosystem use Data Availability Sampling (DAS) to verify block data integrity. For each sampled cell, they check a KZG polynomial opening against the commitment in the block header, enabling them to independently and instantly verify data availability. This method bypasses the need to reconstruct full KZG commitments or rely on fraud proofs, and the resulting decentralized verification underpins Glacier DA's security and data integrity guarantees. For more comprehensive checks, especially row integrity within the data matrix, light clients perform KZG reconstruction, which is more efficient for verifying entire rows than validating cells one by one.
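A cell-level opening check can be sketched end to end with py_ecc pairings. As before, the toy setup reveals tau and exists only to make the example self-contained; the cell indexing and polynomial layout are assumptions rather than Glacier DA's actual wire format.

```python
from py_ecc.bls12_381 import (G1, G2, Z1, add, multiply, neg,
                              curve_order, pairing)

tau = 123456789                      # toy secret from an insecure setup
DEGREE = 4
srs_g1 = [multiply(G1, pow(tau, i, curve_order)) for i in range(DEGREE)]
tau_g2 = multiply(G2, tau)

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, curve_order)
               for i, c in enumerate(coeffs)) % curve_order

def commit(coeffs):
    acc = Z1
    for coeff, point in zip(coeffs, srs_g1):
        acc = add(acc, multiply(point, coeff % curve_order))
    return acc

def open_at(coeffs, z):
    """Prover: y = p(z) and pi = [q(tau)]G1 with q(X) = (p(X) - y)/(X - z),
    computed by synthetic division."""
    y = poly_eval(coeffs, z)
    q, carry = [0] * (len(coeffs) - 1), 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * z) % curve_order
        q[i - 1] = carry
    pi = Z1
    for coeff, point in zip(q, srs_g1):
        pi = add(pi, multiply(point, coeff))
    return y, pi

def verify_opening(commitment, z, y, pi):
    """Verifier: e(pi, [tau - z]G2) == e(C - [y]G1, G2)."""
    lhs = pairing(add(tau_g2, neg(multiply(G2, z))), pi)
    rhs = pairing(G2, add(commitment, neg(multiply(G1, y))))
    return lhs == rhs

cell_poly = [5, 11, 42, 7]      # one row's shares as coefficients (toy)
C = commit(cell_poly)
y, pi = open_at(cell_poly, z=3)                 # sample the cell at position 3
assert verify_opening(C, 3, y, pi)
```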
On the other side, full sampling nodes use the KZG (Kate) commitments for two primary purposes: reconstructing the full data for network-wide verification, or creating fraud proofs to challenge any discrepancies in the data. This dual mechanism of light clients and full nodes working in tandem strengthens the overall security and reliability of the network.
The journey culminates with light clients performing proof verification. This process involves generating cell-level proofs from the data matrix, enabling light clients to efficiently and independently verify the state of the blockchain. This decentralized approach to verification underpins the security and integrity of Glacier DA.
Glacier DA's decentralized storage layer guarantees that once block data is stored, it remains immutable, retrievable, and intact, safeguarding against loss, corruption, or unauthorized alteration. Data permanence is essential for auditing and verifying past transactions or machine learning computations, which is fundamental for trust and accountability in blockchain systems. The storage layer also distributes data in a manner that aligns with the principles of decentralization, enhancing data resilience and availability across the network.
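One common way to obtain this kind of immutability, shown here purely as an illustration, is content addressing: when a blob's storage key is the hash of its bytes, any alteration changes the key, so stored data is tamper-evident and permanently retrievable by the same ID. Glacier DA's actual addressing and replication scheme is not specified here.

```python
import hashlib

store = {}   # stand-in for a decentralized storage network

def put(blob: bytes) -> str:
    """Store a blob under its own hash; the key doubles as an integrity check."""
    key = hashlib.sha256(blob).hexdigest()
    store[key] = blob
    return key

def get(key: str) -> bytes:
    """Retrieve a blob and re-verify it against its content address."""
    blob = store[key]
    assert hashlib.sha256(blob).hexdigest() == key, "corrupted blob"
    return blob

cid = put(b"block 42 shares")
assert get(cid) == b"block 42 shares"
```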