Why Starfish Matters

A Technical Deep Dive into the New IOTA Consensus

TL;DR:

Last week we published an overview of the Starfish mainnet launch. This post, written by Sebastian Mueller of the IOTA Foundation’s Research team, goes deeper: the technical design decisions behind Starfish, why they matter, and what the first mainnet metrics show.

Starfish is now enabled on IOTA Mainnet: the v1.21.1 Mainnet release introduced protocol version 24 and switched Starfish consensus on.

That is the release-note version.

The research version is more interesting: Starfish is an attempt to fix a part of DAG-based Byzantine consensus that has often been treated as plumbing, even though it shapes the whole protocol. That part is dissemination.

Looking for the broader picture first? Read our introduction to the Starfish Mainnet launch.

Consensus is usually described as an agreement problem. A group of validators has to agree on one history, even when messages are delayed and some participants may be Byzantine. That framing is correct, but it hides a practical constraint: a validator cannot vote on a block it has not seen, cannot certify data it cannot reconstruct, and cannot help the protocol make progress if the information it needs is always arriving one request too late.

So beneath the agreement problem there is a synchronization problem. How does the right information reach the right nodes quickly enough? That is the question I think Starfish answers in a cleaner way than previous (uncertified) DAG protocols.

DAG-based BFT protocols make this question unusually visible. A DAG – a directed acyclic graph – is not just a way to arrange blocks. It is a record of mutual knowledge. Each new block points back to earlier blocks the validator has seen. If many validators reference the same block, the network is converging around shared information. If references are missing, the DAG exposes the gap.

In that sense, the DAG is not merely a commitment structure. It is a synchronization structure. 

This is why dissemination matters so much.
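The "record of mutual knowledge" idea can be made concrete with a minimal sketch (names and structure are illustrative, not Starfish's actual data types): each block carries references to earlier blocks, so walking the references recovers everything a validator has implicitly acknowledged.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    """A DAG block: an author, a round, and references to earlier blocks."""
    author: str
    round: int
    parents: tuple  # ids of earlier blocks the validator had seen when it built this one

def known_by(block, dag):
    """Everything a block transitively acknowledges: its record of mutual knowledge."""
    seen, stack = set(), list(block.parents)
    while stack:
        bid = stack.pop()
        if bid in seen:
            continue
        seen.add(bid)
        stack.extend(dag[bid].parents)
    return seen

# Tiny example: two validators, A and B, converging around a shared genesis block.
dag = {
    "g":  Block("genesis", 0, ()),
    "a1": Block("A", 1, ("g",)),
    "b1": Block("B", 1, ("g",)),
    "a2": Block("A", 2, ("a1", "b1")),  # A's new block shows it has seen B's block
}
```

If many validators' new blocks reference `b1`, the network is converging on B's information; a block that fails to appear in anyone's `known_by` set is exactly the kind of gap the DAG exposes.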

The idea that dissemination should be carried inside the protocol itself – rather than handled by a separate broadcast layer – has a clean name in the literature: cordial dissemination. The Cordial Miners paper (Keidar, Naor, Shapiro, DISC 2023) gave it a formal treatment. Participants do not only gossip their own blocks; they forward what they believe other honest parties need in order to keep up.

There are two basic strategies for being cordial.

The first is pull. A validator asks for missing blocks only after it discovers that it needs them. Pull is efficient in a narrow sense: nothing is sent unless it is requested. But it is bad for latency, because every missing block introduces another request-response round trip. Worse, under load, pull can amplify the problem it is trying to solve. Slow validators issue more requests. Fast validators spend more bandwidth answering them. The network becomes busiest exactly when it needs to recover.

The second strategy is push. Validators proactively forward information that others are likely to need. Push is better for latency because the data is often already present when the recipient discovers that it needs it. But naïve push is expensive. If every validator pushes full blocks to every other validator, bandwidth grows quickly with the validator set.

This is the old tension: pull is lean but late; push is fast but heavy.
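The trade-off can be stated as a toy latency model (the delays and the model itself are illustrative assumptions, not Starfish measurements): under push the data is already in flight when the recipient needs it, while under pull every missing block costs an extra request-response round trip on top of the original delivery delay.

```python
# Toy model: one-way network delay is `one_way_delay` (e.g. milliseconds).
# Push delivers proactively; pull discovers the gap first, then pays a
# request + response round trip per missing block.
def delivery_latency(one_way_delay, missing_blocks, strategy):
    if strategy == "push":
        return one_way_delay                               # data arrives unasked
    return one_way_delay + missing_blocks * 2 * one_way_delay  # pull round trips
```

With a 50 ms one-way delay and three missing ancestors, pull pays 350 ms where push pays 50 ms; the gap widens exactly when validators fall behind, which is the amplification effect described above.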

Outbound request rate, a push-vs.-pull signal: Mysticeti (blue) vs. Starfish (green). Outbound requests are the pull path: a validator realizes it is missing something, asks a peer for it, and waits. Pull is bandwidth-efficient, but it is a bad place to put latency, because it happens after the gap already exists. Post-Starfish, outbound requests drop by roughly an order of magnitude: validators stop far less often to recover missing history on the critical path. This is one of the main things Starfish was designed to change.

The first design move is to separate metadata from payload. Metadata is the part consensus needs immediately: references, votes, acknowledgments, timing, and commitments. Payload is the transaction data. Previous DAG designs often carried payload too tightly along the consensus path. That makes the protocol fast in small settings, but it becomes expensive as throughput grows.

Starfish keeps the consensus path light. Block headers carry the consensus-relevant structure and a commitment to the payload, while the payload itself is handled separately.

This lets the protocol push the small thing aggressively – the header – without pushing the heavy thing in full.
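A sketch of that separation, using SHA-256 as a stand-in commitment (Starfish's actual header layout and commitment scheme are not spelled out here): the header binds itself to the payload cryptographically, so the two can travel on different paths and still be matched up safely.

```python
import hashlib

def make_header(author, round_no, parents, payload):
    """The consensus path carries only this header; the payload travels separately."""
    commitment = hashlib.sha256(payload).hexdigest()  # binds the header to its payload
    return {"author": author, "round": round_no,
            "parents": parents, "payload_commitment": commitment}

def verify_payload(header, payload):
    """Anyone holding the header can check a payload received on the side channel."""
    return hashlib.sha256(payload).hexdigest() == header["payload_commitment"]
```

The design choice this illustrates: voting and referencing only ever need the small header, while the heavy transaction bytes can arrive later, from anyone, as long as they match the commitment.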

The second design move is Reed-Solomon encoding. Reed-Solomon coding is the same idea that lets QR codes survive scratches and CDs survive scuffs: take some data, produce a number of pieces with carefully designed redundancy, and arrange it so the original can be rebuilt from any sufficient subset of the pieces — even if the rest are missing. In Starfish, a block’s transaction data is broken into fragments – one per validator – with enough redundancy that a small subset of valid fragments can reconstruct the whole.
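A minimal sketch of the erasure-coding idea, using Reed-Solomon-style polynomial evaluation over the toy prime field GF(257) (Starfish's production code uses a different field and far more optimized arithmetic): the data symbols become polynomial coefficients, each validator gets one evaluation, and any k fragments determine the polynomial.

```python
# Toy Reed-Solomon erasure code over GF(p), p = 257. Data symbols are the
# coefficients of a degree-(k-1) polynomial f; fragment i is (i+1, f(i+1)).
# Any k valid fragments determine f, and hence the original data.
P = 257

def encode(data, n):
    """Produce n fragments (x, f(x)) for data symbols < P, where len(data) = k."""
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def reconstruct(fragments, k):
    """Recover the k data symbols from any k fragments via Lagrange interpolation."""
    frags = fragments[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(frags):
        basis, denom = [1], 1             # build l_i(x) = prod_{j!=i} (x - xj)
        for j, (xj, _) in enumerate(frags):
            if j == i:
                continue
            denom = denom * (xi - xj) % P
            new = [0] * (len(basis) + 1)  # multiply basis polynomial by (x - xj)
            for t, a in enumerate(basis):
                new[t] = (new[t] - xj * a) % P
                new[t + 1] = (new[t + 1] + a) % P
            basis = new
        scale = yi * pow(denom, P - 2, P) % P   # divide by l_i(xi) via Fermat inverse
        for t, a in enumerate(basis):
            coeffs[t] = (coeffs[t] + scale * a) % P
    return coeffs
```

For example, with n = 3f + 1 = 7 validators and k = f + 1 = 3, any three surviving fragments rebuild the payload, regardless of which four are missing.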

One distinction worth being careful about: the reconstruction threshold is not 2f+1 fragments. The Reed–Solomon code is set up so that any f+1 valid fragments are enough to reconstruct the payload. The 2f+1 number appears in the availability certificate: if 2f+1 validators acknowledge availability, then even if up to f of them are Byzantine, at least f+1 honest validators must hold valid fragments — which is enough for reconstruction.
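The counting argument behind the certificate is plain arithmetic under the standard n = 3f + 1 assumption:

```python
# Quorum arithmetic for the availability certificate (n = 3f + 1 validators):
# subtract the worst case of f Byzantine signers from the 2f+1 acknowledgments,
# and f+1 honest fragment holders remain -- exactly the reconstruction threshold.
def honest_fragment_holders(f):
    ack_quorum = 2 * f + 1        # acknowledgments required by the certificate
    worst_case_byzantine = f      # all f faulty validators may have signed
    return ack_quorum - worst_case_byzantine
```

So the two thresholds interlock: the certificate quorum (2f+1) is chosen so that what survives Byzantine subtraction is precisely the code's reconstruction threshold (f+1).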


