Block Structure

The Tendermint consensus engine records all agreements reached by a supermajority of nodes into a blockchain, which is replicated among all nodes. This blockchain is accessible via various RPC endpoints, mainly /block?height= to get the full block, as well as /blockchain?minHeight=_&maxHeight=_ to get a list of headers. But what exactly is stored in these blocks?


A Block contains:

  • a Header, which contains Merkle hashes for various chain states
  • the Data, which is all transactions to be processed
  • the LastCommit, which is > 2/3 signatures for the last block

The signatures returned along with block H are those validating block H-1. This can be a little confusing, but consider that the Header also contains the LastCommitHash. It would be impossible for a Header to include the commits that sign it, as that would create a circular dependency. Instead, when we get block H, we find Header.LastCommitHash, which must match the hash of LastCommit.


The Commit contains a set of Votes that were made by the validator set to reach consensus on this block. This is the key to security in any PoS system: no data can be trusted unless it can be traced back to a block header with a valid set of Votes. Thus, getting the Commit data and verifying the votes is extremely important.

As mentioned above, in order to find the precommit votes for block header H, we need to query block H+1. Then we need to check the votes, making sure they really are for that block and are properly formatted. Much of this code is implemented in Go in the light-client package. If you look at the code, you will notice that we need to provide the chainID of the blockchain in order to properly verify the votes. This prevents anyone from swapping votes between chains to fake (or frame) a validator. Also note that this chainID is in the genesis.json from Tendermint, not the genesis.json from the basecoin app (that is a different chainID…).

Once we have those votes, and we have calculated the proper sign bytes using the chainID and a helper function, we can verify them. The light client is responsible for maintaining a set of validators that we trust. Each vote stores only the validator's Address and the Signature. Assuming we have a local copy of the trusted validator set, we can look up the validator's Public Key given its Address, then verify that the Signature matches the SignBytes and Public Key. Then we sum up the voting power of all validators whose votes fulfilled these stringent requirements. If the total voting power for a single block is greater than 2/3 of all voting power, then we can finally trust the block header, the AppHash, and the proof we got from the ABCI application.

Vote Sign Bytes

The sign bytes of a vote are produced by taking a stable, deterministic JSON wire encoding of the vote (excluding the Signature field) and wrapping it as {"chain_id":"my_chain","vote":...}.

For example, a precommit vote might have the following sign-bytes:


Block Hash

The block hash is the Simple Tree hash of the fields of the block Header encoded as a list of KVPairs.


Transaction

A transaction is any sequence of bytes. It is up to your ABCI application to accept or reject transactions.


BlockID

Many of these data structures refer to the BlockID, which is the BlockHash (hash of the block header, also referenced by the next block) along with the PartSetHeader. The PartSetHeader is explained below and is used internally to orchestrate p2p propagation. For clients, it is essentially opaque bytes, but they must match for all votes.


PartSetHeader

The PartSetHeader contains the total number of parts in a PartSet, and the Merkle root hash of those parts.


PartSet

PartSet is used to split a byte slice of data into parts (pieces) for transmission. By splitting data into smaller parts and computing a Merkle root hash on the list, you can verify that a part is legitimately part of the complete data, and the part can be forwarded to other peers before all the parts are known. In short, it's a fast way to securely propagate a large chunk of data (like a block) over a gossip network.

PartSet was inspired by the LibSwift project.


data := RandBytes(2 << 20) // Something large

partSet := NewPartSetFromData(data)
partSet.Total()     // Total number of 4KB parts
partSet.Count()     // Equal to the Total, since we already have all the parts
partSet.Hash()      // The Merkle root hash
partSet.BitArray()  // A BitArray of partSet.Total() 1's

header := partSet.Header() // Send this to the peer
header.Total        // Total number of parts
header.Hash         // The merkle root hash

// Now we'll reconstruct the data from the parts
partSet2 := NewPartSetFromHeader(header)
partSet2.Total()    // Same total as partSet.Total()
partSet2.Count()    // Zero, since this PartSet doesn't have any parts yet.
partSet2.Hash()     // Same hash as in partSet.Hash()
partSet2.BitArray() // A BitArray of partSet.Total() 0's

// In a gossip network the parts would arrive in arbitrary order, perhaps
// in response to explicit requests for parts, or optimistically in response
// to the receiving peer's partSet.BitArray().
for !partSet2.IsComplete() {
    part := receivePartFromGossipNetwork()
    added, err := partSet2.AddPart(part)
    if err != nil {
        // A wrong part: the Merkle trail does not hash to partSet2.Hash()
    } else if !added {
        // A duplicate part was already received
    }
}

data2, _ := ioutil.ReadAll(partSet2.GetReader())
bytes.Equal(data, data2) // true