What is UTS?
Universal Timestamps (UTS) is a decentralized timestamping protocol that enables anyone to create cryptographic, publicly verifiable proofs that data existed at a specific point in time.
The Problem
Consider a common scenario: you write a document, invent a design, or generate a dataset. Later, someone disputes when that data came into existence. How do you prove the data existed at a certain time without relying on a trusted third party?
Traditional approaches — notary stamps, trusted servers, email timestamps — all share a common weakness: they depend on a single entity that must be trusted not to backdate, forge, or lose records. A compromised notary or a deleted email server destroys the proof.
The Analogy: Digital Notarization
Think of UTS as a digital notary backed by a public blockchain:
- You bring your document (any data) to the notary.
- The notary doesn’t read your document — it only sees a cryptographic hash (a fixed-size fingerprint).
- The notary records that hash into a public, append-only ledger that anyone can audit.
- Later, anyone can verify the timestamp by re-hashing the original data and checking the ledger.
Unlike a physical notary, UTS requires no trust in any single party. The ledger is a blockchain — immutable, publicly verifiable, and decentralized.
Why Blockchain?
Blockchains provide three properties that are ideal for timestamping:
- Immutability — once a transaction is confirmed, it cannot be altered or removed.
- Public verifiability — anyone can independently verify that a hash was recorded at a given block height.
- No trusted third party — the security guarantee comes from the consensus mechanism, not from any single operator.
OpenTimestamps Heritage
UTS extends the OpenTimestamps protocol, which pioneered blockchain-based timestamping on Bitcoin. OpenTimestamps introduced several key ideas:
- A compact binary codec (`.ots` files) that encodes hash operations as a directed acyclic graph of opcodes.
- Calendar servers that aggregate many timestamp requests and batch them into a single on-chain transaction.
- Merkle tree batching — thousands of timestamps share a single blockchain transaction by constructing a Merkle tree and recording only the root on-chain.
UTS builds on this foundation and extends it to Ethereum via the Ethereum Attestation Service (EAS), using a dual-layer architecture across L2 (Scroll) and L1 (Ethereum mainnet).
Key Insight: Cost Amortization
A single Ethereum transaction costs gas regardless of whether it timestamps one hash or one thousand. UTS exploits this by batching: a calendar server collects many user digests, builds a Merkle tree from them, and records only the 32-byte Merkle root on-chain. Each user receives a Merkle proof that links their specific hash to that on-chain root.
The result: the per-timestamp cost drops by orders of magnitude, making cryptographic timestamping practical for everyday use.
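The amortization is simple arithmetic; here is a toy sketch with illustrative (not measured) gas figures:

```rust
/// Illustrative cost amortization: one on-chain transaction covers a whole batch.
/// The gas number below is a made-up placeholder, not a measured value.
fn per_timestamp_gas(tx_gas: u64, batch_size: u64) -> u64 {
    assert!(batch_size > 0);
    tx_gas / batch_size
}

fn main() {
    let tx_gas = 100_000; // hypothetical gas cost of a single timestamping tx
    // A lone timestamp pays the full cost...
    assert_eq!(per_timestamp_gas(tx_gas, 1), 100_000);
    // ...while a 1024-leaf batch amortizes it to under 100 gas per leaf.
    assert_eq!(per_timestamp_gas(tx_gas, 1024), 97);
}
```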
What You’ll Learn
This book walks through the UTS architecture from first principles:
- Chapter 2 gives a high-level system overview and introduces all components.
- Chapter 3 explains the core data structures: Merkle trees, the OTS codec, and the journal.
- Chapter 4 traces the calendar timestamping pipeline end-to-end.
- Chapter 5 covers the L1 anchoring pipeline for cross-chain security.
- Chapter 6 describes the storage architecture.
- Chapter 7 discusses security considerations.
- Appendix A explains the drand beacon injector.
System Architecture Overview
UTS is organized as a Rust workspace of 11 crates plus a set of Solidity smart contracts. This chapter provides a bird’s-eye view of the system, its components, and the two main pipelines.
Component Diagram
graph TB
subgraph User
CLI[uts-cli]
end
subgraph "Calendar Server (L2)"
CAL[uts-calendar]
STAMP[uts-stamper]
JOURNAL[uts-journal]
KV[(RocksDB KV)]
SQL[(SQLite)]
end
subgraph "Core Libraries"
CORE[uts-core]
BMT[uts-bmt]
CONTRACTS[uts-contracts]
SQLUTIL[uts-sql]
end
subgraph "Relayer Service"
RELAYER[uts-relayer]
RELAYDB[(SQLite)]
end
subgraph "Beacon Service"
BEACON[uts-beacon-injector]
end
subgraph "Smart Contracts (On-Chain)"
EAS[EAS Contract]
EASHELPER[EASHelper.sol]
MT[MerkleTree.sol]
L1GW[L1AnchoringGateway]
L2MGR[L2AnchoringManager]
FEE[FeeOracle]
NFT[NFTGenerator]
end
CLI -->|POST /digest| CAL
CLI -->|GET /digest/commitment| CAL
CAL --> JOURNAL
CAL --> STAMP
STAMP --> KV
STAMP --> SQL
STAMP -->|EAS.timestamp| EAS
RELAYER -->|submitBatch| L1GW
RELAYER -->|finalizeBatch| L2MGR
L1GW -->|timestamp| EAS
L1GW -->|cross-chain msg| L2MGR
L2MGR --> FEE
L2MGR --> NFT
L2MGR --> MT
BEACON -->|EAS.attest| EAS
BEACON -->|submitForL1Anchoring| L2MGR
BEACON -->|POST /digest| CAL
STAMP --> BMT
STAMP --> CORE
STAMP --> CONTRACTS
RELAYER --> BMT
RELAYER --> CONTRACTS
CAL --> CORE
CLI --> CORE
Component Inventory
| Crate | Purpose |
|---|---|
| `uts-bmt` | Binary Merkle Tree — flat-array, power-of-two, proof generation |
| `uts-core` | OTS codec (opcodes, timestamps, attestations), verification logic |
| `uts-journal` | RocksDB-backed write-ahead log with at-least-once delivery |
| `uts-calendar` | HTTP calendar server — accepts digests, serves proofs |
| `uts-stamper` | Batching engine — builds Merkle trees, submits attestations |
| `uts-cli` | Command-line tool — stamp, verify, inspect, upgrade |
| `uts-contracts` | Rust bindings for EAS and L2AnchoringManager contracts |
| `uts-relayer` | L2→L1→L2 relay service with batch state machine |
| `uts-beacon-injector` | Injects drand beacon randomness into the timestamping pipeline |
| `uts-sql` | SQLite utilities and Alloy type wrappers |
Two Pipelines
UTS operates two complementary pipelines:
Pipeline A: Calendar Timestamping (L2 Direct)
The fast path. User digests are batched into a Merkle tree and the root is timestamped directly on L2 (Scroll) via EAS. This provides low-latency, low-cost timestamps.
sequenceDiagram
participant U as User (CLI)
participant C as Calendar Server
participant J as Journal
participant S as Stamper
participant EAS as EAS (L2)
U->>C: POST /digest (hash)
C->>C: Sign (EIP-191)
C->>J: commit(commitment)
C-->>U: OTS file + commitment
loop Every batch interval
S->>J: read entries
S->>S: Build Merkle tree
S->>EAS: timestamp(root)
EAS-->>S: tx receipt
end
U->>C: GET /digest/{commitment}
C->>C: Merkle proof + EASTimestamped
C-->>U: Updated OTS file
Pipeline B: L1 Anchoring (Cross-Chain)
The high-security path. L2 attestation roots are batched again and anchored on L1 Ethereum, providing L1-level finality guarantees. A relayer service orchestrates the cross-chain lifecycle.
sequenceDiagram
participant U as User
participant L2 as L2AnchoringManager
participant R as Relayer
participant L1 as L1AnchoringGateway
participant EAS1 as EAS (L1)
participant MSG as Scroll Messenger
U->>L2: submitForL1Anchoring(attestationId)
L2->>L2: Validate + queue
R->>R: Pack batch (Merkle tree)
R->>L1: submitBatch(root, startIndex, count)
L1->>EAS1: timestamp(root)
L1->>MSG: sendMessage(notifyAnchored)
MSG->>L2: notifyAnchored(root, ...)
R->>L2: finalizeBatch()
L2->>L2: Verify Merkle root on-chain
U->>L2: claimNFT(attestationId)
Crate Dependency Graph
graph LR
CLI[uts-cli] --> CORE[uts-core]
CLI --> CONTRACTS[uts-contracts]
CAL[uts-calendar] --> CORE
CAL --> JOURNAL[uts-journal]
CAL --> STAMPER[uts-stamper]
STAMPER --> BMT[uts-bmt]
STAMPER --> CORE
STAMPER --> CONTRACTS
STAMPER --> SQL_UTIL[uts-sql]
RELAYER[uts-relayer] --> BMT
RELAYER --> CONTRACTS
RELAYER --> SQL_UTIL
BEACON[uts-beacon-injector] --> CONTRACTS
CONTRACTS --> CORE
Beacon Injector
The beacon injector is an auxiliary service that injects drand randomness beacons into the timestamping pipeline. It submits beacon signatures to both the calendar server and the L1 anchoring pipeline, providing a continuous stream of publicly verifiable, unpredictable timestamps. See Appendix A for details.
Binary Merkle Tree
The binary Merkle tree (uts-bmt) is the fundamental data structure that enables UTS to batch thousands of timestamps into a single on-chain transaction. Each user’s digest becomes a leaf, and only the 32-byte root is recorded on-chain.
What is a Merkle Tree?
A Merkle tree is a binary tree of hashes. Each leaf node contains the hash of a data element, and each internal node contains the hash of its two children. The single hash at the top — the root — is a compact commitment to all the data below.
The key property: given a leaf and a short proof (the sibling hashes along the path to the root), anyone can verify that the leaf is included in the tree without knowing the other leaves.
Flat Array Layout
UTS uses a flat array representation rather than pointer-based tree nodes. The array stores 2N elements where N is the number of leaves (padded to the nearest power of two):
- Index `0` is unused (sentinel).
- Indices `[1, N)` store internal nodes (index `1` is the root).
- Indices `[N, 2N)` store leaf nodes.
graph TB
subgraph "Array Layout (N=4 leaves)"
I0["[0] unused"]
I1["[1] root"]
I2["[2] internal"]
I3["[3] internal"]
I4["[4] leaf₀"]
I5["[5] leaf₁"]
I6["[6] leaf₂"]
I7["[7] leaf₃"]
end
I1 --- I2
I1 --- I3
I2 --- I4
I2 --- I5
I3 --- I6
I3 --- I7
style I1 fill:#e8a838
style I4 fill:#4a9eda
style I5 fill:#4a9eda
style I6 fill:#4a9eda
style I7 fill:#4a9eda
Navigation is pure arithmetic:
- Parent of node `i`: `i >> 1` (right shift)
- Sibling of node `i`: `i ^ 1` (XOR with 1)
- Children of node `i`: `2i` (left) and `2i + 1` (right)
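These rules can be checked directly in Rust; a minimal sketch for the N=4 layout shown above:

```rust
// Flat-array Merkle tree navigation: every move is one bitwise operation.
fn parent(i: usize) -> usize { i >> 1 }
fn sibling(i: usize) -> usize { i ^ 1 }
fn children(i: usize) -> (usize, usize) { (2 * i, 2 * i + 1) }

fn main() {
    // For N = 4 leaves, leaves occupy indices 4..8 and the root is index 1.
    assert_eq!(parent(4), 2);         // leaf₀'s parent is internal node [2]
    assert_eq!(sibling(4), 5);        // leaf₀'s sibling is leaf₁
    assert_eq!(sibling(6), 7);        // leaf₂'s sibling is leaf₃
    assert_eq!(children(1), (2, 3));  // the root's children are [2] and [3]
    assert_eq!(parent(parent(7)), 1); // two hops from any leaf reach the root
}
```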
Power-of-Two Sizing
Input data is always padded to the nearest power of two. If you have 5 leaves, the tree is built with 8 slots (3 padded with zero-hashes). This guarantees a perfect binary tree and simplifies indexing.
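Rust's standard library computes this padding directly; a small sketch:

```rust
/// Number of slots a tree needs for `n` leaves, padded to a power of two.
fn padded_slots(n: usize) -> usize {
    n.next_power_of_two()
}

fn main() {
    assert_eq!(padded_slots(5), 8); // 5 leaves -> 8 slots, 3 zero-hash pads
    assert_eq!(padded_slots(8), 8); // already a power of two: no padding
    assert_eq!(padded_slots(1), 1);
    // The example from the text: 5 leaves require 3 padding entries.
    assert_eq!(padded_slots(5) - 5, 3);
}
```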
Inner Node Prefix
To prevent second-preimage attacks (where an internal node could be confused with a leaf), internal nodes are hashed with a distinguishing prefix byte:
$$ \text{node}(i) = H(\mathtt{0x01} || \text{left}(i) || \text{right}(i)) $$
The constant INNER_NODE_PREFIX = 0x01 is prepended before hashing children. Leaf nodes are stored as-is (they are already hashes of user data).
Tree Construction
Construction happens in two phases:
1. `new_unhashed` — allocates the flat array and places leaves at their positions.
2. `finalize` — computes internal nodes bottom-up by hashing pairs of children.
// From crates/bmt/src/lib.rs
let tree = MerkleTree::<Keccak256>::new(&leaves);
let root = tree.root(); // &[u8; 32]
Proof Generation
A Merkle proof is a sequence of sibling hashes from the leaf to the root. The SiblingIter walks up the tree using bitwise operations:
graph TB
R["root ✓"] --- N2["H(01 ∥ leaf₀ ∥ leaf₁)"]
R --- N3["H(01 ∥ leaf₂ ∥ leaf₃) — sibling₁"]
N2 --- L0["leaf₀ — TARGET"]
N2 --- L1["leaf₁ — sibling₀"]
N3 --- L2["leaf₂"]
N3 --- L3["leaf₃"]
style L0 fill:#4a9eda
style L1 fill:#f5a623
style N3 fill:#f5a623
style R fill:#7ed321
The proof for leaf₀ consists of two entries:
1. `(Left, leaf₁)` — sibling is to the right, so append it.
2. `(Left, H(leaf₂ ∥ leaf₃))` — sibling is to the right, so append it.
Each entry is a (NodePosition, &Hash) pair where NodePosition indicates whether the target is the left or right child:
- Left → sibling is on the right → `H(0x01 ∥ target ∥ sibling)`
- Right → sibling is on the left → `H(0x01 ∥ sibling ∥ target)`
Proof Verification
To verify a proof, start with the leaf hash and iteratively combine it with each sibling:
$$ v_0 = \text{leaf} $$
$$ v_{i+1} = \begin{cases} H(\mathtt{0x01} || v_i || s_i) & \text{if position}_i = \text{Left} \\ H(\mathtt{0x01} || s_i || v_i) & \text{if position}_i = \text{Right} \end{cases} $$
The proof is valid if and only if the final value equals the known root:
$$ v_n \stackrel{?}{=} \text{root} $$
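The verification fold is easy to express generically. The sketch below substitutes a toy hash for Keccak-256 (to stay dependency-free) and uses hypothetical type names; only the fold logic mirrors the equations above:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

#[derive(Clone, Copy)]
enum NodePosition { Left, Right }

/// Toy 32-byte hash built from std's DefaultHasher — a stand-in for
/// Keccak-256, used here only to exercise the verification fold.
fn toy_hash(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, chunk) in out.chunks_mut(8).enumerate() {
        let mut h = DefaultHasher::new();
        h.write_u8(i as u8);
        h.write(data);
        chunk.copy_from_slice(&h.finish().to_be_bytes());
    }
    out
}

/// Combine a node with its sibling under the 0x01 inner-node prefix.
fn hash_node(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut buf = Vec::with_capacity(65);
    buf.push(0x01); // INNER_NODE_PREFIX
    buf.extend_from_slice(left);
    buf.extend_from_slice(right);
    toy_hash(&buf)
}

/// v₀ = leaf; each step folds in one sibling according to its position.
fn verify(leaf: [u8; 32], proof: &[(NodePosition, [u8; 32])], root: [u8; 32]) -> bool {
    let v = proof.iter().fold(leaf, |v, (pos, sib)| match pos {
        NodePosition::Left => hash_node(&v, sib),  // target is the left child
        NodePosition::Right => hash_node(sib, &v), // target is the right child
    });
    v == root
}

fn main() {
    // Build a 2-leaf tree by hand and verify leaf₀'s one-entry proof.
    let (leaf0, leaf1) = ([0xaa; 32], [0xbb; 32]);
    let root = hash_node(&leaf0, &leaf1);
    assert!(verify(leaf0, &[(NodePosition::Left, leaf1)], root));
    assert!(verify(leaf1, &[(NodePosition::Right, leaf0)], root));
    assert!(!verify(leaf1, &[(NodePosition::Left, leaf0)], root));
}
```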
Serialization
The tree supports zero-copy serialization via as_raw_bytes() and deserialization via from_raw_bytes(). The entire flat array is cast to/from a byte slice using bytemuck, enabling efficient storage in RocksDB without any encoding overhead.
On-Chain Verification
The Solidity library MerkleTree.sol implements the same algorithm on-chain for the L1 anchoring pipeline. It uses identical constants (INNER_NODE_PREFIX = 0x01) and the same power-of-two padding strategy, ensuring that roots computed off-chain in Rust match roots verified on-chain in Solidity.
// From contracts/core/MerkleTree.sol
function hashNode(bytes32 left, bytes32 right) public pure returns (bytes32 result) {
    // result = keccak256(0x01 || left || right)
    assembly {
        let ptr := mload(0x40)         // scratch buffer at the free memory pointer
        mstore8(ptr, 0x01)             // single INNER_NODE_PREFIX byte
        mstore(add(ptr, 0x01), left)
        mstore(add(ptr, 0x21), right)
        result := keccak256(ptr, 0x41) // hash the 65-byte prefix || left || right
    }
}
OpenTimestamps Codec
The OTS codec (uts-core) defines the binary format for timestamp proofs. It extends the original OpenTimestamps specification with new attestation types for Ethereum (EAS) while maintaining backward compatibility with the Bitcoin attestation format.
OTS File Structure
A detached timestamp file (.ots) consists of three sections:
- Magic bytes + version — file identification header.
- Digest header — the hash algorithm and original digest value.
- Timestamp tree — a directed acyclic graph of operations that transform the original digest into one or more attestation values.
The DetachedTimestamp struct wraps a DigestHeader and a Timestamp tree:
pub struct DigestHeader {
    kind: DigestOp,   // Which hash algorithm (SHA256, Keccak256, etc.)
    digest: [u8; 32], // The original hash value (first N bytes used)
}
OpCode System
The codec defines a set of opcodes that describe transformations on byte sequences. Each opcode is a single byte:
Data Opcodes
| OpCode | Tag | Description |
|---|---|---|
| `APPEND` | 0xf0 | Concatenate immediate data after the input |
| `PREPEND` | 0xf1 | Concatenate immediate data before the input |
| `REVERSE` | 0xf2 | Reverse the byte order |
| `HEXLIFY` | 0xf3 | Convert to ASCII hex representation |
Digest Opcodes
| OpCode | Tag | Output Size | Description |
|---|---|---|---|
| `SHA1` | 0x02 | 20 bytes | SHA-1 hash |
| `RIPEMD160` | 0x03 | 20 bytes | RIPEMD-160 hash |
| `SHA256` | 0x08 | 32 bytes | SHA-256 hash |
| `KECCAK256` | 0x67 | 32 bytes | Keccak-256 hash |
Control Opcodes
| OpCode | Tag | Description |
|---|---|---|
| `FORK` | 0xff | Branch the proof into multiple paths |
| `ATTESTATION` | 0x00 | Terminal node — contains an attestation |
The Timestamp Proof Tree
A Timestamp is a recursive structure that forms a proof tree (step graph):
pub enum Timestamp<A: Allocator = Global> {
    Step(Step<A>),
    Attestation(RawAttestation<A>),
}

pub struct Step<A: Allocator = Global> {
    op: OpCode,                  // Operation to execute
    data: Vec<u8, A>,            // Immediate data (for APPEND/PREPEND)
    input: OnceLock<Vec<u8, A>>, // Cached computed input
    next: Vec<Timestamp<A>, A>,  // Child timestamps (1 normally, 2+ for FORK)
}
A single timestamp file can contain multiple attestations (e.g., both an EAS attestation and a Bitcoin attestation) connected via FORK nodes:
digest
└─ PREPEND(timestamp)
└─ APPEND(signature)
└─ KECCAK256
├─ [FORK: Calendar A path]
│ └─ APPEND(sibling₀)
│ └─ KECCAK256
│ └─ ATTESTATION(EASTimestamped)
└─ [FORK: Calendar B path]
└─ ATTESTATION(PendingAttestation)
Attestation Types
Each attestation is identified by an 8-byte tag and carries type-specific data:
PendingAttestation (0x83dfe30d2ef90c8e)
Indicates the timestamp is not yet confirmed. Contains a URI pointing to the calendar server where the user can retrieve the completed proof.
pub struct PendingAttestation<'a> {
    uri: Cow<'a, str>, // e.g., "https://calendar.example.com"
}
URI validation: max 1000 bytes, restricted character set (a-zA-Z0-9.-_/:).
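A sketch of that validation rule (the exact checks in `uts-core` may differ in detail):

```rust
/// Validate a PendingAttestation URI against the documented rule:
/// at most 1000 bytes, drawn only from the set a-z A-Z 0-9 . - _ / :
/// (a sketch — not the actual uts-core validator).
fn is_valid_calendar_uri(uri: &str) -> bool {
    uri.len() <= 1000
        && uri.bytes().all(|b| {
            b.is_ascii_alphanumeric() || matches!(b, b'.' | b'-' | b'_' | b'/' | b':')
        })
}

fn main() {
    assert!(is_valid_calendar_uri("https://calendar.example.com"));
    assert!(!is_valid_calendar_uri("https://calendar.example.com/?q=1")); // '?' and '=' rejected
    assert!(!is_valid_calendar_uri(&"a".repeat(1001))); // over the 1000-byte cap
}
```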
EASAttestation (0x8bf46bf4cfd674fa)
A confirmed attestation on EAS with a specific UID.
pub struct EASAttestation {
    chain: Chain, // Ethereum chain (mainnet, Scroll, etc.)
    uid: B256,    // 32-byte attestation UID
}
Encoded as: chain_id (u64) || uid (32 bytes).
EASTimestamped (0x5aafceeb1c7ad58e)
A lighter attestation that records only the chain where the timestamp was created. The on-chain lookup uses the computed commitment value to find the timestamp.
pub struct EASTimestamped {
    chain: Chain, // Only the chain identifier
}
Encoded as: chain_id (u64).
BitcoinAttestation (0x0588960d73d71901)
Compatibility with the original OpenTimestamps Bitcoin anchoring.
pub struct BitcoinAttestation {
    height: u32, // Bitcoin block height
}
Commitment Computation
When a user submits a digest to a calendar server, the server computes a commitment — a deterministic value that binds the digest to the submission time and the server’s identity:
$$ \text{commitment} = \text{keccak256}(ts || digest || sig) $$
Where:
- \( ts \) is the Unix timestamp (seconds) of receipt.
- \( sig \) is the server’s EIP-191 signature over `timestamp || digest`.
- \( digest \) is the user’s original hash.
This commitment becomes the leaf in the Merkle tree.
Finalization
The Timestamp::finalize(input) method walks the proof tree and computes the value at each node by executing its opcode:
- For a `Step`: execute the opcode on the input (with immediate data if applicable), then finalize all children with the output.
- For a `FORK`: finalize all children with the same input (the proof branches).
- For an `Attestation`: store the input as the attestation’s value (the commitment that should match on-chain).
Finalization uses OnceLock for caching — once a node’s input is computed, it is stored and never recomputed. Conflicting inputs (from multiple paths) produce a FinalizationError.
Iterating Attestations
The Timestamp::attestations() method returns a depth-first iterator over all RawAttestation nodes in the tree. This is used during verification to extract and check each attestation independently.
The mutable variant pending_attestations_mut() allows upgrading pending attestations to confirmed ones (e.g., replacing a PendingAttestation with an EASTimestamped after the calendar server confirms the timestamp).
Journal / WAL
The journal (uts-journal) is a RocksDB-backed write-ahead log that sits between the calendar server’s HTTP handler and the stamper’s batching engine. It provides at-least-once delivery semantics and crash recovery for incoming timestamp requests.
Why a WAL?
The calendar server and stamper operate at different speeds and cadences:
- The HTTP handler accepts user digests one at a time, potentially hundreds per second.
- The stamper batches digests into Merkle trees on a configurable interval (default: every 10 seconds).
Without a durable buffer between them, a crash between receiving a digest and building the next batch would lose user data. The journal solves this by persisting every entry to disk synchronously before acknowledging the HTTP request.
Architecture
┌──────────────┐ ┌──────────┐ ┌─────────┐
│ HTTP Handler │──commit──▶│ Journal │──read───▶│ Stamper │
│ (writer) │ │ (RocksDB)│ │(reader) │
└──────────────┘ └──────────┘ └─────────┘
│
┌────┴────┐
│CF_ENTRIES│ ← entry data
│CF_META │ ← write/consumed indices
└─────────┘
RocksDB Column Families
The journal uses two RocksDB column families:
| Column Family | Key | Value | Purpose |
|---|---|---|---|
| `CF_ENTRIES` ("entries") | `write_index` (u64 big-endian) | Raw entry bytes | Stores the actual digest commitments |
| `CF_META` ("meta") | `0x00` or `0x01` | u64 little-endian | Stores `write_index` and `consumed_index` |
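A sketch of the two integer encodings; the big-endian entry keys are presumably chosen so that RocksDB's default byte-wise key ordering matches the numeric order of write indices (a standard key-design idiom):

```rust
/// CF_ENTRIES keys: big-endian u64, so lexicographic byte comparison
/// (RocksDB's default comparator) agrees with numeric order.
fn entry_key(write_index: u64) -> [u8; 8] {
    write_index.to_be_bytes()
}

/// CF_META values: little-endian u64 (plain fixed-width integers).
fn meta_value(index: u64) -> [u8; 8] {
    index.to_le_bytes()
}

fn main() {
    // Big-endian keys preserve numeric order under byte-wise comparison...
    assert!(entry_key(255) < entry_key(256));
    // ...which little-endian encoding would not:
    assert!(255u64.to_le_bytes() > 256u64.to_le_bytes());
    assert_eq!(meta_value(1), [1, 0, 0, 0, 0, 0, 0, 0]);
}
```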
Writer / Reader Pattern
The journal enforces a strict concurrency model:
- One writer (the HTTP handler), serialized by a `Mutex`.
- One exclusive reader (the stamper), enforced by an `AtomicBool` flag.
Write Path
// From crates/journal/src/lib.rs
pub fn try_commit(&self, data: &[u8]) -> Result<(), Error>
1. Acquire the write lock.
2. Check capacity: if `write_index - consumed_index >= capacity`, return `Error::Full`.
3. Write the entry and the updated `write_index` atomically via a `WriteBatch`.
4. Update the in-memory `write_index` (`AtomicU64`).
5. Notify the consumer (stamper) that new data is available.
Every commit is a synchronous RocksDB write. The in-memory write_index always matches the durable state — there is no separate “flush” step.
Read Path
The JournalReader maintains a local cursor independent of the journal’s consumed_index:
// From crates/journal/src/reader.rs
reader.wait_at_least(min).await; // Async wait for entries
let entries = reader.read(max);  // Fetch into internal buffer
// ... process entries ...
reader.commit(); // Advance consumed_index
The critical invariant: entries are only deleted from RocksDB when the reader calls commit(). This ensures that if the stamper crashes after reading but before building the Merkle tree, the entries survive for re-processing on restart.
Capacity Management
The journal has a fixed capacity (default: 1,048,576 entries in the calendar configuration). When the journal is full (write_index - consumed_index >= capacity), the HTTP handler receives a 503 Service Unavailable response rather than blocking.
This back-pressure mechanism prevents unbounded memory growth and signals to clients that the server is temporarily overloaded.
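The fullness check is pure index arithmetic; a sketch:

```rust
/// Journal back-pressure: full when unconsumed entries reach capacity.
fn is_full(write_index: u64, consumed_index: u64, capacity: u64) -> bool {
    write_index - consumed_index >= capacity
}

fn main() {
    let capacity = 1_048_576; // default calendar configuration
    assert!(!is_full(100, 0, capacity));
    assert!(is_full(1_048_576, 0, capacity)); // exactly full -> reply 503
    // Indices only grow; the reader catching up frees capacity.
    assert!(!is_full(2_000_000, 1_000_000, capacity));
}
```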
Crash Recovery
On startup, the journal reads write_index and consumed_index from CF_META and validates the invariant:
$$ \text{consumed_index} \leq \text{write_index} $$
If both are zero (fresh database), the journal starts empty. Otherwise, it resumes from where it left off — any entries between consumed_index and write_index are re-delivered to the reader.
Fatal Errors
If RocksDB encounters an unrecoverable error (e.g., disk corruption), the journal sets an AtomicBool fatal error flag. All subsequent operations immediately return Error::Fatal, and the calendar server initiates graceful shutdown.
This fail-fast behavior prevents silent data loss — the operator must investigate and fix the storage issue before the server can restart.
Async Coordination
The journal uses a waker-based notification system for efficient async coordination:
- When the reader calls `wait_at_least(n)` and fewer than `n` entries are available, it registers a waker.
- When the writer commits a new entry, it checks for a registered waker and wakes the reader task.
- This avoids busy-polling and integrates cleanly with Tokio’s async runtime.
User Submission
This chapter describes the first stage of the calendar timestamping pipeline: how a user creates and submits a timestamp request.
CLI: Hashing and Submission
The uts stamp command is the primary entry point for creating timestamps:
uts stamp --hasher keccak256 myfile.pdf
The CLI supports four hash algorithms: SHA-1, RIPEMD-160, SHA-256, and Keccak-256 (default).
Workflow
- Hash the file using the selected algorithm to produce a digest.
- Generate a nonce for each file and build an internal Merkle tree (when stamping multiple files simultaneously).
- Submit the tree root to one or more calendar servers.
- Merge responses from multiple calendars into a single OTS file via `FORK` nodes.
- Write the `.ots` detached timestamp file to disk.
Multi-Calendar Quorum
The CLI can submit to multiple calendar servers for redundancy. Each server independently signs and stores the digest. The responses are merged:
digest
└─ FORK
├─ Calendar A response (PendingAttestation)
└─ Calendar B response (PendingAttestation)
This ensures that even if one calendar server goes offline, the timestamp can still be completed via the other.
Calendar Server: POST /digest
The calendar server exposes a single endpoint for submissions:
POST /digest
Content-Type: application/octet-stream
Body: <raw digest bytes>
Validation: the digest must be ≤ 64 bytes.
EIP-191 Signing
The server signs a binding message using EIP-191 (Ethereum personal sign):
\x19Ethereum Signed Message:\n<len><timestamp><digest>
Where:
- `timestamp` is the Unix time (seconds) of receipt.
- `digest` is the user’s original hash.
The signature is encoded in ERC-2098 compact format (64 bytes instead of 65), producing an undeniable binding between the server’s identity, the submission time, and the digest.
Commitment Computation
The commitment is the value that becomes a leaf in the Merkle tree:
$$ \text{commitment} = \text{keccak256}(\text{timestamp} || \text{digest} || \text{signature}) $$
Concretely, the codec builds a Timestamp tree:
digest
└─ PREPEND(timestamp_bytes)
└─ APPEND(signature_bytes)
└─ KECCAK256
└─ PendingAttestation { uri: "https://calendar/" }
The KECCAK256 opcode produces the commitment — a deterministic 32-byte value that the user can later use to retrieve their proof.
Journal Commit
The 32-byte commitment is written to the journal synchronously:
journal.try_commit(&commitment_bytes)?;
If the journal is full or in a fatal state, the server returns 503 Service Unavailable. Otherwise, the entry is durably persisted before the HTTP response is sent.
Response
The server returns:
- The encoded OTS bytes containing the pending timestamp tree.
- The 32-byte commitment for later retrieval via
GET /digest/{commitment}.
The OTS file at this stage contains a PendingAttestation pointing back to the calendar server. The user must later poll the server to upgrade it to a confirmed attestation.
Performance Optimizations
- Thread-local bump allocator: OTS encoding uses a per-thread bump allocator to avoid heap allocation overhead on the hot path.
- Cached current time: Unix seconds are cached globally and updated every second, avoiding repeated `clock_gettime` syscalls.
- ERC-2098 compact signatures: 64 bytes instead of 65, saving space in every OTS file.
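A minimal sketch of a cached clock along these lines — the actual implementation is not shown in this chapter, so the names and refresh mechanism here are assumptions:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

/// Globally cached Unix seconds. In a real service, a background task
/// would call `refresh_clock` once per second; request handlers then
/// read the atomic instead of issuing a syscall per request.
static CACHED_UNIX_SECS: AtomicU64 = AtomicU64::new(0);

fn refresh_clock() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_secs();
    CACHED_UNIX_SECS.store(now, Ordering::Relaxed);
}

fn cached_unix_secs() -> u64 {
    CACHED_UNIX_SECS.load(Ordering::Relaxed)
}

fn main() {
    refresh_clock();
    let real = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    // The cached value lags the real clock by at most one refresh interval.
    assert!(real - cached_unix_secs() <= 1);
}
```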
Batching & Tree Creation
The stamper is the batching engine that reads commitments from the journal, groups them into a Merkle tree, and prepares batches for on-chain attestation.
Stamper Work Loop
sequenceDiagram
participant J as Journal
participant S as Stamper
participant KV as RocksDB KV
participant SQL as SQLite
participant TX as TxSender
loop Main work loop
S->>J: wait_at_least(min_leaves)
Note over S: Trigger: timeout OR max entries
S->>J: read(batch_size)
S->>S: Build MerkleTree (blocking task)
S->>KV: Store leaf→root mappings
S->>KV: Store root→serialized tree
S->>SQL: INSERT pending_attestation
S->>J: commit() (advance consumed_index)
S->>TX: Wake TxSender
end
Wait Strategy
The stamper triggers batching on whichever condition fires first:
| Trigger | Condition | Behavior |
|---|---|---|
| Timeout | max_interval_seconds elapsed (default: 10s) | Takes largest power-of-2 ≤ available entries |
| Max entries | max_entries_per_timestamp reached (default: 1024) | Takes exactly max_entries_per_timestamp entries |
If a timeout fires but fewer than min_leaves (default: 16) entries are available, the stamper takes all available entries to avoid creating excessively small trees.
Power-of-Two Leaf Selection
The batch size is always a power of two (or all available entries if below min_leaves). This matches the Merkle tree’s power-of-two padding requirement and minimizes wasted padding nodes.
For example, with 300 available entries, the stamper takes 256 (the largest power of two ≤ 300). The remaining 44 entries stay in the journal for the next batch.
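Selecting the largest power of two not exceeding the available count is a one-line bit trick; a sketch:

```rust
/// Largest power of two <= n (n must be > 0): keep only the top set bit.
fn prev_power_of_two(n: usize) -> usize {
    1 << (usize::BITS - 1 - n.leading_zeros())
}

fn main() {
    assert_eq!(prev_power_of_two(300), 256); // the example from the text
    assert_eq!(prev_power_of_two(1024), 1024);
    assert_eq!(prev_power_of_two(1), 1);
    // 300 available entries: take 256, leave 44 for the next batch.
    assert_eq!(300 - prev_power_of_two(300), 44);
}
```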
Tree Construction
Tree construction is CPU-intensive (hashing thousands of nodes), so it runs on a blocking thread to avoid starving the Tokio async runtime:
let tree = tokio::task::spawn_blocking(move || {
    let unhashed = MerkleTree::<Keccak256>::new_unhashed(&leaves);
    unhashed.finalize()
}).await?;
The two-phase construction (new_unhashed → finalize) separates allocation from computation, allowing the hashing to happen entirely on the blocking thread.
KV Storage
After tree construction, two types of mappings are stored in RocksDB:
| Key | Value | Purpose |
|---|---|---|
| leaf (32 bytes) | root (32 bytes) | Maps each commitment to its tree root |
| root (32 bytes) | Serialized tree (variable) | Stores the full tree for later proof generation |
For single-leaf trees (when only one entry is available), the leaf itself is the root, and the tree serialization is stored directly under the leaf key.
The DbExt trait on RocksDB provides the storage interface:
pub trait DbExt<D: Digest> {
    fn load_trie(&self, root: B256) -> Result<Option<MerkleTree<D>>>;
    fn get_root_for_leaf(&self, leaf: B256) -> Result<Option<B256>>;
}
SQL: Pending Attestation
A record is inserted into SQLite to track the attestation lifecycle:
INSERT INTO pending_attestations (trie_root, created_at, updated_at, result)
VALUES (?, ?, ?, 'pending');
The result field transitions through: pending → success | max_attempts_exceeded.
Journal Commit
After the tree and all storage writes succeed, the stamper calls reader.commit() to advance the journal’s consumed_index. This deletes the consumed entries from RocksDB and frees capacity for new submissions.
The ordering is critical: storage writes happen before the journal commit. If the process crashes between tree creation and journal commit, the entries will be re-read and re-processed on restart (at-least-once semantics). Since Merkle trees are deterministic, re-processing produces identical results.
Configuration
pub struct StamperConfig {
    pub max_interval_seconds: u64,        // Default: 10
    pub max_entries_per_timestamp: usize, // Default: 1024 (must be power of 2)
    pub min_leaves: usize,                // Default: 16
}
On-Chain Attestation
The attestation phase takes pending Merkle roots and records them on-chain via the Ethereum Attestation Service (EAS). It is deliberately decoupled from tree creation to handle transient blockchain errors without blocking the batching pipeline.
Decoupled Design
Tree creation and on-chain attestation run as separate tasks:
- The stamper builds trees and creates `pending_attestation` records in SQL.
- The TxSender watches for pending records and submits them to EAS.
This separation means that if the RPC endpoint is down or gas prices spike, the stamper continues batching. Pending attestations queue up in SQL and are retried when conditions improve.
TxSender Work Loop
The TxSender wakes on three triggers:
- New batch signal — the stamper notifies via a channel that a new pending attestation was created.
- Retry timeout — 10 seconds after a failed attempt, the sender retries all pending attestations.
- Cancellation — graceful shutdown.
pub struct TxSender<P: Provider> {
    eas: EAS<P>,
    sql_storage: SqlitePool,
    waker: Receiver<()>,
    token: CancellationToken,
}
EAS.timestamp(root) Call
For each pending attestation, the sender calls the EAS timestamp function with the Merkle root:
let receipt = self.eas.timestamp(attestation.trie_root).send().await?;
On success, the transaction hash and block number are recorded in the attest_attempts table:
INSERT INTO attest_attempts (attestation_id, chain_id, tx_hash, block_number, created_at)
VALUES (?, ?, ?, ?, ?);
Retry Logic
| Attempt | Outcome | Action |
|---|---|---|
| 1–3 | Transient error | Retry after 10 seconds |
| > 3 | MAX_RETRIES exceeded | Mark as max_attempts_exceeded |
| Any | Success | Mark as success |
The maximum retry count is defined as:
const MAX_RETRIES: i64 = 3;
Handling “Already Timestamped” Reverts
If the EAS contract reverts because the root was already timestamped (e.g., from a previous attempt that succeeded but whose receipt was lost), the TxSender recovers gracefully:
1. Call `EAS.getTimestamp(root)` to retrieve the existing timestamp.
2. Binary-search the block range to narrow down the transaction.
3. Query `Timestamped` event logs within the narrowed range.
4. Extract the original `tx_hash` and `block_number`.
5. Record the attempt as successful.
This ensures idempotent behavior — submitting the same root twice does not create duplicate on-chain entries and does not count as a failure.
Transaction Flow
```mermaid
flowchart TD
    A[Load pending attestations] --> B{Any pending?}
    B -->|No| C[Sleep / wait for signal]
    B -->|Yes| D[EAS.timestamp root]
    D --> E{Success?}
    E -->|Yes| F[Record tx_hash + block]
    E -->|Reverted: already timestamped| G[Recover existing tx]
    E -->|Transient error| H{Attempts < MAX_RETRIES?}
    H -->|Yes| I[Record failed attempt]
    H -->|No| J[Mark max_attempts_exceeded]
    F --> K[Mark success]
    G --> K
    I --> C
    J --> C
    K --> B
```
Proof Retrieval
After the on-chain attestation succeeds, users can retrieve their completed timestamp proof from the calendar server.
Retrieval Endpoint
GET /digest/{commitment}
Where `{commitment}` is the hex-encoded 32-byte commitment returned during submission.
Lookup Flow
Step 1: Leaf → Root
The server looks up the commitment in the RocksDB KV store:
```rust
let root = kv_db.get_root_for_leaf(commitment)?;
```
If the commitment is not found, the entry hasn’t been batched yet — return 404 Not Found.
Step 2: Check Attestation Status
The server queries SQLite for the attestation result:
```rust
let result = get_attestation_result(&pool, root).await?;
```
| Status | HTTP Response |
|---|---|
| `pending` | 404 Not Found (not yet attested) |
| `max_attempts_exceeded` | 500 Internal Server Error |
| `success` | Continue to proof construction |
Step 3: Merkle Proof Reconstruction
The server loads the full Merkle tree from KV storage and generates a proof for the user’s leaf:
```rust
let tree = kv_db.load_trie::<Keccak256>(root)?;
let proof_iter = tree.get_proof_iter(&commitment)?;
```
The proof is a sequence of `(NodePosition, Hash)` pairs that the user applies to reconstruct the root from their leaf.
Step 4: Build Timestamp Tree
The proof is encoded as an OTS Timestamp tree with the Merkle proof steps and a terminal EASTimestamped attestation:
commitment
└─ APPEND(sibling₀) or PREPEND(sibling₀)
└─ KECCAK256
└─ APPEND(sibling₁) or PREPEND(sibling₁)
└─ KECCAK256
└─ ...
└─ EASTimestamped { chain: scroll }
Each sibling in the Merkle proof becomes either an APPEND or PREPEND operation, depending on whether the target node is the left or right child (i.e., the NodePosition). After each append/prepend, a KECCAK256 operation computes the parent hash.
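The append/prepend-then-hash loop above can be sketched as a simple fold. This is a stdlib-only illustration: `DefaultHasher` stands in for Keccak256 (which is not in the standard library), the `Step` enum mirrors the APPEND/PREPEND opcodes, and the real tree additionally prefixes inner nodes with `0x01`, which is omitted here for brevity.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in hash for the sketch (the real protocol uses Keccak256).
fn h(bytes: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    bytes.hash(&mut s);
    s.finish()
}

/// One Merkle proof step, mirroring the OTS opcodes: the sibling is either
/// appended (target is the left child) or prepended (target is the right child).
enum Step {
    Append(u64),
    Prepend(u64),
}

/// Fold the proof: each step concatenates the sibling on the correct side
/// and re-hashes, exactly like an APPEND/PREPEND followed by KECCAK256.
fn fold_proof(leaf: u64, steps: &[Step]) -> u64 {
    steps.iter().fold(leaf, |acc, step| match step {
        Step::Append(sib) => h(&[acc.to_be_bytes(), sib.to_be_bytes()].concat()),
        Step::Prepend(sib) => h(&[sib.to_be_bytes(), acc.to_be_bytes()].concat()),
    })
}
```

Verification then reduces to checking that the folded value equals the attested root.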
Step 5: Encode and Return
The timestamp tree is encoded to OTS binary format and returned with caching headers:
Cache-Control: public, immutable
Once an attestation is confirmed on-chain, the proof is immutable — the same commitment will always produce the same response. This allows aggressive client-side and CDN caching.
Complete Response Structure
The user’s final .ots file (after merging the retrieval response with their original submission) looks like:
digest
└─ PREPEND(timestamp)
└─ APPEND(signature)
└─ KECCAK256 ← commitment (leaf)
└─ APPEND(sibling₀)
└─ KECCAK256
└─ PREPEND(sibling₁)
└─ KECCAK256
└─ EASTimestamped { chain_id }
The user now has a self-contained proof that:
- Their digest was received at a specific time (via the timestamp + signature).
- The commitment was included in a specific Merkle tree (via the proof path).
- The Merkle root was recorded on-chain (via the EASTimestamped attestation).
Verification
Verification is the process of taking an .ots file and the original data, and confirming that the timestamp is valid and anchored on-chain.
Verification Steps
Step 1: Recompute the Digest
Hash the original file with the same algorithm specified in the OTS digest header:
```rust
let digest = hash_file::<Keccak256>(&file_contents);
assert_eq!(digest, ots_file.digest_header.digest());
```
If the digest doesn’t match, the file has been modified since timestamping.
Step 2: Finalize the Proof Tree
Walk the OTS timestamp tree, executing each opcode to compute intermediate values:
```rust
timestamp.finalize(&digest)?;
```
This propagates the original digest through PREPEND, APPEND, and KECCAK256 operations until reaching the attestation nodes. Each attestation’s value field is set to the computed input at that point in the tree.
Step 3: Verify On-Chain
For each EASTimestamped attestation in the tree:
- Connect to the appropriate Ethereum RPC (auto-detected by chain ID from the attestation).
- Call the EAS contract to verify the computed value was timestamped:
```rust
let verifier = EASVerifier::new(provider);
let result = verifier.verify(&attestation, &value).await?;
```
The verifier checks that EAS.getTimestamp(value) returns a non-zero timestamp, confirming the Merkle root was recorded on-chain.
Step 4: Display Results
On success, the CLI displays:
- The chain where the attestation lives.
- The attestation UID.
- The attester address.
- The block time (when the root was timestamped).
Full Pipeline Sequence
```mermaid
sequenceDiagram
    participant U as User
    participant CLI as uts-cli
    participant C as Calendar Server
    participant J as Journal
    participant S as Stamper
    participant EAS as EAS (L2)
    Note over U,EAS: === STAMP PHASE ===
    U->>CLI: uts stamp myfile.pdf
    CLI->>CLI: digest = keccak256(file)
    CLI->>C: POST /digest (digest)
    C->>C: Sign (EIP-191)
    C->>C: commitment = keccak256(ts ∥ sig ∥ keccak256(digest))
    C->>J: journal.commit(commitment)
    C-->>CLI: OTS bytes + commitment
    CLI->>CLI: Write myfile.pdf.ots (PendingAttestation)
    Note over U,EAS: === BATCH PHASE ===
    S->>J: read(batch_size)
    S->>S: Build MerkleTree from commitments
    S->>S: Store leaf→root, root→tree in KV
    S->>S: INSERT pending_attestation in SQL
    S->>J: commit()
    Note over U,EAS: === ATTEST PHASE ===
    S->>EAS: EAS.timestamp(merkle_root)
    EAS-->>S: tx receipt (tx_hash, block_number)
    S->>S: UPDATE attestation → success
    Note over U,EAS: === UPGRADE PHASE ===
    CLI->>C: GET /digest/{commitment}
    C->>C: Load tree, generate Merkle proof
    C-->>CLI: OTS bytes with EASTimestamped
    CLI->>CLI: Merge into myfile.pdf.ots
    Note over U,EAS: === VERIFY PHASE ===
    U->>CLI: uts verify myfile.pdf
    CLI->>CLI: Recompute digest
    CLI->>CLI: Finalize proof tree
    CLI->>EAS: EAS.getTimestamp(root)
    EAS-->>CLI: timestamp (non-zero = valid)
    CLI-->>U: ✓ Verified: attested at block N, time T
```
Error Cases
| Error | Meaning |
|---|---|
| `NoValue` | Attestation node has no computed value (tree not finalized) |
| `Pending` | Attestation is still pending — poll the calendar server |
| `BadAttestationTag` | Unknown attestation type |
| `Decode` | Malformed attestation data |
| `EAS` | On-chain verification failed (root not found, wrong chain, etc.) |
Verification Without the CLI
Verification requires only:
- The `.ots` file.
- The original data.
- Access to an Ethereum RPC.

Any implementation that can parse the OTS codec and execute the opcodes can therefore verify a timestamp independently. There is no dependency on the calendar server for verification — the proof is fully self-contained.
Smart Contracts Architecture
The UTS L1 anchoring pipeline is implemented across four primary smart contracts that coordinate cross-chain timestamping between L2 (Scroll) and L1 (Ethereum mainnet).
Contract Overview
```mermaid
graph TB
    subgraph "L2 (Scroll)"
        L2MGR[L2AnchoringManager<br/>UUPS + ERC721]
        FEE[FeeOracle<br/>Dynamic pricing]
        NFT[NFTGenerator<br/>SVG certificates]
        EASL2[EAS Contract]
        L2MSG[L2ScrollMessenger]
        MT[MerkleTree.sol<br/>On-chain verification]
    end
    subgraph "L1 (Ethereum)"
        L1GW[L1AnchoringGateway<br/>UUPS]
        EASL1[EAS Contract]
        L1MSG[L1ScrollMessenger]
        EASHELPER[EASHelper.sol]
    end
    User -->|submitForL1Anchoring| L2MGR
    L2MGR -->|getAttestation| EASL2
    L2MGR -->|getFloorFee| FEE
    L2MGR -->|computeRoot| MT
    L2MGR -->|generateTokenURI| NFT
    Relayer -->|submitBatch| L1GW
    L1GW -->|timestamp| EASL1
    L1GW -->|sendMessage| L1MSG
    L1MSG -.->|cross-chain| L2MSG
    L2MSG -->|notifyAnchored| L2MGR
    Relayer -->|finalizeBatch| L2MGR
    User -->|claimNFT| L2MGR
    L1GW -->|attest| EASHELPER
```
EASHelper
A library that wraps EAS attestation creation with UTS-specific parameters:
```solidity
bytes32 constant CONTENT_HASH_SCHEMA =
    0x5c5b8b295ff43c8e442be11d569e94a4cd5476f5e23df0f71bdd408df6b9649c;
```
All UTS attestations share:
- Schema: `CONTENT_HASH_SCHEMA` (un-revocable content hash)
- Recipient: `address(0)` (no specific recipient)
- Revocable: `false`
- Expiration: `0` (never expires)
- Data: `abi.encode(root)` (the Merkle root as a single `bytes32`)
MerkleTree.sol
An on-chain implementation of the same binary Merkle tree algorithm used in the Rust uts-bmt crate. It ensures that roots computed off-chain match roots verified on-chain.
Key implementation details:
- Uses `INNER_NODE_PREFIX = 0x01` — identical to the Rust implementation.
- Pads leaves to a power-of-two width with `EMPTY_LEAF = bytes32(0)`.
- The `hashNode` function uses inline assembly for gas efficiency:
```solidity
function hashNode(bytes32 left, bytes32 right) public pure returns (bytes32 result) {
    assembly {
        // mstore8 writes the single prefix byte; a full mstore(0x00, 0x01)
        // would place the 0x01 at offset 0x1f, where `left` then overwrites it.
        mstore8(0x00, 0x01)             // INNER_NODE_PREFIX
        mstore(0x01, left)              // 32 bytes
        mstore(0x21, right)             // 32 bytes
        result := keccak256(0x00, 0x41) // hash all 65 bytes
    }
}
```
The computeRoot function reconstructs the full tree from an array of leaves:
1. Calculate `width = nextPowerOfTwo(count)`.
2. Hash leaf pairs into a buffer of `width / 2` entries.
3. Handle odd leaves and padding.
4. Iteratively hash up the tree in place until one root remains.

The `verify` function simply compares `computeRoot(leaves)` against an expected root.
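The same algorithm can be sketched in a few lines of stdlib-only Rust. This mirrors the steps described above but is not the actual `uts-bmt` code: `DefaultHasher` over `u64` values stands in for Keccak256 over `bytes32`, so only the structure (prefixing, padding, pairwise folding) carries over.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for Keccak256 (stdlib-only sketch).
fn h(bytes: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    bytes.hash(&mut s);
    s.finish()
}

const INNER_NODE_PREFIX: u8 = 0x01;
const EMPTY_LEAF: u64 = 0; // stand-in for bytes32(0)

/// Hash an inner node: prefix byte, then left child, then right child —
/// the analogue of hashing 65 bytes (0x41) in the Solidity version.
fn hash_node(left: u64, right: u64) -> u64 {
    let mut buf = vec![INNER_NODE_PREFIX];
    buf.extend_from_slice(&left.to_be_bytes());
    buf.extend_from_slice(&right.to_be_bytes());
    h(&buf)
}

/// Mirror of the computeRoot steps: pad to a power of two, then hash pairs
/// level by level, in place, until a single root remains.
fn compute_root(leaves: &[u64]) -> u64 {
    assert!(!leaves.is_empty());
    let width = leaves.len().next_power_of_two();
    let mut level = leaves.to_vec();
    level.resize(width, EMPTY_LEAF); // pad with empty leaves
    while level.len() > 1 {
        level = level.chunks(2).map(|p| hash_node(p[0], p[1])).collect();
    }
    level[0]
}
```

Note that for a single leaf the loop never runs and the leaf itself is the root, matching the single-leaf behavior of the KV store described later.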
ERC-7201 Namespaced Storage
Both L1AnchoringGateway and L2AnchoringManager use ERC-7201 namespaced storage to avoid storage slot collisions in the upgradeable proxy pattern:
```solidity
// L1AnchoringGateway
bytes32 constant SLOT = keccak256(
    abi.encode(uint256(keccak256("uts.storage.L1AnchoringGateway")) - 1)
) & ~bytes32(uint256(0xff));

// L2AnchoringManager
bytes32 constant SLOT = keccak256(
    abi.encode(uint256(keccak256("uts.storage.L2AnchoringManager")) - 1)
) & ~bytes32(uint256(0xff));
```
This pattern stores all contract state in a deterministic, collision-free storage slot rather than in sequential slots starting from slot 0.
Upgrade Pattern
Both gateway contracts use the UUPS (Universal Upgradeable Proxy Standard) pattern:
- The implementation contract contains the upgrade logic.
- Only the admin can authorize upgrades.
- A 3-day admin transfer delay prevents hasty privilege changes.
L2AnchoringManager
The L2AnchoringManager is the L2-side orchestrator for the L1 anchoring pipeline. It manages a queue of user-submitted attestation roots, receives cross-chain notifications from L1, verifies batch integrity, and mints NFT certificates.
submitForL1Anchoring
Users call this function to request L1 anchoring for an existing EAS attestation:
```solidity
function submitForL1Anchoring(
    bytes32 attestationId
) external payable nonReentrant
```
Validation Steps
1. Duplicate check: the attestation must not already be submitted.
2. Fee check: `msg.value >= feeOracle.getFloorFee()`.
3. Attestation validation:
   - Schema must be `CONTENT_HASH_SCHEMA`.
   - Expiration must be `0` (non-expiring).
   - Must be non-revocable.
4. Decode root: `abi.decode(attestation.data, (bytes32))`.
Storage
```solidity
struct AnchoringRecord {
    bytes32 root;          // User's content hash
    bytes32 attestationId; // EAS attestation ID
    uint256 blockNumber;   // L2 block of submission
}

indexToRecords[queueIndex] = record;
attestationIdToIndex[attestationId] = queueIndex;
queueIndex++;
```
The queueIndex starts at 1 and increments monotonically. Index 0 is reserved as a sentinel for “not found”.
Fee Refund
If the user overpays, excess ETH is refunded to a configurable refund address (defaults to msg.sender).
FeeOracle
The FeeOracle calculates the per-item fee for L1 anchoring based on current gas prices:
$$ \text{fee} = \frac{\text{estimatedCost} \times \text{feeMultiplier}}{\text{expectedBatchSize} \times \text{PRECISION}} $$
Where the estimated batch cost is:
$$ \text{estimatedCost} = \underbrace{l1BaseFee \times l1Gas}_{\text{L1 attestation cost}} + \underbrace{crossDomainGasPrice \times crossDomainGas}_{\text{cross-chain message cost}} + \underbrace{l2BaseFee \times l2ExecutionGas}_{\text{L2 finalization cost}} $$
And L2 execution gas scales with batch size:
$$ l2ExecutionGas = l2ExecutionScalar \times batchSize + l2ExecutionOverhead $$
Default Parameters
| Parameter | Default | Description |
|---|---|---|
| `l1GasEstimated` | 350,000 | Gas to attest batch on L1 |
| `crossDomainGasEstimated` | 110,000 | Gas for L1→L2 message |
| `l2ExecutionScalar` | 3,500 | Per-item L2 gas |
| `l2ExecutionOverhead` | 35,000 | Base L2 gas |
| `expectedBatchSize` | 256 | Assumed items per batch |
| `feeMultiplier` | 1.5 × 10¹⁸ | Safety margin (1.5×) |
The fee oracle reads l1BaseFee from Scroll’s L1 gas price oracle predeployed at 0x5300000000000000000000000000000000000002.
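Putting the formula and the default parameters together, the floor fee can be sketched as integer arithmetic (the same style Solidity would use). This is an illustrative Rust sketch, not the contract code; the gas-price arguments are example inputs, not protocol constants.

```rust
// 1e18, the fixed-point precision used by the fee formula.
const PRECISION: u128 = 1_000_000_000_000_000_000;

/// Sketch of the FeeOracle floor-fee formula with the documented defaults.
/// Inputs are in wei per gas; the result is the per-item fee in wei.
fn floor_fee(l1_base_fee: u128, cross_domain_gas_price: u128, l2_base_fee: u128) -> u128 {
    let l1_gas: u128 = 350_000; // l1GasEstimated
    let cross_gas: u128 = 110_000; // crossDomainGasEstimated
    let l2_scalar: u128 = 3_500; // l2ExecutionScalar
    let l2_overhead: u128 = 35_000; // l2ExecutionOverhead
    let batch: u128 = 256; // expectedBatchSize
    let multiplier: u128 = 3 * PRECISION / 2; // feeMultiplier = 1.5e18

    // l2ExecutionGas scales linearly with batch size.
    let l2_exec_gas = l2_scalar * batch + l2_overhead;
    let estimated_cost =
        l1_base_fee * l1_gas + cross_domain_gas_price * cross_gas + l2_base_fee * l2_exec_gas;
    // fee = estimatedCost * feeMultiplier / (expectedBatchSize * PRECISION)
    estimated_cost * multiplier / (batch * PRECISION)
}
```

With all three gas prices set to 1 wei, the estimated batch cost is 350,000 + 110,000 + 931,000 = 1,391,000 wei, and the per-item fee rounds down to 1,391,000 × 1.5 / 256 ≈ 8,150 wei.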
Queue Index Tracking
The manager maintains two indices:
- `queueIndex`: next available slot for new submissions (monotonically increasing).
- `confirmedIndex`: the boundary of confirmed batches. All entries with index < `confirmedIndex` are confirmed.
┌──────────┬───────────────────┬──────────────┐
│Confirmed │ Pending Batch │ Unprocessed │
│ [1, ci) │ [ci, ci+count) │ [ci+count, qi)│
└──────────┴───────────────────┴──────────────┘
1 ci qi
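The bookkeeping in the diagram above reduces to two range comparisons. A sketch, with invented names (`classify`, `pending_count`), purely to illustrate the invariants:

```rust
/// Classify a queue entry against the counters shown above. Valid entries
/// live in [1, queue_index); index 0 is the "not found" sentinel.
fn classify(i: u64, confirmed_index: u64, pending_count: u64, queue_index: u64) -> &'static str {
    assert!(i >= 1 && i < queue_index, "entry indices live in [1, queueIndex)");
    if i < confirmed_index {
        "confirmed" // already covered by a finalized batch
    } else if i < confirmed_index + pending_count {
        "pending" // part of the batch currently in flight
    } else {
        "unprocessed" // not yet packed into any batch
    }
}
```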
notifyAnchored
Called by the L2 Scroll Messenger when the L1 gateway successfully timestamps a batch:
```solidity
function notifyAnchored(
    bytes32 claimedRoot,
    uint256 startIndex,
    uint256 count,
    uint256 l1Timestamp,
    uint256 l1BlockNumber
) external
```
Guards:
- `msg.sender` must be the L2 Scroll Messenger.
- `xDomainMessageSender` must be the L1 Gateway.
- `startIndex` must equal `confirmedIndex` (sequential batches only).
- No pending batch can exist (prevents overlapping batches).
The function stores a PendingL1Batch for later verification and finalization.
finalizeBatch
Anyone can call this function to complete a pending batch:
```solidity
function finalizeBatch() external nonReentrant
```
1. Load the pending batch.
2. Reconstruct the Merkle tree from stored `AnchoringRecord` roots.
3. Verify: `MerkleTree.computeRoot(leaves) == pendingBatch.claimedRoot`.
4. Update `confirmedIndex = startIndex + count`.
5. Store the finalized `L1Batch` record.
6. Clear the pending batch.
The on-chain Merkle verification ensures the relayer cannot claim a fraudulent root. The contract independently reconstructs the tree from its own stored data and compares.
Cross-Chain Relay
The relayer (uts-relayer) orchestrates the full L2→L1→L2 anchoring lifecycle. It monitors L2 events, packs batches, submits them to L1, and finalizes them back on L2.
Architecture
The relayer consists of two main components:
- L2 Indexer — scans and subscribes to on-chain events from the `L2AnchoringManager`.
- Batch Engine — a state machine that drives batches through their lifecycle.
L2 Indexer
The indexer tracks three event types from the L2AnchoringManager contract:
| Event | Purpose |
|---|---|
| `L1AnchoringQueued` | User submitted a root for L1 anchoring |
| `L1BatchArrived` | Cross-chain notification arrived from L1 |
| `L1BatchFinalized` | Batch verification and finalization completed |
For each event type, the indexer runs two parallel tasks:
- Scanner: historical catch-up via `eth_getLogs` with configurable batch size.
- Subscriber: real-time monitoring via WebSocket subscription.
The scanner rewinds by 100 blocks on startup for reorg protection:
```rust
const REWIND_BLOCKS: u64 = 100;
let start = last_indexed_block.saturating_sub(REWIND_BLOCKS);
```
Indexer progress is persisted in the indexer_cursors table, keyed by (chain_id, event_signature_hash).
Full Lifecycle Sequence
```mermaid
sequenceDiagram
    participant L2I as L2 Indexer
    participant R as Relayer Engine
    participant L1 as L1AnchoringGateway
    participant EAS as EAS (L1)
    participant MSG as Scroll Messenger
    participant L2M as L2AnchoringManager
    L2I->>L2I: Scan L1AnchoringQueued events
    L2I->>R: Pending queue count
    R->>R: Pack batch (Merkle tree)
    Note over R: Status: Collected
    R->>L1: submitBatch(root, start, count, gasLimit)
    Note over R: Status: L1Sent
    L1->>EAS: timestamp(root)
    L1->>MSG: sendMessage(notifyAnchored, ...)
    R->>R: Poll for L1 receipt
    Note over R: Status: L1Mined
    MSG-->>L2M: notifyAnchored(root, start, count, ...)
    L2I->>R: L1BatchArrived event detected
    Note over R: Status: L2Received
    R->>L2M: finalizeBatch()
    Note over R: Status: L2FinalizeTxSent
    R->>R: Poll for L2 receipt
    L2I->>R: L1BatchFinalized event detected
    Note over R: Status: L2Finalized
```
Batch State Machine
```mermaid
stateDiagram-v2
    [*] --> Collected: Pack batch\n(Merkle tree from queued roots)
    Collected --> L1Sent: submitBatch() on L1
    L1Sent --> L1Mined: L1 tx receipt confirmed
    L1Mined --> L2Received: Cross-chain message\narrived on L2
    L2Received --> L2FinalizeTxSent: finalizeBatch() on L2
    L2FinalizeTxSent --> L2Finalized: L2 tx receipt confirmed
    L2Finalized --> [*]: Ready for next batch
```
State Transitions
| From | To | Trigger | Action |
|---|---|---|---|
| (none) | Collected | Queue has enough items or timeout | may_pack_new_batch() |
| Collected | L1Sent | — | send_attest_tx(): call L1AnchoringGateway.submitBatch() |
| L1Sent | L1Mined | L1 tx receipt | watch_l1_tx(): validate Timestamped event, record gas fees |
| L1Mined | L2Received | L1BatchArrived event indexed | Wait for cross-chain message delivery |
| L2Received | L2FinalizeTxSent | — | send_finalize_batch_tx(): call L2AnchoringManager.finalizeBatch() |
| L2FinalizeTxSent | L2Finalized | L2 tx receipt | watch_finalize_batch_tx(): validate L1BatchFinalized event |
Batch Packing Logic
The relayer packs a new batch when:
```text
next_start_index = previous_batch.start_index + previous_batch.count
pending_count    = count_pending_events(next_start_index)

Pack if:
    pending_count >= batch_max_size OR
    (pending_count > 0 AND elapsed >= batch_max_wait_seconds)
```
Batch packing constructs a MerkleTree<Keccak256> from the queued roots and stores the batch record:
```sql
INSERT INTO l1_batch (l2_chain_id, start_index, count, root, status)
VALUES (?, ?, ?, ?, 'Collected');
```
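The packing condition above is a one-line predicate. A sketch (the function name is invented; the real relayer applies this logic inside `may_pack_new_batch()`):

```rust
/// The packing predicate from the pseudocode above: seal a batch when it is
/// full, or when it is non-empty and has waited long enough.
fn should_pack(
    pending_count: u64,
    elapsed_secs: u64,
    batch_max_size: u64,
    batch_max_wait_seconds: u64,
) -> bool {
    pending_count >= batch_max_size
        || (pending_count > 0 && elapsed_secs >= batch_max_wait_seconds)
}
```

The timeout branch guarantees that a lone submission is never stranded waiting for a full batch.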
Cost Tracking
The relayer records detailed cost breakdowns for each batch:
```sql
-- batch_fee table
INSERT INTO batch_fee (internal_batch_id, l1_gas_fee, l2_gas_fee, cross_chain_fee)
VALUES (?, ?, ?, ?);
```
- L1 gas fee: `gas_used × effective_gas_price` from the L1 receipt.
- L2 gas fee: `gas_used × effective_gas_price` from the L2 finalization receipt.
- Cross-chain fee: ETH value sent with the `submitBatch` call (pays for L1→L2 message delivery).
Configuration
```rust
pub struct RelayerConfig {
    pub batch_max_size: i64,                // Max items per batch (≤ 512)
    pub batch_max_wait_seconds: i64,        // Timeout before sealing
    pub tick_interval_seconds: u64,         // State machine poll frequency
    pub l1_batch_submission_gas_limit: u64, // Gas limit for L1 tx
    pub l1_batch_submission_fee: U256,      // ETH value for cross-chain msg
}
```
Database Schema
The relayer maintains its state in SQLite with these core tables:
| Table | Purpose |
|---|---|
| `indexer_cursors` | Track scanning progress per event type |
| `eth_block` | Block metadata for indexed events |
| `eth_transaction` | Transaction metadata |
| `eth_log` | Log metadata |
| `l1_anchoring_queued` | Queued anchoring requests (from L2 events) |
| `l1_batch` | Batch lifecycle state |
| `l1_batch_arrived` | Cross-chain arrival events |
| `l1_batch_finalized` | Finalization events |
| `tx_receipt` | Transaction execution details |
| `batch_fee` | Per-batch cost breakdown |
NFT Certificates
After a batch is finalized, users can claim an ERC-721 NFT certificate that serves as a visual, on-chain proof that their content hash was anchored on L1 Ethereum.
Claiming
Users call claimNFT on the L2AnchoringManager:
```solidity
function claimNFT(
    bytes32 attestationId,
    uint256 batchStartIndexHint
) external nonReentrant
```
Requirements
- The attestation must exist and be mapped to a queue index.
- The queue index must be confirmed (`index < confirmedIndex`).
- The NFT must not already be claimed.
- The `batchStartIndexHint` must point to the correct batch containing this index.
Minting
The token ID equals the queue index. The NFT is minted to the original attester (the address that created the EAS attestation), not necessarily msg.sender.
On-Chain Metadata
The tokenURI function generates fully on-chain metadata — no IPFS or external hosting required:
```solidity
function tokenURI(uint256 tokenId) public view returns (string memory)
```
It returns a data:application/json;base64,... URI containing:
```json
{
  "name": "Certificate #<tokenId> - <l2Name>",
  "description": "Proof of content existence at timestamp ...",
  "external_url": "https://timestamps.now/<chainId>/<tokenId>",
  "image": "data:image/svg+xml;base64,...",
  "attributes": [
    { "display_type": "date", "trait_type": "date", "value": <unix_timestamp> },
    { "trait_type": "l1BlockNumber", "value": "<block>" },
    { "trait_type": "l2BlockNumber", "value": "<block>" }
  ]
}
```
SVG Generation
The NFTGenerator contract produces a complex SVG certificate design entirely on-chain. The visual includes:
- Gradient background with animated glow effects.
- Grid pattern overlay.
- Tree diagram symbolizing the Merkle tree structure.
- Certificate ID (token ID with comma formatting).
- Content hash displayed as two lines of 32 hex characters.
- L2 block number of the original submission.
- L1 block number of the anchoring transaction.
- Timestamp formatted as `YYYY-MM-DD HH:MM:SS` (using Solady's `DateTime` library).
- Code 128-C barcode encoding the token ID.
- “UNIVERSAL TIMESTAMPS PROTOCOL” watermark.
Code 128-C Barcode
The Code128CGenerator contract generates a Code 128-C barcode as an SVG element:
- The token ID is zero-padded to 20 digits.
- Digits are grouped into pairs (Code 128-C encodes two digits per symbol).
- A checksum is calculated using the Code 128-C weighted sum algorithm.
- The barcode is rendered as alternating white and blue bars in an SVG `<g>` element.
This allows the certificate to be scanned and linked back to the on-chain record.
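The symbol-packing and checksum steps above can be sketched in Rust. This is illustrative only — the real `Code128CGenerator` is a Solidity contract — but the arithmetic is the standard Code 128 check-digit scheme: start code C has symbol value 105, and the check symbol is the start value plus each symbol value weighted by its 1-based position, modulo 103.

```rust
/// Sketch of the Code 128-C encoding: zero-pad the token ID to 20 digits,
/// pack two digits per symbol, and compute the weighted-sum check symbol.
fn code128c(token_id: u64) -> (Vec<u32>, u32) {
    let digits = format!("{:020}", token_id); // zero-pad to 20 digits
    let symbols: Vec<u32> = digits
        .as_bytes()
        .chunks(2) // Code 128-C encodes two digits per symbol
        .map(|pair| (pair[0] - b'0') as u32 * 10 + (pair[1] - b'0') as u32)
        .collect();
    // Check symbol: start code C (105) + Σ position_i × value_i, mod 103.
    let checksum = symbols
        .iter()
        .enumerate()
        .fold(105, |acc, (i, &v)| acc + (i as u32 + 1) * v)
        % 103;
    (symbols, checksum)
}
```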
Design Rationale
Generating NFT metadata on-chain (rather than pointing to IPFS) ensures:
- Permanence: The metadata cannot disappear if an IPFS pin is removed.
- Trustlessness: Anyone can verify the metadata by calling the contract directly.
- Consistency: The visual representation always matches the on-chain state.
The SVG is regenerated on every `tokenURI` call rather than stored, so the rendered certificate always reflects the current on-chain state.
Storage Architecture
UTS uses a three-layer storage strategy, choosing the right technology for each workload’s access pattern and durability requirements.
Storage Overview
| Layer | Technology | Component | Purpose | Data Stored |
|---|---|---|---|---|
| Journal | RocksDB | Calendar Server | High-throughput WAL | Pending digest commitments |
| KV Store | RocksDB | Calendar Server | Merkle tree storage | Trees + leaf→root mappings |
| SQL Store | SQLite | Calendar Server (Stamper) | Attestation metadata | Pending attestations + attempt history |
| Relayer DB | SQLite | Relayer Service | Event indexing + batch state | Cursors, batches, costs, event logs |
Journal (RocksDB)
Purpose: Durable buffer between HTTP handler and stamper.
Access pattern: Append-only writes, sequential reads, bulk deletes.
RocksDB is ideal here because:
- Synchronous writes guarantee durability before HTTP response.
- Sequential key layout (monotonic u64 indices) enables efficient range scans.
- Bulk deletes on commit are efficient via RocksDB’s compaction.
Column families:
- `entries` — entry data keyed by write index.
- `meta` — `write_index` and `consumed_index` metadata.
Capacity: configurable (default: 1,048,576 entries). When capacity is reached, back-pressure is applied by returning `Error::Full`.
See Journal / WAL for implementation details.
KV Store (RocksDB)
Purpose: Store Merkle trees and leaf→root mappings for proof retrieval.
Access pattern: Point lookups by 32-byte hash keys.
Two types of entries:
| Key | Value | Size |
|---|---|---|
| Leaf hash (32B) | Root hash (32B) | 64 bytes per entry |
| Root hash (32B) | Serialized tree | Variable (depends on leaf count) |
The KV store uses RocksDB’s default column family with DB::open_default(). Trees are serialized as raw byte arrays via MerkleTree::as_raw_bytes() for zero-copy storage and retrieval.
Retrieval logic (DbExt trait):
- `get_root_for_leaf(leaf)`: returns the root hash for a given commitment.
- `load_trie(root)`: deserializes the full Merkle tree for proof generation.
For single-leaf trees, the leaf itself is the root — the tree serialization is stored directly and detected by value length (≠ 32 bytes).
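The length-based dispatch can be sketched as follows. The `KvValue` enum and function name are invented for the example; the real `DbExt` trait performs the equivalent check when reading a value back from RocksDB.

```rust
/// Sketch of the value-length dispatch described above: under a leaf key,
/// a 32-byte value points at the tree's root; any other length means the
/// serialized tree was stored directly (the single-leaf case).
enum KvValue {
    RootPointer([u8; 32]),
    SerializedTree(Vec<u8>),
}

fn classify_value(bytes: Vec<u8>) -> KvValue {
    if bytes.len() == 32 {
        let mut root = [0u8; 32];
        root.copy_from_slice(&bytes);
        KvValue::RootPointer(root)
    } else {
        KvValue::SerializedTree(bytes)
    }
}
```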
SQL Store — Stamper (SQLite)
Purpose: Track attestation lifecycle and transaction attempts.
Schema:
```sql
-- Pending attestation records
CREATE TABLE pending_attestations (
    id         INTEGER PRIMARY KEY,
    trie_root  TEXT NOT NULL,
    created_at INTEGER NOT NULL,
    updated_at INTEGER NOT NULL,
    result     TEXT NOT NULL DEFAULT 'pending'
    -- result: 'pending' | 'success' | 'max_attempts_exceeded'
);

-- Individual transaction attempts
CREATE TABLE attest_attempts (
    id             INTEGER PRIMARY KEY,
    attestation_id INTEGER NOT NULL REFERENCES pending_attestations(id),
    chain_id       TEXT NOT NULL,
    tx_hash        TEXT,
    block_number   TEXT,
    created_at     INTEGER NOT NULL
);
```
Encoding: large integers and 32-byte hashes are stored as text (hex or decimal strings) via a `TextWrapper<T>` pattern to keep rows human-readable. The performance impact is minimal — an acceptable trade-off for easier debugging.
Relayer DB (SQLite)
Purpose: Event indexing, batch lifecycle management, and cost tracking.
The relayer database is more complex, with 10 tables across 3 migrations:
Indexer Tables
```text
indexer_cursors  -- Track last-indexed block per event type
eth_block        -- Block metadata for indexed events
eth_transaction  -- Transaction metadata
eth_log          -- Log metadata
```
These four tables form a normalized chain of custody: block → transaction → log → event-specific table.
Event Tables
```text
l1_anchoring_queued  -- L1AnchoringQueued events (user submissions)
l1_batch_arrived     -- L1BatchArrived events (cross-chain notifications)
l1_batch_finalized   -- L1BatchFinalized events (batch completions)
```
Batch Management
```text
l1_batch  -- Batch lifecycle state machine
-- Columns: start_index, count, root, l1_tx_hash, l2_tx_hash, status
-- Status: Collected → L1Sent → L1Mined → L2Received
--         → L2FinalizeTxSent → L2Finalized
```
Cost Tracking
```text
tx_receipt  -- Gas usage and pricing per transaction
batch_fee   -- Per-batch cost breakdown (L1 gas, L2 gas, cross-chain fee)
```
Why This Split?
| Concern | RocksDB | SQLite |
|---|---|---|
| High-throughput sequential writes | Excellent | Adequate |
| Point lookups by hash | Excellent | Good (with index) |
| Complex queries (JOINs, aggregations) | Not supported | Excellent |
| Relational integrity (foreign keys) | Not supported | Built-in |
| Schema evolution (migrations) | Manual | SQLx migrations |
RocksDB handles the hot path (journal writes, tree storage) where throughput matters. SQLite handles the metadata path (attestation tracking, event indexing) where query flexibility matters.
This split avoids forcing either technology into a role it’s not designed for.
Security Considerations
This chapter covers the security properties and protections built into the UTS protocol across both the smart contract layer and the off-chain infrastructure.
Access Control
Smart Contract Roles
| Role | Contract | Privilege |
|---|---|---|
| `DEFAULT_ADMIN_ROLE` | L1Gateway, L2Manager, FeeOracle | Configure contract parameters, grant/revoke roles |
| `SUBMITTER_ROLE` | L1AnchoringGateway | Submit batches to L1 |
| `FEE_COLLECTOR_ROLE` | L2AnchoringManager | Withdraw accumulated fees |
| `UPDATER_ROLE` | FeeOracle | Update fee parameters |
Admin Transfer Delay
Both L1AnchoringGateway and L2AnchoringManager use OpenZeppelin’s AccessControlDefaultAdminRulesUpgradeable with a 3-day transfer delay for the admin role. This prevents:
- Instant admin takeover via compromised keys.
- Flash-loan governance attacks.
- Accidental admin transfers.
The FeeOracle uses the non-upgradeable variant with the same 3-day delay.
Reentrancy Protection
All state-modifying external functions use ReentrancyGuardTransient:
| Contract | Protected Functions |
|---|---|
| L1AnchoringGateway | submitBatch() |
| L2AnchoringManager | submitForL1Anchoring(), claimNFT(), withdrawFees() |
The transient variant (EIP-1153) uses transient storage for the reentrancy flag, saving gas compared to the traditional storage-based guard.
Cross-Chain Message Authentication
The L2AnchoringManager validates cross-chain messages with a two-layer check:
```solidity
// In notifyAnchored():
require(msg.sender == address(l2Messenger));
require(l2Messenger.xDomainMessageSender() == l1Gateway);
```
- `msg.sender` must be the L2 Scroll Messenger — prevents direct calls from arbitrary addresses.
- `xDomainMessageSender` must be the L1 Gateway — prevents spoofed messages from other L1 contracts.
Both conditions must hold simultaneously. This ensures that notifyAnchored can only be triggered by a legitimate cross-chain message originating from the authorized L1 gateway contract.
Merkle Proof Verification
The L2AnchoringManager independently verifies batch integrity during finalization:
```solidity
function finalizeBatch() external {
    // Reconstruct leaves from stored records
    bytes32[] memory leaves = new bytes32[](count);
    for (uint256 i = 0; i < count; i++) {
        leaves[i] = indexToRecords[startIndex + i].root;
    }
    // Verify against claimed root
    require(MerkleTree.verify(leaves, claimedRoot));
}
```
This prevents a malicious relayer from submitting a fraudulent Merkle root that doesn’t match the actual queued entries. The contract uses its own stored data (not relayer-provided data) to reconstruct the tree.
Sequential Batch Ordering
The L2AnchoringManager enforces strict sequential batch ordering:
```solidity
require(startIndex == confirmedIndex);
require(pendingBatch.count == 0); // No overlapping batches
```
This prevents:
- Gap attacks: skipping queue entries to exclude specific timestamps.
- Overlap attacks: double-processing entries across multiple batches.
- Reorder attacks: processing entries out of order.
Input Validation
Batch Size Bounds
```solidity
uint256 constant MAX_BATCH_SIZE = 512;
require(count >= 1 && count <= MAX_BATCH_SIZE);
```
Gas Limit Bounds
```solidity
uint256 constant MIN_GAS_LIMIT = 110_000;
uint256 constant MAX_GAS_LIMIT = 200_000;
require(gasLimit >= MIN_GAS_LIMIT && gasLimit <= MAX_GAS_LIMIT);
```
Address Zero Checks
All address setters validate against address(0) to prevent accidental misconfiguration.
Attestation Immutability
Submitted attestations are verified to be non-revocable and non-expiring:
```solidity
require(attestation.expirationTime == 0);
require(!attestation.revocable);
```
This ensures that once an attestation is used for L1 anchoring, it cannot be invalidated by the attester.
Compare-and-Set Status Transitions
The relayer’s batch state machine uses compare-and-set (CAS) semantics in SQL:
```sql
UPDATE l1_batch
SET status = ?new_status, updated_at = ?now
WHERE id = ?id AND status = ?expected_status;
```
The indexer updates a batch's status as soon as the corresponding event is indexed, while the engine updates it after observing a transaction receipt. Because the event can arrive just before the receipt is processed, CAS ensures that a stale writer's update matches zero rows instead of regressing the newer state.
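The race and its resolution can be modeled with an in-memory map (an illustrative sketch; the real check is the SQL `WHERE` clause above):

```rust
use std::collections::HashMap;

/// In-memory model of the compare-and-set transition: the update succeeds
/// only if the row still holds the expected status, so a stale writer
/// fails instead of clobbering a newer state.
fn cas_status(
    rows: &mut HashMap<u64, String>,
    id: u64,
    expected: &str,
    new_status: &str,
) -> bool {
    match rows.get(&id) {
        Some(current) if current == expected => {
            rows.insert(id, new_status.to_string());
            true
        }
        _ => false,
    }
}
```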
Fail-Fast Error Handling
The journal implements a fatal error flag:
```rust
fatal_error: AtomicBool
```
Once set, all journal operations immediately return Error::Fatal. The calendar server initiates graceful shutdown rather than risk silent data corruption. This is preferable to attempting recovery from an unknown state.
EAS Contract Addresses
UTS uses well-known, audited EAS contract addresses per chain:
| Chain | Address |
|---|---|
| Mainnet | 0xA1207F3BBa224E2c9c3c6D5aF63D0eb1582Ce587 |
| Scroll | 0xC47300428b6AD2c7D03BB76D05A176058b47E6B0 |
| Scroll Sepolia | 0xaEF4103A04090071165F78D45D83A0C0782c2B2a |
| Sepolia | 0xC47300428b6AD2c7D03BB76D05A176058b47E6B0 |
These are hardcoded via a compile-time perfect hash map (phf_map), preventing runtime misconfiguration.
Appendix A: Beacon Injector
The beacon injector (uts-beacon-injector) is an auxiliary service that injects drand randomness beacons into the UTS timestamping pipeline, providing a continuous stream of publicly verifiable, unpredictable timestamps.
What is drand?
drand (distributed randomness) is a decentralized randomness beacon that produces publicly verifiable, unbiased, and unpredictable random values at regular intervals. A network of independent nodes runs a distributed key generation protocol and produces BLS threshold signatures on sequential round numbers.
Each beacon round produces:
- A round number (monotonically increasing).
- A BLS signature over the round number (the randomness).
The signature is deterministic for a given round — once the round is produced, anyone can verify it using the beacon’s public key.
Why Inject Randomness?
Injecting drand beacons into UTS serves two purposes:
- Liveness proof: A continuous stream of timestamps proves the system is operational. If beacon timestamps stop appearing, it signals a service disruption.
- Unpredictable anchoring: Since drand outputs are unpredictable before they are produced, timestamping them proves the system was operational at that specific moment — the timestamp could not have been pre-computed.
Beacon Periods and Rounds
The injector discovers available drand networks and their periods:
GET {drand_base_url}/v2/beacons → list of networks
GET {drand_base_url}/v2/beacons/{net}/info → { period: u64 }
For each network, a separate task polls for new rounds at the network’s period interval:
GET {drand_base_url}/v2/beacons/{net}/rounds/latest → { round, signature }
If the round number hasn’t changed since the last poll, the iteration is skipped.
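Stripped of the HTTP and timer machinery, the per-iteration decision is just a round-number comparison. A sketch, with `should_process` as an illustrative name:

```rust
/// Track the last round seen by a poll loop; process a fetched round
/// only when its number has changed since the previous poll.
fn should_process(last_round: &mut Option<u64>, fetched: u64) -> bool {
    if *last_round == Some(fetched) {
        return false; // same round as last time: skip this iteration
    }
    *last_round = Some(fetched);
    true
}

fn main() {
    let mut last = None;
    // A freshly seen round is processed once...
    assert!(should_process(&mut last, 41));
    // ...re-fetching the same round is a no-op...
    assert!(!should_process(&mut last, 41));
    // ...and the next round is processed again.
    assert!(should_process(&mut last, 42));
}
```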
How It Submits Attestations
For each new drand round, the injector:
1. Hash the Beacon Signature
#![allow(unused)]
fn main() {
let hash = keccak256(&randomness.signature);
}
2. Submit to Calendar Server
The hash is posted to the calendar server’s /digest endpoint, entering the normal calendar timestamping pipeline:
#![allow(unused)]
fn main() {
// Async: submit to calendar
request_calendar(hash).await;
}
3. Submit for L1 Anchoring
In parallel, hashes are collected over a 5-second window and batched:
#![allow(unused)]
fn main() {
// Collect hashes for 5 seconds
let batch_hash = keccak256(collected_hashes);
// Create EAS attestation
let uid = eas.attest(batch_hash).send().await?;
// Get fee with 10% buffer
let fee = fee_oracle.getFloorFee() * 110 / 100;
// Submit for L1 anchoring
l2_manager.submitForL1Anchoring(uid).value(fee).send().await?;
}
This dual submission ensures the beacon data is timestamped via both:
- Pipeline A: Calendar timestamping (fast, L2-only).
- Pipeline B: L1 anchoring (slower, L1 finality).
Multi-Chain Deployment
The injector supports multiple drand networks simultaneously. Each network runs its own polling task, and all hashes flow into the same calendar and L1 anchoring pipeline.
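The fan-in shape (one task per network, one shared downstream pipeline) can be sketched with threads and a channel standing in for the real async tasks and submission calls; network names and round counts are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn one producer per network and count everything that arrives on
/// the shared channel. Threads stand in for the injector's async tasks.
fn fan_in(networks: &[&'static str], rounds_per_net: u64) -> usize {
    let (tx, rx) = mpsc::channel::<(&'static str, u64)>();
    for &net in networks {
        let tx = tx.clone();
        thread::spawn(move || {
            for round in 1..=rounds_per_net {
                // In the real service this would be "hash + submit".
                tx.send((net, round)).unwrap();
            }
        });
    }
    drop(tx); // each producer holds its own clone; drop the original
    rx.iter().count() // closes once every producer has finished
}

fn main() {
    // Two networks, three rounds each: the single downstream pipeline
    // sees all six submissions regardless of which task produced them.
    assert_eq!(fan_in(&["net-a", "net-b"], 3), 6);
}
```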
Configuration
#![allow(unused)]
fn main() {
pub struct AppConfig {
    pub blockchain: BlockchainConfig,
    pub injector: InjectorConfig,
}

pub struct BlockchainConfig {
    pub eas_address: Address,
    pub manager_address: Address,
    pub fee_oracle_address: Address,
    pub rpc: RpcConfig,
    pub wallet: WalletConfig,
}

pub struct RpcConfig {
    pub l2: String,
    // ...
}

pub struct WalletConfig {
    pub mnemonic: String,
    pub index: u32,
}

pub struct InjectorConfig {
    pub drand_base_url: Url,
    pub calendar_url: Url,
}
}
The service connects to:
- The drand HTTP API for beacon data.
- The calendar server for L2 timestamping.
- The L2 blockchain (via RPC) for EAS attestations and L1 anchoring submissions.