By Albert Dadon, Founder & CEO, AEREDIUM Holdings Inc.
In 2024, over $3.8 billion was stolen from blockchain platforms. Not through sophisticated zero-day exploits or quantum computing attacks. Through something far simpler: compromised private keys.
Every blockchain ever built — Bitcoin, Ethereum, Solana, all of them — relies on a single elliptic curve private key to authorise transactions. One key. Whoever holds that key controls everything. One compromised hardware wallet, one rogue insider, one phished seed phrase — and the funds are gone. Irreversibly.
This is not a bug in any particular implementation. It is the fundamental architecture of every blockchain in existence. And it may be the single biggest unsolved problem in the digital asset industry.
I believe we have found a path to solving it. We are early in this journey, and what follows is the story of how we got here — not a claim of victory, but an account of a discovery and the direction it has taken us.
In 2013, five researchers from the University of Lisbon, the Federal University of Santa Catarina in Brazil, and Stefanini IT Solutions published a paper in the IEEE Transactions on Computers titled "Efficient Byzantine Fault-Tolerance." The authors — Giuliana Santos Veronese, Miguel Correia, Alysson Neves Bessani, Lau Cheuk Lung, and Paulo Veríssimo — were working on a problem in distributed systems theory.
The problem was this: in traditional Byzantine Fault Tolerant (BFT) consensus protocols like PBFT (Practical Byzantine Fault Tolerance, introduced by Castro and Liskov in 1999), you need at least 3f+1 servers to tolerate f faulty ones. So to survive one bad actor, you need four servers. To survive two, you need seven. This is expensive and complex.
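The quorum arithmetic is simple enough to sketch directly. The snippet below is purely illustrative (the function names are mine, not from any of the papers discussed here); it just tabulates the replica counts the two thresholds imply:

```python
# Illustrative only: minimum replica counts for tolerating f Byzantine faults.

def replicas_classic(f: int) -> int:
    """Classic BFT (e.g. PBFT) needs 3f+1 replicas."""
    return 3 * f + 1

def replicas_trusted(f: int) -> int:
    """Hybrid protocols with a trusted component need only 2f+1."""
    return 2 * f + 1

for f in (1, 2, 3):
    print(f"f={f}: classic needs {replicas_classic(f)}, "
          f"trusted-component needs {replicas_trusted(f)}")
# f=1: classic needs 4, trusted-component needs 3
# f=2: classic needs 7, trusted-component needs 5
# f=3: classic needs 10, trusted-component needs 7
```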
The Lisbon team’s insight was elegant. They asked: what if each server had a tiny tamper-proof component — a monotonic counter that only goes up — that signed every consensus message with a unique sequential number? If the hardware guarantees that a server can never sign two different messages with the same counter value, then double-signing (equivocation) becomes cryptographically impossible. Not economically punished. Not detected after the fact. Impossible.
They called this component a USIG — Unique Sequential Identifier Generator. And they proved that with USIG, you could reduce the fault tolerance threshold from 3f+1 to just 2f+1. Three servers instead of four. Five instead of seven. The math was clean, the proof was rigorous, and it was a genuine contribution to the field.
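To make the mechanism concrete, here is a minimal USIG-style sketch. It is not the paper's implementation: a real USIG lives inside tamper-proof hardware and emits asymmetric signatures, whereas here a stdlib HMAC over (counter, message) stands in for the certificate, and the class and method names are my own.

```python
import hashlib
import hmac

class USIG:
    """Toy Unique Sequential Identifier Generator (illustrative sketch)."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key   # would be sealed inside the hardware
        self._counter = 0        # monotonic: only ever increments

    def create_ui(self, message: bytes) -> tuple[int, bytes]:
        """Bind `message` to the next counter value.

        The counter advances before the certificate is released, so no
        two messages can ever carry the same value: equivocation is
        structurally ruled out, not merely detectable.
        """
        self._counter += 1
        cert = hmac.new(self._key,
                        self._counter.to_bytes(8, "big") + message,
                        hashlib.sha256).digest()
        return self._counter, cert

    def verify_ui(self, counter: int, message: bytes, cert: bytes) -> bool:
        """Check that `cert` binds `message` to `counter`."""
        expected = hmac.new(self._key,
                            counter.to_bytes(8, "big") + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, cert)

usig = USIG(b"sealed-device-key")
c1, cert1 = usig.create_ui(b"PREPARE block 42")
c2, cert2 = usig.create_ui(b"PREPARE conflicting block 42")
assert c1 != c2  # two messages can never share a counter value
```

Consensus messages carrying such certificates can be ordered and checked for gaps, which is what lets the protocol drop from 3f+1 to 2f+1 replicas.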
The researchers were solving the problem in front of them — making distributed clusters more efficient. That was the world they were working in. The blockchain industry as we know it today, with its billions in stolen assets and its structural key management failures, did not exist when they wrote their paper.
The USIG concept did not emerge from nowhere. It stands on the shoulders of earlier work.
In 2009, four researchers at Microsoft Research — Dave Levin, John R. Douceur, Jacob R. Lorch, and Thomas Moscibroda — published a paper at USENIX NSDI titled "TrInc: Small Trusted Hardware for Large Distributed Systems." TrInc (Trusted Incrementer) proposed something deceptively simple: a tiny hardware component consisting of nothing more than a monotonic counter and a cryptographic key. The counter only goes up. The key signs attestations binding messages to counter values. Once you’ve bound a message to counter value 7, you can never bind a different message to value 7. Equivocation is structurally eliminated.
Levin et al. demonstrated TrInc’s versatility with case studies in BitTorrent (preventing free-riding), PeerReview (accountability in distributed systems), and append-only memory. Their contribution was showing that this trivially simple primitive — a counter and a key — has remarkably broad applicability.
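The TrInc primitive differs from the USIG's strict sequencing in one respect worth showing: the counter may advance by any amount, and the attestation covers the jump itself. The sketch below is a simplification of my own (Levin et al.'s device signs its attestations and holds many counters; error handling here is invented for illustration):

```python
import hashlib

class TrInc:
    """Toy Trusted Incrementer: one monotonic counter (illustrative sketch)."""

    def __init__(self):
        self.counter = 0  # never decreases

    def attest(self, new_counter: int, message_hash: bytes):
        """Advance to `new_counter`, binding `message_hash` to the move.

        Every value up to `new_counter` is consumed by the advance, so a
        second, conflicting binding at any of those values is impossible.
        """
        if new_counter <= self.counter:
            raise ValueError("counter must strictly advance")
        statement = (self.counter, new_counter, message_hash)
        self.counter = new_counter
        return statement  # a real device would sign this tuple

t = TrInc()
h = hashlib.sha256(b"chunk 7 of the file").digest()
t.attest(7, h)           # binds this hash to the advance ending at 7
# t.attest(7, other_h)   # would raise: value 7 is spent forever
```

Verifiers who see attestations with a gap between the old and new counter values know something was skipped, which is how the BitTorrent and PeerReview case studies catch misbehaviour.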
TrInc itself built on even earlier work: Attested Append-Only Memory (A2M) by Chun, Maniatis, Shenker, and Kubiatowicz, which proposed a trusted log facility for BFT systems. The TrInc team explicitly designed their system as a simpler, smaller alternative — arguing that protocol designers would be reluctant to assume the availability of a full trusted log, but might accept a mere counter.