Token Engineering Case Studies
Analysis of Bitcoin, Design of Ocean Protocol. TE Series Part III.
In previous articles, I described why we need to get incentives right when we build tokenized ecosystems; and introduced ideas towards a practice of token engineering. We can use these tools to help analyse existing tokenized ecosystems, and design new ones. This article does exactly that with case studies in (1) analysis of Bitcoin, and (2) design of Ocean Protocol. Let’s get started!
2. Case Study: Analysis of Bitcoin
We’ve discussed how best practices from optimization can apply to token design. Let’s put this into practice by framing Bitcoin through the lens of optimization design. In particular, let’s focus on Bitcoin’s objective function.
Its objective function is: maximize the security of its network. It then defines “security” as compute power (hash rate), which makes it expensive to roll back changes to the transaction log. Its block reward function manifests the objective, by giving block reward tokens (BTC) to people who improve the network’s compute power.
We can write the formula for the objective function (block reward function) as: E(R_i) ∝ h_i · T. On the left side is the expected amount of token rewards R that actor i can expect in a block interval. The right side of the equation is proportional (∝) to the left, and is the product of the compute power (hash rate) h_i of actor i and the number of tokens dispensed every block, T. The latter value is currently 12.5 BTC every ten minutes. Every four years that value halves.
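This expected reward can be sketched in a few lines of code. A minimal sketch, with notation assumed from the description above (not from the original figure): an actor's expected reward is their share of total hash power times the block subsidy.

```python
def expected_block_reward(hash_rate_i: float,
                          total_hash_rate: float,
                          tokens_per_block: float = 12.5) -> float:
    """Expected BTC per block interval for actor i.

    The expected value is the actor's fraction of total network hash
    power, multiplied by the tokens dispensed per block (T).
    """
    share = hash_rate_i / total_hash_rate  # actor i's fraction of hash power
    return share * tokens_per_block

# An actor contributing 1% of network hash power expects 1% of each
# 12.5 BTC block reward, i.e. 0.125 BTC per block interval.
print(expected_block_reward(1.0, 100.0))  # → 0.125
```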
Aside: Trading Variance for Efficiency
Notice that the reward is in terms of expected value, E(). This means that each user doesn’t necessarily receive a block reward every interval. Rather, in Bitcoin it’s quite lumpy: just a single user is rewarded in each block interval. But since their chance of getting the reward is proportional to the hash rate they’ve contributed, their expected value is indeed proportional to the amount contributed. The Orchid team calls this probabilistic micro-payments.
Why would Bitcoin have this lumpiness (high variance), rather than award every player at every interval (low variance)? Here are some benefits:
- It doesn’t need to track how much each user contributed. Therefore lower compute, and lower bandwidth.
- It doesn’t need to send BTC to each user at each interval. Therefore far fewer transactions, and lower bandwidth. An efficiency tweak!
- By not needing the first two, the system can be far simpler, which minimizes the attack surface. Therefore simpler, and more secure.
These are significant benefits. The biggest negative is the higher variance: to have any real chance of winning anything at all you need significant hash rate, though if you do win, you win big. However, this higher variance is mitigated at a higher level by mining pools, which have the direct effect of reducing variance for their participants. This is cool because it means that Bitcoin itself doesn’t even need to address it directly. As usual, we keep learning from Satoshi:)
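The variance-for-efficiency trade described above is easy to check numerically. Here is a small simulation sketch (names and parameters are illustrative, not from the original) comparing the lumpy winner-take-all scheme against a smooth pro-rata payout; both pay out the same total, and per-miner totals converge as the number of rounds grows.

```python
import random

def simulate(rounds: int, shares: list[float],
             lottery: bool = True, reward: float = 12.5,
             seed: int = 0) -> list[float]:
    """Total payout per miner over many block intervals.

    lottery=True  -- lumpy: one winner per block, chosen with
                     probability proportional to hash share.
    lottery=False -- smooth: every miner paid pro-rata each block.
    Both schemes have the same expected value per miner.
    """
    rng = random.Random(seed)
    totals = [0.0] * len(shares)
    for _ in range(rounds):
        if lottery:
            winner = rng.choices(range(len(shares)), weights=shares)[0]
            totals[winner] += reward
        else:
            for i, s in enumerate(shares):
                totals[i] += s * reward
    return totals

# With shares [0.5, 0.3, 0.2], over many rounds the lumpy totals
# approach the smooth (pro-rata) totals.
print(simulate(100_000, [0.5, 0.3, 0.2], lottery=True))
print(simulate(100_000, [0.5, 0.3, 0.2], lottery=False))
```

The smooth scheme needs a payout per miner per block; the lottery needs just one, which is exactly the bandwidth and simplicity win listed above.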
Success of Bitcoin’s Incentives?
How well does Bitcoin do towards its objective function of maximizing security? The answer: incredibly well! From this simple function, Bitcoin has incentivized people to spend hundreds of millions of dollars designing custom hashing ASICs and building ASIC mining farms. Others have created mining pools with thousands of participants. The network’s hash rate is now greater than that of all supercomputers combined. Its electricity usage is greater than that of most small countries, and on track to overtake the USA’s by July 2019. All in pursuit of Bitcoin token block rewards! (Not all of it is good, obviously.)
Besides the ASIC farms and mining pools, we’ve also seen a whole ecosystem emerge around Bitcoin. Software wallets, hardware wallets, core developers, app developers, countless Reddit threads, conferences, and more. Driving much of it is BTC token holders incentivized to spread the word about their token.
What’s driven all of this is the block rewards that manifest the objective function. That’s the power of incentives. You called it, Charlie:)
4. Case Study: Design of Ocean Protocol
When we first started doing serious token design for Ocean Protocol in May 2017, we found ourselves struggling. We hadn’t formulated the goals (objectives and constraints) and instead were simply looking at plug-and-play patterns like decentralized marketplaces. But then we asked: how does this help the data commons? It didn’t. Does this need its own token? It didn’t. And there were other issues.
So, we took a step back and gave ourselves the goal of writing proper objectives and constraints. Then, things started to go smoother. With those goals written down, we tried other plug-and-play patterns (solvers). We found new issues that the goals didn’t reflect, so we updated the goals. We kept looping in this iterative process. It didn’t take long before we’d exhausted existing plug-and-play patterns, so we had to design our own; and we iterated on those.
After doing this for a while, we realized that we had been applying the optimizer design approach to token design! That is: formulate the problem, try using existing patterns; and if needed then develop your own. So while this blog post lays out the token design process as a fait accompli, in reality we discovered it as we were doing it. We’ve actually used this methodology for other token designs since, to help out friends in their projects.
4.2 Ocean Problem Formulation
Recall that the objective function is about getting people to do stuff. So, we must first decide who those people are. We must define the possible stakeholders or system agents. The following table outlines the key ones for Ocean token dynamics.
Objective function. After the iterations described above, we arrived at an objective function of: maximize the supply of relevant AI data & services. This means to incentivize supply of not only high-quality priced data, but also high-quality commons data; and compute services around this (e.g. for privacy).
Constraints. In the iterations described above, we used this checklist when considering various designs. Roughly speaking, we can think of these as constraints.
- For priced data, is there incentive for supplying more? Referring? Good spam prevention?
- For commons (free) data, is there incentive for supplying more? Referring? Good spam prevention?
- Does the token give higher marginal value to users of the network versus external investors?
- <and more>
Besides these questions, we continually polled others about possible attacks; added each new concern to the list of constraints to solve for (giving each a memorable name); and updated the design to handle it. New constraints included: “Data Escapes”, “Curation Clones”, “Elsa & Anna Attack”, and more. The FAQs section of the Ocean whitepaper documents these, and how we addressed them.
4.3 Exploring the Design Space
We tried a variety of designs that combined token patterns in various ways; and tested each design (in thought experiments) against the constraints listed above. Some that we tried:
- Just a decentralized marketplace. Fail: doesn’t incentivize commons data.
- Just a TCR for actors (like adChain). Fail: can’t handle spam data.
- Just a TCR for data/services. Fail: can’t handle Data Escapes.
- A TCR for actors and a TCR for data/services. Fail: can’t distinguish non-spam data/services from relevant ones.
- A TCR for actors and a Curation Market (CM) for data/services. Fail: no incentives to make data/services available.
And more, such as various riffs on governance and reputation systems. Finally, we arrived at one that met our goals: TCR for actors, and Proofed Curation Market (PCM) for data/services. The next section elaborates.
4.4 A New Token Pattern for Ocean: Proofed Curation Markets
Ocean’s objective function is to maximize the supply of relevant AI data & services.
To manifest this, we must acknowledge that we can’t objectively measure what is “high quality”. To solve this problem, Ocean leaves curation to the crowd: users must “put their money where their mouth is” by betting on what they believe will be the most popular datasets, using a Curation Market setting.
Then we needed to reconcile signals for quality data with making data available. We resolved that by binding the two together: predicted popularity versus actual (proven) popularity. A user is awarded tokens only if both of the following hold:
- They have predicted a dataset’s popularity in a Curation Market setting. This is the Predicted Popularity.
- They have provably made the dataset/service available when requested. By definition, the more popular it is, the more requests there are. This is the Proofed Popularity.
Together, these form what we call a Proofed Curation Market (PCM). In a PCM, the curation market and the proof are tightly bound: the proof gives teeth to the curation, to make curation more action-oriented; in turn, the curation gives signals for quality to the proof. PCMs are a new addition to our growing list of token design building blocks:)
Ocean’s token rewards function is: E(R_ij) ∝ S_ij · D_j · T · R_i.
The first term on the right hand side, S_ij, reflects actor i’s belief in the popularity of dataset/service j (Predicted Popularity). The second term, D_j, reflects the proven popularity of the dataset/service (Proofed Popularity). The third term, T, is the number of tokens doled out during that interval. The fourth term, R_i, mitigates one particular attack vector. The expected reward function E() is implemented similarly to Bitcoin’s. The Ocean whitepaper elaborates on how this reward function works.
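The reward function above can be sketched as code. This is a hypothetical sketch assuming a simple product form; the whitepaper’s exact functional form may differ (e.g. normalized or log-scaled terms), and all names here are illustrative.

```python
def expected_reward(stake_ij: float, served_j: float,
                    tokens_per_interval: float, ratio_i: float) -> float:
    """Expected (unnormalized) tokens for actor i on dataset/service j.

    stake_ij            -- S_ij: actor i's curation-market stake in j
                           (Predicted Popularity signal)
    served_j            -- D_j: proven serve/download count for j
                           (Proofed Popularity signal)
    tokens_per_interval -- T: tokens doled out during the interval
    ratio_i             -- R_i: attack-mitigation factor, assumed in [0, 1]
    """
    # Product form: reward grows only when an actor both bets on a
    # dataset (S_ij > 0) AND provably serves it (D_j > 0). Either
    # signal alone earns nothing, which is the PCM binding.
    return stake_ij * served_j * tokens_per_interval * ratio_i
```

Note the key PCM property: because the terms multiply rather than add, curation without availability (or availability without curation) yields zero expected reward.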
This article gave case studies on using token engineering tools to analyze bitcoin and to design Ocean Protocol.
Appendix: Related Articles & Media
This article is part of a series:
- Part I. Can Blockchains Go Rogue? AI Whack-A-Mole, Incentive Machines, and Life.
- Part II. Towards a Practice of Token Engineering: Methodology, Patterns & Tools.
- [this article] Part III. Token Engineering Case Studies: Analysis of Bitcoin, Design of Ocean Protocol.
I gave a talk about much of this content in Berlin in Feb 2018. Here are the slides and video. I gave a related talk about complex systems at Santa Fe Institute, New Mexico, in Jan 2018. Here are the slides and video from that talk.
Thanks to the following people for reviews of this and other articles in the series: Ian Grigg, Alex Lange, Simon de la Rouviere, Dimitri de Jonghe, Luis Cuende, Ryan Selkis, Kyle Samani, and Bill Mydlowec. Thanks to many others for conversations that influenced this too, including Anish Mohammed, Richard Craib, Fred Ehrsam, David Krakauer, Troy McConaghy, Thomas Kolinko, Jesse Walden, Chris Burniske, and Ben Goertzel. And thanks to the entire blockchain community for providing a substrate that makes token design possible:)
Appendix: Related Efforts
Here are some updates since the initial publication.
- I learned that Slava and Billy from Relevant came up with a mechanism similar to Proofed Curation Markets for Relevant. Cool! Then Vitalik started to discover the joys of proofs * curation markets too.