Token Engineering Case Studies

Analysis of Bitcoin, Design of Ocean Protocol. TE Series Part III.

Trent McConaghy
Ocean Protocol

--

1. Introduction

In previous articles, I described why we need to get incentives right when we build tokenized ecosystems; and introduced ideas towards a practice of token engineering. We can use these tools to help analyze existing tokenized ecosystems, and design new ones. This article does exactly that with case studies in (1) analysis of Bitcoin, and (2) design of Ocean Protocol. Let's get started!

2. Case Study: Analysis of Bitcoin

We've discussed how best practices from optimization can apply to token design. Let's put this into practice by framing Bitcoin through the lens of optimization design. In particular, let's focus on the objective function for Bitcoin.

Its objective function is: maximize the security of its network. It then defines “security” as compute power (hash rate), which makes it expensive to roll back changes to the transaction log. Its block reward function manifests the objective, by giving block reward tokens (BTC) to people who improve the network’s compute power.

We can write the formula for the objective function (block reward function) as follows. On the left side is the expected amount of token rewards R that actor i receives in a block interval. The right side of the equation is proportional (∝) to the left, and is the product of the compute power (hash rate) of actor i and the number of tokens T dispensed every block. The latter value is currently 12.5 BTC every ten minutes; every four years that value halves.
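Written out, with h_i denoting the hash rate of actor i:

E(R_i) ∝ h_i · T

where E() denotes expected value, discussed next.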

Aside: Trading Variance for Efficiency

Notice that the reward is in terms of expected value, E(). This means that each user doesn't necessarily receive a block reward every interval. Rather, in Bitcoin, it's quite lumpy: just a single user is awarded in each block interval. But since a user's chance of winning the reward is proportional to the hash rate they've contributed, their expected value does indeed match their contribution. The Orchid team calls this probabilistic micro-payments.

Why would Bitcoin have this lumpiness (high variance), rather than award every player at every interval (low variance)? Here are some benefits:

  • It doesn’t need to track how much each user contributed. Therefore lower compute, and lower bandwidth.
  • It doesn’t need to send BTC to each user at each interval. Therefore far fewer transactions, and lower bandwidth. An efficiency tweak!
  • In not needing the first two, the system can be far simpler, which minimizes the attack surface. Therefore simpler, and more secure.

These are significant benefits. The biggest negative is the higher variance: to have any real chance of winning anything at all, you need significant hash rate; though if you do win, you win big. However, this higher variance is mitigated simply by mining pools, a higher-level construct whose direct effect is to reduce variance. This is cool because it means that Bitcoin doesn't even need to do that directly. As usual, we keep learning from Satoshi:)
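To make the variance-versus-expectation point concrete, here's a minimal simulation sketch (my own illustration; the miner names and numbers are made up). Each interval, one winner takes the whole reward T, chosen with probability proportional to hash rate. Over many intervals, each miner's average reward per block converges to its hash-rate share times T:

```python
import random

# Winner-takes-all block rewards: lumpy per block, smooth in expectation.
shares = {"alice": 0.5, "bob": 0.3, "carol": 0.2}  # made-up hash-rate shares
T = 12.5          # BTC dispensed per block interval (value at time of writing)
blocks = 100_000  # number of simulated block intervals

totals = {name: 0.0 for name in shares}
for _ in range(blocks):
    # One winner per interval, chosen with probability ~ hash-rate share.
    winner = random.choices(list(shares), weights=list(shares.values()))[0]
    totals[winner] += T

for name, share in shares.items():
    print(f"{name}: avg {totals[name] / blocks:.3f} BTC/block, "
          f"expected {share * T:.3f}")
```

Each individual interval pays one miner 12.5 BTC and everyone else zero (high variance), yet the long-run averages line up with hash-rate share × T, which is exactly the E() in the formula above.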

Success of Bitcoin’s Incentives?

How well does Bitcoin do towards its objective function of maximizing security? The answer: incredibly well! From this simple function, Bitcoin has incentivized people to spend hundreds of millions of dollars designing custom hashing ASICs and building ASIC mining farms. Others are creating mining pools with thousands of participants. The network hash rate is now greater than that of all supercomputers combined. Electricity usage is greater than that of many small countries and, by some estimates, on track to overtake the USA by July 2019. All in pursuit of Bitcoin token block rewards! (Not all of it is good, obviously.)

It started with a simple block rewards equation. Yet all sorts of complexities have emerged, including mining farms. [Image: Wikimedia Commons]

Besides the ASIC farms and mining pools, we've also seen a whole ecosystem emerge around Bitcoin: software wallets, hardware wallets, core developers, app developers, countless Reddit threads, conferences, and more. Driving much of it are BTC token holders, incentivized to spread the word about their token.

What’s driven all of this is the block rewards that manifest the objective function. That’s the power of incentives. You called it, Charlie:)

3. Case Study: Design of Ocean Protocol


3.1 Introduction

When we first started doing serious token design for Ocean Protocol in May 2017, we found ourselves struggling. We hadn't formulated the goals (objectives and constraints) and instead were simply looking at plug-and-play patterns like decentralized marketplaces. But then we asked: how does this help the data commons? It didn't. Did this need its own token? It didn't. And there were other issues.

So, we took a step back and gave ourselves the goal of writing proper objectives and constraints. Then things started to go more smoothly. With those goals written down, we tried other plug-and-play patterns (solvers). We found new issues that the goals didn't reflect, so we updated the goals. We kept looping in this iterative process. It didn't take long before we'd exhausted existing plug-and-play patterns, so we had to design our own; and we iterated on those.

After doing this for a while, we realized that we had been applying the optimizer design approach to token design! That is: formulate the problem, try using existing patterns; and if needed then develop your own. So while this blog post lays out the token design process as a fait accompli, in reality we discovered it as we were doing it. We’ve actually used this methodology for other token designs since, to help out friends in their projects.

3.2 Ocean Problem Formulation

Recall that the objective function is about getting people to do stuff. So, we must first decide who those people are. We must define the possible stakeholders or system agents. The following table outlines the key ones for Ocean token dynamics.

[Table: key stakeholders / system agents in Ocean token dynamics]

Objective function. After the iterations described above, we arrived at an objective function of: maximize the supply of relevant AI data & services. This means to incentivize supply of not only high-quality priced data, but also high-quality commons data; and compute services around this (e.g. for privacy).

Constraints. In the iterations described above, we used this checklist when considering various designs. Roughly speaking, we can think of these as constraints.

  • For priced data, is there incentive for supplying more? Referring? Good spam prevention?
  • For commons (free) data, is there incentive for supplying more? Referring? Good spam prevention?
  • Does the token give higher marginal value to users of the network versus external investors?
  • <and more>

Besides these questions, we continually polled others about possible attacks; added each new concern to the list of constraints to solve for (giving each a memorable name); and updated the design to handle it. New constraints included: "Data Escapes", "Curation Clones", "Elsa & Anna Attack", and more. The FAQs section of the Ocean whitepaper documents these, and how we addressed them.

3.3 Exploring the Design Space

We tried a variety of designs that combined token patterns in various ways; and tested each design (in thought experiments) against the constraints listed above. Some that we tried:

  1. Just a TCR for actors (like adChain). Fail: can’t handle spam data.
  2. Just a TCR for data/services. Fail: can’t handle Data Escapes.
  3. A TCR for actors and a TCR for data/services. Fail: can’t distinguish non-spam data/services from relevant ones.
  4. A TCR for actors and a Curation Market (CM) for data/services. Fail: no incentives to make data/services available.

Here's how each candidate design fared against the checklist. Each had at least one major fail.

[Table: designs 1–4 versus the constraints checklist]

We needed to resort to step 3 of the methodology: design our own building block. What emerged was a Curated Proofs Market (CPM; the next section has details), a small-as-possible extension of a CM. We tried it in two new design options:

  5. Data registry + free-data CPM. Curation: stake tokens as belief in reputation. Auto CDN.
  6. Actor registry + free & priced CPM. Curation: stake tokens as belief in reputation. Auto CDN. “Proofed Curation Market”.

The following table adds the two new designs in the far-right columns.

[Table: designs 1–6 versus the constraints checklist]

We see that design 6 met our goals! This is critical: it meant that we could stop the current design process (at least for the time being).

3.4 A New Token Pattern for Ocean: Curated Proofs Markets

Ocean’s objective function is to maximize the supply of relevant AI data & services.

To manifest this, we must acknowledge that we can't objectively measure what is “high quality”. To solve this problem, Ocean leaves curation to the crowd: users must “put their money where their mouth is” by betting on what they believe will be the most popular datasets, in a Curation Market setting.

Then we needed to reconcile signals for quality data with making data available. We resolved that by binding the two together: predicted popularity versus actual (proven) popularity. A user is awarded tokens only if both of the following hold:

  1. They have predicted a dataset’s popularity in a Curation Market setting. This is the Predicted Popularity.
  2. They have provably made the dataset/service available when requested. By definition, the more popular it is, the more requests there are. This is the Proofed Popularity.

Together, these form what we call a Curated Proofs Market (CPM). In a CPM, the curation market and the proof are tightly bound: the proof gives teeth to the curation, to make curation more action-oriented; in turn, the curation gives signals for quality to the proof. CPMs are a new addition to our growing list of token design building blocks:)

The following equation describes Ocean’s token rewards function.
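Roughly, it is a product of four terms (I'm sketching the shape here; the whitepaper gives the exact functional form):

E(R_ij) ∝ S_ij · D_j · T · R_i

where R_ij is the reward that actor i can expect for dataset/service j in a given interval.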

The first term on the right-hand side, S_ij, reflects actor i's belief in the popularity of dataset/service j (Predicted Popularity). The second term, D_j, reflects the actual popularity of the dataset/service (Proofed Popularity). The third term, T, is the number of tokens doled out during that interval. The fourth term, R_i, mitigates one particular attack vector. The expected reward E() is implemented similarly to Bitcoin's: rewards are dispensed probabilistically. The Ocean whitepaper elaborates on how this reward function works.
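For intuition, here's a toy sketch in Python (my own illustration, not Ocean's shipped code; the function name, actors, datasets, and numbers are all made up). It splits T tokens per interval across (actor, dataset) stakes, weighting each stake by the dataset's proven delivery count, and omits the R_i factor for brevity:

```python
# Toy sketch of a Curated Proofs Market payout (illustration only).
def cpm_rewards(stakes, deliveries, T):
    """stakes: {(actor, dataset): stake}; deliveries: {dataset: count}."""
    # Weight each curation stake S_ij by its dataset's delivery count D_j.
    weights = {key: s * deliveries[key[1]] for key, s in stakes.items()}
    total = sum(weights.values()) or 1.0  # avoid division by zero
    # Normalize so that exactly T tokens are dispensed this interval.
    return {key: T * w / total for key, w in weights.items()}

rewards = cpm_rewards(
    stakes={("alice", "data1"): 100.0, ("bob", "data1"): 50.0,
            ("bob", "data2"): 200.0},
    deliveries={"data1": 40, "data2": 5},
    T=1000.0,
)
for (actor, dataset), reward in rewards.items():
    print(f"{actor} on {dataset}: {reward:.1f} tokens")
# alice on data1 earns the most: a high stake on the most-delivered dataset.
```

Note how the two signals are bound together: a big stake on a dataset that's never delivered earns nothing, and deliveries of a dataset nobody staked on earn nothing either. (The real design dispenses rewards probabilistically rather than via this deterministic split, as noted above.)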

[Update Sep 2021: the token design of this section is different from what was actually shipped, based on learnings as we went along. However the goals remain the same, and there are still echoes of this design in Ocean Data Farming.]

4. Conclusion

This article gave case studies on using token engineering tools to analyze Bitcoin and to design Ocean Protocol.

Appendix: Related Articles & Media

This article is Part III of the Token Engineering series.

I gave a talk about much of this content in Berlin in Feb 2018; here are the slides and video. I gave a related talk about complex systems at the Santa Fe Institute, New Mexico, in Jan 2018; here are the slides and video from that talk.

Further Resources

[June 1, 2018] Publication of this series seems to have sparked movement in #tokenengineering. Awesome! :):) A key resource is the wiki tokenengineering.net. It has info about building blocks, tools, community meetups, and more.

Acknowledgements

Thanks to the following people for reviews of this and other articles in the series: Ian Grigg, Alex Lange, Simon de la Rouviere, Dimitri de Jonghe, Luis Cuende, Ryan Selkis, Kyle Samani, and Bill Mydlowec. Thanks to many others for conversations that influenced this too, including Anish Mohammed, Richard Craib, Fred Ehrsam, David Krakauer, Troy McConaghy, Thomas Kolinko, Jesse Walden, Chris Burniske, and Ben Goertzel. And thanks to the entire blockchain community for providing a substrate that makes token design possible:)

Appendix: Related Efforts

Here are some updates since the initial publication.

Edits

  • Mar 28, 2018: renamed “Proofed Curation Market” to “Curated Proofs Market”. Why? It’s easier to understand.
  • June 5, 2018: added the tables and surrounding text which elaborate on the designs tried.

Follow Ocean Protocol via our Newsletter and Twitter; chat with us on Telegram or Discord; and build on Ocean starting at our docs.
