People can set up a line of credit among themselves without the permission of any third party.
The problem with this mechanism, known as trustlines, is that when A and B agree that A will owe B, B doesn’t know how many other debts A has outstanding. Attempts to solve this problem have given rise to centralized credit agencies and credit ratings, sometimes with disastrous consequences for privacy.
Thus there is a need for a third-party system that can reliably record who owns what. The recipient B of a token T should be able to query this system to make sure that A owns the token, post the transaction, and verify that B has become the new owner. This system must have Consistency and Durability.
Over the last decade, innovative solutions have been proposed and implemented to move from centralized systems to ones that distribute the work among many different computers. These new systems are typically referred to as Distributed Ledger Technologies (DLT) and the process by which they move from one consistent state to the next is referred to as Consensus.
Voting vs Disproving
Any process by which computers arrive at the same answer can be called Consensus, but there is a major distinction to be made: consensus can be based either on Voting or on Disproving. Voting-based consensus is about agreeing on an arbitrary outcome (such as a number between 1 and 10, the location of a beehive, or which valid transaction was first). Disproof-based consensus is about refuting claims, as is done in science and math. This distinction will hit home for many people when they consider how truth is determined in the current political climate: if enough people believe something, does it become true? In science and math, it takes only one participant to publish a disproof of a claim to compel everyone who sees the disproof to realize the claim is false. When the book A Hundred Authors Against Einstein came out, Einstein is reported to have said, “If my theory was wrong, one would have been enough.”
That kind of approach to determining truth has very desirable properties. For one thing, it remains resilient even when 33% or more of validators are malicious. This means the network can tolerate far more interference from large-scale Sybil attacks, computer viruses, and the other threats that projects in this space worry about.
Most popular crypto-currencies today (see the next section) need the network to reach a global consensus about every transaction in the world. This can be slow and expensive. In principle, one could distribute the workload among many computers, creating an embarrassingly parallel network capable of handling more transactions the larger it grows.
A mathematical analysis from 2009 shows that the probability of double-spending can be reduced to nearly zero even when consensus groups are fairly small, as long as they are strongly interconnected. This is even true if the group can be predicted in advance from a token id, and even in the face of a coordinated attack on a certain proportion of nodes in the group. Of course, such possibilities themselves can be mitigated by having the network consist of computers under the control of many different parties, that all link together into a giant decentralized “cloud”.
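To get a feel for why small, strongly interconnected groups can suffice, here is a back-of-the-envelope sketch (not the cited 2009 analysis itself; the function name, parameters, and 2/3 threshold are illustrative). It computes the hypergeometric probability that a randomly drawn consensus group contains a malicious supermajority:

```python
from math import ceil, comb

def p_malicious_supermajority(N: int, m: int, k: int, threshold: float = 2/3) -> float:
    """Probability that a random group of k nodes, drawn from N total
    nodes of which m are malicious, contains at least ceil(threshold*k)
    malicious members (hypergeometric tail)."""
    need = ceil(threshold * k)
    total = comb(N, k)
    return sum(comb(m, j) * comb(N - m, k - j) for j in range(need, k + 1)) / total

# With 10,000 nodes, 20% of them malicious, and groups of 30,
# the chance of a malicious supermajority in a group is vanishingly small,
# and it shrinks further as the group size grows.
p30 = p_malicious_supermajority(10_000, 2_000, 30)
p60 = p_malicious_supermajority(10_000, 2_000, 60)
```

Note this assumes the attacker cannot choose which nodes land in the group; predictable group membership is exactly why the text suggests mitigations like drawing nodes from many independent parties.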
Intercoin Consensus Process
Thus, the Intercoin Consensus Process is not based on voting about which transaction went first, as is done by Hashgraph and the XRP Consensus Process. In those systems, too many malicious participants could either undermine the consensus by casting contradictory votes, or prevent the system from making forward progress, as explained here.
Under the Intercoin Consensus Process, when two conflicting transactions are submitted close in time to one another, both are rejected, instead of voting on which one was first and gets to be honored. The Process works like this:
A token T is watched by a set of Watchers, computers in the decentralized cloud. These Watchers only care about the hash of the latest state of T, as it goes from person to person.
Each Validator for T validates the transaction (that the sender owns T, that T was validly issued and has a valid history, and so on). Validators that don’t find any problems sign that the transaction is Valid. A validator that finds anything wrong will gossip a Claim of Violation, and if the recipient B agrees with this claim, they will reject the transaction. (B can only hurt themselves by ignoring a valid Claim of Violation, because they will likely not be able to get more conscientious recipients later to accept T as payment).
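A Validator's checks can be sketched as a walk over the token's transfer history. This is a toy version under stated assumptions: real validation would verify cryptographic signatures, and the record fields here are illustrative. It returns either a Valid verdict or a Claim of Violation naming the first inconsistency found:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Canonical hash of a transfer record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def validate_history(history: list) -> dict:
    """Walk a token's transfer history from issuance onward.
    Each record must chain to the previous one, and each sender
    must be the recipient of the previous transfer (i.e. the owner)."""
    for i in range(1, len(history)):
        prev, cur = history[i - 1], history[i]
        if cur["prev_hash"] != record_hash(prev):
            return {"verdict": "Claim of Violation", "at": i, "reason": "broken hash chain"}
        if cur["sender"] != prev["recipient"]:
            return {"verdict": "Claim of Violation", "at": i, "reason": "sender does not own token"}
    return {"verdict": "Valid"}
```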
Participants (Validators, Watchers, etc.) which submit spurious Claims will subsequently be ignored by those who find the claims to be false.
If a transaction is Valid, the Watchers go to work making sure there were no conflicting transactions that would fork the stream. (This might happen if the token holder was attempting to double-spend, creating two valid but conflicting histories.) Each Watcher is sent the transaction and has to check whether it has already been sent a conflicting transaction. If not, the Watcher reports that it saw no conflicts. Otherwise, the Watcher gossips a Claim of Conflict containing one or more conflicting transactions signed by A. As long as B receives a valid Claim of Conflict from even one Watcher, they will reject the transaction.
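A single Watcher's conflict check can be sketched as follows (a minimal sketch; the field names are illustrative, and a real Watcher would also verify signatures before recording anything):

```python
class Watcher:
    """Tracks the first transaction it has seen spending each token
    state. A second, different transaction off the same state yields
    a Claim of Conflict carrying both transactions as evidence."""

    def __init__(self):
        # (token_id, prev_state_hash) -> first transaction seen
        self.seen = {}

    def check(self, tx: dict) -> dict:
        key = (tx["token_id"], tx["prev_state_hash"])
        first = self.seen.setdefault(key, tx)
        if first is tx or first == tx:
            # Either brand new, or a harmless resubmission of the same tx.
            return {"verdict": "no conflict"}
        return {"verdict": "Claim of Conflict", "conflicting": [first, tx]}
```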
Each honest Watcher will report a Conflict, and will also honestly check Claims of Conflict from other Watchers in the consensus about transaction X on token T. If the Claim of Conflict checks out, the transaction is rejected. Thus, if conflicting transactions X1 and X2 are submitted simultaneously, both are rejected. If one is submitted well in advance of the other, then as long as a (super)majority of honest Validators and Watchers has approved it, it goes through. Something in between could happen, where X1 is submitted and some Watchers have already approved it, but then a valid (but conflicting) X2 is submitted to some of the Watchers who haven’t yet reported in. Since any two honest majorities overlap, however, only one of these transactions can ultimately succeed in getting a (super)majority of honest Watchers to approve it.
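The overlap argument is just the pigeonhole principle: any two groups of more than half of n Watchers must share at least one member, so conflicting transactions X1 and X2 cannot both be approved by disjoint majorities. A brute-force check of this fact for small n:

```python
from itertools import combinations

def majorities_overlap(n: int) -> bool:
    """Exhaustively verify that every pair of majority subsets
    (more than n/2 members) of n Watchers shares a member."""
    quorum = n // 2 + 1
    nodes = range(n)
    return all(set(a) & set(b)
               for a in combinations(nodes, quorum)
               for b in combinations(nodes, quorum))
```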
The Watchers need to be strongly connected, so a malicious subset of Watchers can’t disconnect the Quorum of the honest ones, or block their gossip. Each Watcher waits to receive responses from a (super)majority of Watchers about the transaction. If any Watcher W1 approves a transaction X but also gossips a Claim of Conflict about X, then if the Claim is true, every other honest Watcher ignores the approval. However, since Watchers giving different replies to different participants is bad form, the Watcher W1 may be ignored from then on, potentially changing the threshold for the (super)majority.
Once a (super)majority of honest Watchers has received verdicts from a (super)majority of Watchers, all reporting that they found no Claim of Conflict, they group-sign their approval of the transaction X1. (Even if one or more conflicting transactions X2 are partially making their way through the Watchers, or will start to in the future, they will get rejected.) This approval is sent, in particular, to the intended sender A of the token T.
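Aggregating the verdicts might look like the following sketch, in which any verified Claim of Conflict rejects the transaction outright, and approval requires clean replies from a supermajority of all Watchers (the 2/3 threshold and the verdict strings are illustrative):

```python
def tally(verdicts: list, total_watchers: int, supermajority: float = 2/3) -> str:
    """Aggregate Watcher verdicts for one transaction.
    Returns "rejected" on any Claim of Conflict, "approved" once a
    supermajority of all Watchers reported no conflict, else "pending"."""
    if any(v["verdict"] == "Claim of Conflict" for v in verdicts):
        return "rejected"
    clean = sum(1 for v in verdicts if v["verdict"] == "no conflict")
    return "approved" if clean > supermajority * total_watchers else "pending"
```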
The sender A then decides whether to “endorse” the transaction X1 with their signature and give the approved, endorsed transaction to B. The reason for this extra endorsement is that the Intercoin Consensus Process for token T may have taken a very long time (e.g. due to a netsplit, or too many malicious Validators or Watchers preventing an approval), and A may have changed their mind about proceeding with the transaction. A may elect to send some other token T2 to B in the meantime, watched by a completely different set of Validators and Watchers, and endorse whichever transaction goes through first. (This is similar to paying a merchant with a different credit card when the first credit card is blocked. You don’t want both transactions going through in the worst case.)
The Permissionless Timestamping Network can be used to implement a timeout so if a group of Watchers cannot come to a Consensus for a while, the Token is migrated to a different group of Watchers, so as not to be “frozen” forever.
Notes About the Process
After a (super)majority of honest Watchers report that they haven’t seen a conflicting transaction, even if the very next Watcher finally reports one (because it was slow, for example), the first transaction is still approved. Thus there can be a sudden jump between “not approved” and “approved” around the supermajority mark. This doesn’t violate Buridan’s Principle, because the number of participants is finite, and in fact pretty small (under 1000), so the continuity assumption is not satisfied.
Normally, Merkle DAGs consist of hashes and do not carry information about the contents of a token, transaction, or claim, so they can’t prove or disprove anything about the content. A Claim, however, contains the actual content, plus enough relevant (signed) context for any honest participant to conclude whether something is true or not.
Because this system of Consensus is based on Disproof rather than Voting, the end-users are ultimately the ones deciding which truth to go with. If any Validator, Watcher, or Group signs a false Claim, end-users can subsequently ignore them, and gossip a Claim of Malfunction to others, similarly to how a Claim of Conflict or a Claim of Violation is gossiped.
If groups of end-users disagree on which subset of Validators or Watchers is really right, they may end up forking the network. That can happen, for example, whenever the rules change and some clients prefer the old rules and some clients prefer the new ones.
In the particular use-case of a crypto-currency, the incentives really come from the recipients of a token (B in this case) wanting to make sure everything went through correctly, and the system recorded B as the owner of the token. That is, until they give it to another party.
The question of who owns a token is a special case of access to a file. Spending a token can be considered like appending (or even replacing) content in a file representing that token. Not every node needs to store, decrypt or validate the entire contents of the file. Many nodes just want to know the latest hash of a file.
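For such nodes, the entire state of a token’s “file” reduces to a hash-chain head: advancing it needs only the previous head and the newly appended content, never the full file. A minimal sketch (the function name is illustrative):

```python
import hashlib

def advance(latest_hash: str, new_content: bytes) -> str:
    """Compute the new head of a token's hash chain after a spend
    appends `new_content` to the token's "file". A node storing only
    the head can still verify each step it is shown."""
    return hashlib.sha256(latest_hash.encode() + new_content).hexdigest()

h0 = advance("", b"issued to A")   # genesis state
h1 = advance(h0, b"A pays B")      # after one spend
```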
The Qbix Platform has the concept of Streams and Messages, which can be used to implement collaborative activities of any kind. One or more people can take “actions” in a stream, and “messages” are the result of a successful “action”. When implemented in a distributed system, Watchers are used to look for forks in a stream, and report this to end-users. Thus, Consensus about forward progress is achieved by end-users who all agree to honor valid, conflict-free transactions. Disagreement among end-users manifests in a fork of the entire back-end Consensus underlying that particular stream.
Streams can have different types, and “Intercoin/token” would be just one of those types. Others can include chatrooms, games, collaborative documents, social network / scuttlebutt feeds, and much more.
Rules are run by Validators and Participants who can decrypt the stream. Watchers and others storing the stream encrypted at rest have no idea what is in it. This means that Intercoin Tokens (and other types of group activity) could be sent completely anonymously, with only the sender and recipient knowing each other’s identity.
Other Well-Known Approaches
Some early systems, like Bitcoin and Ethereum, use a form of leader-based consensus built on Proof of Work. Every so often, a leader is found that is incentivized and empowered to append transactions to an ever-growing ledger, and all new transactions on the entire system need to be sent to this leader. If the leader were known in advance, it could be blocked by a DDoS attack; with Proof of Work, however, the leader is whichever computer first finds a solution to a Proof of Work problem (like partially reversing a hash), so by the time a leader reveals the next block, it’s too late to stop the gossip. The cost is that every potential transaction has to be sent to every potential leader (the “miners”) in case one of them finds the next block. Proof of Work systems have extreme scalability problems, but they do have one nice property: as the network grows, it becomes extremely hard to rewrite history very far back in time, because doing so would require doing at least as much work as the network has collectively already done.
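This “leader lottery” can be sketched in a few lines: a miner searches for a nonce whose hash falls below a difficulty target. (A simplified illustration; real Bitcoin double-hashes a structured block header, and the difficulty here is far lower than any real network’s.)

```python
import hashlib

def mine(block: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that sha256(block || nonce) has at least
    `difficulty_bits` leading zero bits -- the "partially reversing
    a hash" lottery that selects the next leader."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1
```

Because the winning nonce can only be found by trying, no one knows the next leader in advance, which is what makes the leader hard to DDoS.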
Proof of Stake and Delegated Proof of Stake replace the work element with a “stake” element, in an effort to reduce the amount of work needed to find a leader. All these approaches, however, still require a leader, and thus have a bottleneck.
The XRP Consensus Process avoids this problem, but still needs every node to validate every transaction. In 2018, Ripple made speed improvements to its Consensus protocol, making it more asynchronous. It could even be [sharded in the future](https://twitter.com/GregMozart/status/937836892661444610). However, it is still vulnerable to more than 33% (perhaps in fact as little as 20%) malicious nodes. To protect against Sybil attacks and other malicious nodes, Ripple Inc. publishes (and signs) a Unique Node List, making it a centralized gatekeeper of who can participate in the network. This approach is far more scalable, at the cost of permissionless participation of nodes in the network (i.e. not just anyone can start running their own validator).
Since Intercoin (and Qbix) aim to create a resilient and reliable system for group activities implementing various rules, including those for cryptocurrency payments, we need small and fast consensus groups. Using Kademlia, we can also hide the IP addresses of the validators, preventing any given actor from finding and blocking all the nodes in the network. We are building a network where people can transact quickly and reliably, in full confidence that no one except those they authorized can tamper with their data, take it down, or even access it. People will be able to participate in all kinds of online group activities without having to trust a specific “landlord” to host those group activities.