Consensus algorithms at scale: Part 8 - Closing thoughts
Sugu Sougoumarane
7/7/2022


If you're still catching up, you can find links to each article in the series at the bottom of this article.

We started off this series by challenging the premise that algorithms like Paxos and Raft are foundational to consensus systems. Such a premise would imply that any other algorithm would just be a variation of the original ones. These algorithms are foundational from a historical perspective, but they are not conceptually foundational.

We also showed that these algorithms are too rigid; I feel they would struggle to adapt to the growing complexity of cloud deployments. FlexPaxos was the first advancement to highlight that the majority quorum is just a special case of intersecting quorums, and that intersecting quorums allow you to configure systems with more flexibility.

Reconceptualizing consensus systems#

In this series, we have attempted to reconceptualize the other parts of consensus systems in the following manner:

Pluggable durability

A consensus system can be designed in such a way that it assumes nothing about the durability rules. These can be specified with a plugin, and the system should be able to fulfill these requirements without breaking integrity. Of course, the requirements have to be reasonable. We covered some examples in part 3.

A system that supports pluggable durability allows you to deploy additional nodes without significantly affecting its performance characteristics. For example, if you specified the durability requirement as cross-zone, deploying additional nodes to a zone keeps the system behaving mostly the same way.
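
To make this concrete, here is a minimal sketch of what a pluggable durability policy could look like in Go. The `DurabilityPolicy` interface, the `crossZoneDurability` implementation, and the `Node` fields are all hypothetical names for illustration; the point is that the consensus system asks the plugin whether a set of acknowledgments satisfies durability, rather than hard-coding a majority rule.

```go
package consensus

// Node describes a participant in the consensus system.
// The fields are hypothetical and only serve the example.
type Node struct {
	ID   string
	Zone string
}

// DurabilityPolicy is a hypothetical plugin interface: the consensus
// system hands it the set of nodes that have acknowledged a request,
// and the plugin decides whether the durability rules are satisfied.
type DurabilityPolicy interface {
	IsDurable(acked []Node) bool
}

// crossZoneDurability is one possible policy: a request is durable once
// it has been acknowledged in at least two distinct zones. Adding more
// nodes to a zone does not change when a request becomes durable.
type crossZoneDurability struct{}

func (crossZoneDurability) IsDurable(acked []Node) bool {
	zones := make(map[string]bool)
	for _, n := range acked {
		zones[n.Zone] = true
	}
	return len(zones) >= 2
}
```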

Revocation and establishment of leadership

We have reconceptualized a leadership change as a two-step process: revocation and establishment. Intersecting quorums are only one way to achieve this goal. We have shown situations where you could achieve a leadership change by directly asking the previous leader to step down. Following this, all we have to do is perform the necessary steps to establish the new leadership. This approach does not require knowledge of intersecting quorums.

We have also shown that multiple methods can be used to change leadership, and that such methods are interoperable. For example, you could use direct leader demotion for planned changes, but fall back to intersecting quorums if there are failures in the system.
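
As an illustration, here is a rough sketch of the two-step change, assuming a hypothetical `Replica` interface (none of these names come from a real system). Revocation completes before establishment begins; if the old leader cannot be reached, a real system would fall back to revocation through intersecting quorums instead.

```go
package consensus

// Replica is a hypothetical view of a node during a leadership change.
type Replica interface {
	Demote() error            // stop accepting writes and flush accepted requests
	LastPosition() uint64     // position of the last request this node accepted
	CatchUp(pos uint64) error // replicate up to the given position
	Promote() error           // start accepting writes as the new leader
}

// changeLeader sketches a planned leadership change as two distinct steps.
func changeLeader(oldLeader, newLeader Replica) error {
	// Step 1: revocation. The old leader steps down and stops accepting writes.
	if err := oldLeader.Demote(); err != nil {
		return err
	}
	// Step 2: establishment. The new leader catches up and takes over.
	if err := newLeader.CatchUp(oldLeader.LastPosition()); err != nil {
		return err
	}
	return newLeader.Promote()
}
```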

Handling races

There are two contrasting approaches to handling races: lock-based and lock-free. Their implementations and trade-offs are very different. In general, a lock-free approach (like the one Paxos uses) is elegant because it has no time component. However, lock-based approaches offer enough additional flexibility that they win out in real-life scenarios. With a lock-based approach, you can:

  • Perform graceful leadership changes by asking the current leader to step down.
  • Add or remove nodes more easily (a topic I did not cover in this series).
  • Perform consistent reads by redirecting them to the current leader.
  • Implement anti-flapping rules.

Because of all these advantages, most large-scale systems implement a lock-based approach.
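
As a rough sketch of the lock-based idea: leadership is held as a lease with an expiry, and the same structure gives you a natural place to hang an anti-flapping rule. The `lease` type, its fields, and the rules below are assumptions for illustration, not any particular system's implementation.

```go
package consensus

import "time"

// lease is a hypothetical lock-based leadership record: the holder keeps
// leadership until the expiry and must renew the lease to remain leader.
type lease struct {
	holder      string
	expiry      time.Time
	lastChange  time.Time
	minHoldTime time.Duration // anti-flapping: minimum time between leader changes
}

// canAcquire reports whether a new leader may take over now. A voluntary
// step-down (a graceful change) is always allowed; otherwise the current
// lease must have expired, and the anti-flapping rule refuses to change
// leaders again within minHoldTime of the previous change.
func (l *lease) canAcquire(voluntary bool, now time.Time) bool {
	if voluntary {
		return true
	}
	if now.Sub(l.lastChange) < l.minHoldTime {
		return false // anti-flapping rule
	}
	return now.After(l.expiry)
}

// acquire records the new holder and starts a fresh lease.
func (l *lease) acquire(candidate string, ttl time.Duration, now time.Time) {
	l.holder = candidate
	l.expiry = now.Add(ttl)
	l.lastChange = now
}
```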

Completing and propagating requests

We studied the corner cases of propagating requests and suggested versioning of decisions as a way to avoid confusion when there are multiple partial failures. The proposal numbers in Paxos and the term numbers in Raft are examples of such versioning.

We also showed that many of these failure modes can be completely avoided using anti-flapping rules.
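
As a minimal sketch of versioned decisions, assume a monotonically increasing version number playing the role of Paxos proposal numbers or Raft terms: a node applies a propagated decision only if it is at least as new as anything it has already seen, so a stale leader cannot cause confusion. The names below are illustrative only.

```go
package consensus

// decision is a hypothetical versioned decision. Every leadership change
// bumps the version, so a decision propagated by a stale leader can be
// recognized and ignored.
type decision struct {
	version uint64
	payload []byte
}

// accept applies an incoming decision only if it is not older than the
// newest version this node has seen, and returns the new high-water mark.
func accept(current uint64, d decision, apply func([]byte)) uint64 {
	if d.version < current {
		// Stale decision from an old leader: ignore it.
		return current
	}
	apply(d.payload)
	return d.version
}
```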

The Vitess implementation#

In Vitess, we make full use of the above options and flexibilities. For example, durability rules are a plugin for vtorc. The current plugin API is already more powerful than other existing implementations: you can specify cross-zone or cross-region durability without having to carefully place every node in the right location.

Additionally, Vitess has a graceful failover mechanism that gets used during software deployment. This automation comes built-in as part of the Vitess Operator.

Vitess also allows you to redirect reads to the current leader when you need consistent reads.

There are still a few corner cases that may require human intervention. We intend to enhance vtorc to also remedy those situations. This will put Vitess on full auto-pilot.

In closing#

There are still a few topics that could be worth covering:

  • Failure detection
  • Consistent reads
  • Adding and removing nodes

Strictly speaking, these are outside the scope of consensus algorithms, but they need to be addressed for real-life deployments. I may cover them later in independent posts.

It is possible that consensus could be generalized using a different set of rules. But I personally find the approach presented in this series to be the easiest to reason about.

Feel free to reach out to me on Twitter (@ssougou) if you have comments or questions.

Previous series articles#