When designing a distributed system, you might assume that the logical approach is to design it for all three of the most important factors:
Consistency
Availability
Partition tolerance over the network
For a long time, so did architects and developers. Designing for all three seems perfectly sensible, but the CAP theorem showed that it isn’t actually achievable.
In this article, we’ll be discussing how the CAP theorem works, its implications, and how organizations can make use of them.
The CAP theorem states that it is impossible for a distributed system to simultaneously guarantee Consistency, Availability, and Partition tolerance, hence the name CAP.
The theorem was introduced by computer scientist Eric Brewer, which is why it is sometimes called ‘Brewer’s theorem’. He emphasized that a system can be optimized for at most two of these factors and must compromise on the third.
Understanding the Components of CAP
Let’s now look at each of the three components.
In a distributed system, consistency refers to the data in all the nodes being the same at the same time, regardless of which node the client is connected to.
In other words, following the completion of a write operation, all the future read operations will show that change, so that every client will have the same view of the data. This means all the nodes are working with the most recent data, and this is how they maintain consistency in the system.
However, enforcing consistency often comes at the cost of increased latency and reduced availability, especially in the face of network partitions or failures.
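To make this concrete, here is a minimal, hypothetical sketch of synchronous replication (the Replica and ConsistentStore classes are illustrative, not from any real library): a write succeeds only once every replica has applied it, so a subsequent read from any replica returns the new value.

```python
class Replica:
    """One node's local copy of the data."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.reachable = True  # False simulates a network partition

class ConsistentStore:
    """Synchronous replication: a write commits only if ALL replicas apply it."""
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, value):
        if not all(r.reachable for r in self.replicas):
            # Sacrifice availability to preserve consistency (CP behavior).
            raise RuntimeError("write rejected: a replica is unreachable")
        for r in self.replicas:
            r.data[key] = value
        return True

    def read(self, key, replica):
        # Any replica returns the latest committed value.
        return replica.data.get(key)

nodes = [Replica("a"), Replica("b"), Replica("c")]
store = ConsistentStore(nodes)
store.write("balance", 100)
assert all(store.read("balance", n) == 100 for n in nodes)
```

The assert at the end holds precisely because the write path refuses to acknowledge until all replicas agree, which is also why such a design loses availability whenever any replica is unreachable.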
Availability emphasizes that any client in the network making a data request receives a response, even if some nodes are down.
This means the operational nodes in the distributed system will return a response for any request, though this response might not always reflect the most recent data version.
A system that prioritizes Availability operates and responds to requests, even if there are node failures or network partitions. To make the system operate even in the event of a node failure, you can use strategies like:
Data replication
Not every server in the network has to be available, but enough individual servers should be available in a cluster to handle the request.
Prioritizing high availability can mean relaxing strict consistency requirements, leading to eventual consistency where different nodes may temporarily hold different versions of the data. Nevertheless, the system continues to process requests and maintain uptime, although it might sometimes return an older data version.
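To illustrate, here is a toy sketch of that behavior (again, illustrative classes rather than a real library): the store acknowledges a write as soon as one reachable replica accepts it, and a background anti-entropy pass brings the other replicas up to date later.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.reachable = True  # False simulates a partitioned node

class AvailableStore:
    """AP-leaning sketch: acknowledge a write once ONE reachable replica
    accepts it, then converge the rest in the background."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.pending = []  # updates awaiting propagation

    def write(self, key, value):
        for r in self.replicas:
            if r.reachable:
                r.data[key] = value                # ack immediately: stays available
                self.pending.append((key, value))
                return True
        raise RuntimeError("no replica reachable")

    def anti_entropy(self):
        """Background sync so replicas converge once the partition heals."""
        for key, value in self.pending:
            for r in self.replicas:
                r.data[key] = value
        self.pending.clear()

a, b = Replica("a"), Replica("b")
store = AvailableStore([a, b])
b.reachable = False                  # partition node b
store.write("balance", 100)          # still succeeds (availability preserved)
print(b.data.get("balance"))         # None: b is temporarily stale
b.reachable = True                   # partition heals
store.anti_entropy()                 # replicas converge: eventual consistency
print(b.data.get("balance"))         # 100
```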
(Related reading: what is availability management?)
Network partitions are temporary interruptions in the network that separate components or nodes in the system, which prevents them from exchanging information.
Network partitions are an inevitable reality in distributed systems, so partition tolerance, the system’s ability to keep operating while some nodes cannot communicate, is considered a prerequisite for building robust, reliable distributed systems.
As we discussed, the real gist of the CAP Theorem is that it is challenging for a distributed system to simultaneously achieve all three characteristics — consistency, availability, and partition tolerance — at their maximum level.
Therefore, we have to prioritize two characteristics and compromise on the third. This leads to three primary trade-offs.
Be wary of oversimplifying these three compromises, however. In practice, the trade-offs are more nuanced. System designers frequently make dynamic adjustments based on the specific needs and circumstances of their systems.
In a system where Consistency and Partition tolerance are prioritized, the system makes sure that the nodes in the network have a consistent view of the data, even during network disruptions (aka network partitions).
This means that when a value is written to one node, the value will be updated in all the other nodes.
However, CP systems may compromise on availability, meaning that some data might not be accessible in the event of a network partition.
CA systems prioritize a consistent view of the data and ensure high availability under normal operating conditions. This means that every request made to a (non-faulty) node in the system must return either a successful or a failed response.
However, CA systems may struggle with network partitions, potentially becoming less available or exhibiting lower performance during such events.
In reality, pure CA systems are rare. That’s because pretty much every distributed system must handle network partitions to some degree.
AP systems prioritize availability and partition tolerance, rather than strong consistency.
This means that the system always aims to process queries and provide the most recent available version of information, even if it cannot ensure it is completely up-to-date due to network partitioning. In such systems, temporary discrepancies in data versions among different nodes are acceptable, with the primary goal being to ensure high availability.
Understanding how the CAP theorem applies in real-world scenarios is crucial for making informed decisions when designing distributed systems.
Google Bigtable is a distributed storage system used for managing structured data. It is designed to prioritize consistency and partition tolerance (CP), which means that in the event of a network partition or failure, Bigtable may compromise availability to maintain data consistency.
However, Google has engineered its network to minimize the occurrence of network partitions, which allows Bigtable to achieve high availability in practice, despite its CP categorization.
DynamoDB is a database service provided by AWS designed to prioritize high availability and partition tolerance, in line with the CAP theorem. Initially, it offered only ‘eventual consistency,’ but now it also provides a ‘strong consistency’ option.
This means that when you ask DynamoDB for the latest data through a strongly consistent read request, it returns the most up-to-date data, reflecting all writes acknowledged before the read. It’s designed to handle network partitions effectively, maintaining high availability and providing consistent reads as needed.
Users have the flexibility to choose between eventual and strong consistency, allowing DynamoDB to cater to different requirements while adhering to the CAP theorem principles.
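As a concrete illustration, here is roughly what toggling between the two modes looks like with boto3, the AWS SDK for Python; the table and key names here are hypothetical:

```python
import boto3

# Assumes AWS credentials are configured and a hypothetical table "Orders"
# exists with a partition key "order_id".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

# Eventually consistent read (the default): cheaper and highly available,
# but may return a slightly stale item.
stale_ok = table.get_item(Key={"order_id": "1234"})

# Strongly consistent read: reflects all writes acknowledged before the read,
# at the cost of higher latency and reduced availability during partitions.
fresh = table.get_item(Key={"order_id": "1234"}, ConsistentRead=True)
```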
MongoDB is a well-known database management system that is frequently used to run real-time applications from different locations. It follows a CP trade-off, prioritizing Consistency and Partition tolerance over Availability.
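In practice, MongoDB lets you tune how strongly that CP behavior is enforced through write and read concerns. A minimal PyMongo sketch, assuming a local replica set and hypothetical database and collection names:

```python
from pymongo import MongoClient, WriteConcern
from pymongo.read_concern import ReadConcern

# Placeholder connection string; assumes a replica set named "rs0" is running.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client["bank"]

# w="majority": the write is acknowledged only after a majority of
# replica-set members have it, favoring consistency over latency.
accounts = db.get_collection(
    "accounts",
    write_concern=WriteConcern(w="majority"),
    read_concern=ReadConcern("majority"),
)
accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}})
```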
Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system. It is designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.
Cassandra lets clients specify, on a per-write basis, how many replicas must acknowledge a write before it is considered committed. Writes acknowledged by a single replica lean toward availability (AP), while writes that must reach a quorum (or all replicas) lean toward consistency (CP).
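With the DataStax Python driver, that per-write choice is expressed as a consistency level attached to each statement. A minimal sketch, assuming a local cluster and a hypothetical ‘shop’ keyspace:

```python
import uuid

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])   # placeholder contact point
session = cluster.connect("shop")  # hypothetical keyspace

# ConsistencyLevel.ONE: ack after a single replica -- leans AP.
fast_write = SimpleStatement(
    "INSERT INTO carts (user_id, item) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE,
)
session.execute(fast_write, (uuid.uuid4(), "book"))

# ConsistencyLevel.QUORUM: a majority of replicas must ack -- leans CP.
safe_write = SimpleStatement(
    "INSERT INTO orders (user_id, total) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(safe_write, (uuid.uuid4(), 42))
```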
When designing distributed systems, it's essential to carefully consider the specific requirements and priorities of the application. The following strategies can guide decision-making.
Start by thoroughly understanding the application's requirements and the expected trade-offs. It’s important to understand that no system can guarantee all three simultaneously.
Whichever pair you choose, the characteristic you give up will be reflected in how your system behaves.
Since you can only fully achieve two of the three properties, prioritize the ones that are crucial for your system. For example, if you were designing a banking system, consistency is crucial to ensure that all financial transactions are accurate, even if this means occasionally sacrificing availability.
Network partitions may be infrequent in a well-engineered network, but they are inevitable. Assessing their potential impact is vital for system design.
In systems like distributed databases, it's possible to balance between consistency and availability. This allows for choosing stronger consistency for critical operations while opting for eventual consistency for less critical operations.
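A useful rule of thumb behind such tunable systems: with N replicas, a read quorum R and a write quorum W are guaranteed to overlap, so reads always observe the latest committed write, whenever R + W > N. A tiny sketch of that arithmetic:

```python
def quorums_are_strong(n_replicas: int, read_quorum: int, write_quorum: int) -> bool:
    """True if every read quorum overlaps every write quorum,
    which guarantees reads observe the latest committed write."""
    return read_quorum + write_quorum > n_replicas

# With 5 replicas: quorum reads and writes (3 + 3 > 5) give strong consistency...
assert quorums_are_strong(5, 3, 3)
# ...while single-replica reads and writes (1 + 1 <= 5) are only eventually consistent.
assert not quorums_are_strong(5, 1, 1)
```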
In the event of a network partition, a graceful degradation design ensures that the system continues to operate, even if at a reduced level (see the sketch after this list). This would both:
Minimize the impact of network failures.
Keep your system reliable.
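As an illustration, a hypothetical read path (the store interface here is assumed, not a real API) might fall back to a local cache when the primary store is unreachable, serving possibly stale data instead of failing outright:

```python
local_cache = {}  # last known-good values, possibly stale

def degraded_read(key, primary_store):
    """Prefer the primary store; fall back to cached data during a partition."""
    try:
        value = primary_store.get(key)        # may raise on a network partition
        local_cache[key] = value              # refresh the cache on success
        return value, "fresh"
    except ConnectionError:
        if key in local_cache:
            return local_cache[key], "stale"  # degraded but still responsive
        raise                                 # nothing cached: surface the error
```

In a real system, the fallback might instead be a read replica, a CDN, or a feature flag that temporarily disables non-essential functionality.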
While the system's needs may be clear currently, they could change over time. Therefore, it's important to be prepared to re-evaluate and modify the system's design as necessary, ensuring its long-term adaptability.
The CAP theorem is a foundational principle for understanding the inherent limitations of distributed systems.
By understanding how Consistency, Availability, and Partition Tolerance trade off against one another, developers and architects can design a system that suits the needs of their organization by striking the right balance among the prioritized characteristics.
For solutions around making your distributed systems more resilient and ready for anything, talk with Splunk. Splunk offers a unified data platform that powers monitoring, observability, and cybersecurity across your entire distributed network. Explore Splunk solutions today.