Reference: http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/architecture/architectureClientRequestsAbout_c.html#concept_ds_xf3_5nl_fk
Read and write requests can be sent to any node in the cluster, because in Cassandra all nodes are peers.
When a client connects to a node and issues a read or write request, that node acts as the coordinator for that particular client operation. The coordinator's job is to act as a proxy between the client and the nodes on which the data is actually stored. Based on how the cluster is configured (the partitioner and the replica placement strategy), the coordinator determines which nodes in the ring should receive the request.
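As a minimal sketch of this, using a 3.x-style DataStax Python driver configuration (the contact points and keyspace name below are hypothetical), the client can hand the driver any subset of nodes, and whichever node receives a given request becomes the coordinator for that request:

```python
from cassandra.cluster import Cluster

# Any of these nodes can act as coordinator for a request; the
# addresses and keyspace name are placeholders for illustration.
cluster = Cluster(['10.0.0.1', '10.0.0.2', '10.0.0.3'])
session = cluster.connect('my_keyspace')
```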
1. Write Requests
The coordinator sends a write request, containing the row to be written, to all replicas of that row. As long as the replica nodes are up and available, they will get the write, regardless of the consistency level specified by the client.
The write consistency level determines how many replica nodes must respond with a success acknowledgment in order for the write to be considered successful. Success means that the data was written to the commit log and the memtable as described in About writes.
For example, in a 10-node cluster in a single data center with a replication factor of 3, an incoming write will go to all 3 nodes that own the requested row. If the write consistency level specified by the client is ONE, the first node to complete the write responds back to the coordinator, which then proxies the success message back to the client. A consistency level of ONE means that it is possible that 2 of the 3 replicas could miss the write if they happened to be down at the time the request was made. If a replica misses a write, Cassandra will make the row consistent later using one of its built-in repair mechanisms: hinted handoff, read repair, or anti-entropy node repair.
The coordinator forwards the write to all replicas of that row and responds back to the client once it receives write acknowledgments from the number of nodes specified by the consistency level. When a node acknowledges a write, it means it has written the mutation to its commit log and placed it into a memtable.
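For illustration, a write at consistency level ONE might look like the sketch below (the `users` table and its columns are hypothetical, and `session` is the connection object from the earlier sketch). The coordinator still forwards the write to all replicas; it just reports success to the client as soon as one replica acknowledges:

```python
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

# The coordinator returns success once a single replica has written
# the row to its commit log and memtable.
insert = SimpleStatement(
    "INSERT INTO users (user_id, name) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE)
session.execute(insert, (42, 'alice'))
```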
2. Write Requests in Multiple Data Centers
In multiple data center deployments, Cassandra optimizes write performance by choosing one coordinator node in each remote data center to handle the requests to replicas within that data center. The coordinator node contacted by the client application only needs to forward the write request to one node in each remote data center.
If using a consistency level of ONE or LOCAL_QUORUM, only the nodes in the same data center as the coordinator node must respond to the client request in order for the request to succeed. This way, geographical latency does not impact client request response times.
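A hedged sketch of a multi-data-center client, again with a 3.x-style Python driver configuration; the data center name 'DC1', the contact point, and the table are assumptions. Pinning the driver to the local data center and writing at LOCAL_QUORUM keeps the client's response time local, while remote data centers are updated through their forwarding coordinators:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy
from cassandra.query import SimpleStatement

# Route client requests only to nodes in the local data center.
cluster = Cluster(
    ['10.0.0.1'],
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='DC1'))
session = cluster.connect('my_keyspace')

# LOCAL_QUORUM: only replicas in DC1 must acknowledge before success.
write = SimpleStatement(
    "UPDATE users SET name = %s WHERE user_id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM)
session.execute(write, ('bob', 42))
```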
3. Read Requests
The coordinator sends two kinds of read requests to the replicas:
1. A direct read request
2. A background read repair request
The number of replicas contacted by a direct read request is determined by the consistency level specified by the client. Background read repair requests are sent to any additional replicas that did not receive a direct request. Read repair requests ensure that the requested row is made consistent on all replicas.
Thus, the coordinator first contacts the replicas specified by the consistency level. The coordinator sends these requests to the replicas that are currently responding the fastest. The nodes contacted respond with the requested data; if multiple nodes are contacted, the rows from each replica are compared in memory to see if they are consistent. If they are not, then the replica that has the most recent data (based on the timestamp) is used by the coordinator to forward the result back to the client.
To ensure that all replicas have the most recent version of frequently-read data, the coordinator also contacts and compares the data from all the remaining replicas that own the row in the background. If the replicas are inconsistent, the coordinator issues writes to the out-of-date replicas to update the row to the most recent values. This process is known as read repair. Read repair can be configured per table (using read_repair_chance), and is enabled by default.
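Because read repair is a per-table setting, it can be tuned through CQL. A sketch, assuming the same hypothetical `users` table and `session`, where 0.1 asks for a background consistency check of all replicas on roughly 10% of reads:

```python
# read_repair_chance is a table property in this Cassandra generation;
# 0.1 triggers background read repair on about 10% of reads.
session.execute("ALTER TABLE users WITH read_repair_chance = 0.1")
```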
For example, in a cluster with a replication factor of 3, and a read consistency level of QUORUM, 2 of the 3 replicas for the given row are contacted to fulfill the read request. Supposing the contacted replicas had different versions of the row, the replica with the most recent version would return the requested data. In the background, the third replica is checked for consistency with the first two, and if needed, the most recent replica issues a write to the out-of-date replicas.
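The QUORUM read in that example could be issued as in the sketch below (again against the hypothetical `users` table). With a replication factor of 3, the coordinator waits for 2 replicas, returns the version with the newest timestamp, and compares the third replica in the background:

```python
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

# With RF=3, QUORUM means 2 replicas must answer the direct read request.
query = SimpleStatement(
    "SELECT user_id, name FROM users WHERE user_id = %s",
    consistency_level=ConsistencyLevel.QUORUM)
for row in session.execute(query, (42,)):
    print(row.user_id, row.name)
```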
Original post: http://www.cnblogs.com/dyf6372/p/3533584.html