Cassandra 1.2 on Amazon EC2: multi-region replication with RandomPartitioner

This post walks through the configuration and deployment of a Cassandra 1.2 cluster spanning two Amazon EC2 regions – one EC2 instance in Oregon (us-west-2) and the other in Virginia (us-east-1). The “cassandra-stress” utility (bundled with Cassandra) will be used to insert 1M records of 2KB each in one region and subsequently read them back from the other region.

Configuration of a 2-node cluster – one node in each region

One can extend the cluster to as many nodes as required based on the steps outlined herein for creating a 2-node cluster. Please note that these steps are an enabler for creating a multi-region cluster of ‘X’ nodes where ‘X’ is, of course, greater than 2. :-) You would not want to have a 2-node cluster – much less 2 nodes spread across 2 regions.

  1. Download and unzip/untar the Cassandra 1.2 binary distribution.
  2. cd conf and open up cassandra.yaml for editing:

    cluster_name: 'GK Cluster' [Update the cluster name]
    num_tokens: [Keep this commented]
    initial_token: 0 [set this to 0 for the first node]
    partitioner: RandomPartitioner [Replace the default with this]
    data_file_directories: /fs/fs1/cassandra/data [See below for details]
    commitlog_directory: /fs/fs2/cassandra/commitlog [See below for details]
    saved_caches_directory: /fs/fs2/cassandra/saved_caches [See below for details]
    seeds: "X.X.X.X,Y.Y.Y.Y" [comma-separated public IPs of the EC2 instances – one per region]
    listen_address: pri.pri.pri.pri [Private IP of this instance]
    broadcast_address: pub.pub.pub.pub [Public IP of this instance]
    rpc_address: 0.0.0.0 [Replace with this]
    endpoint_snitch: Ec2MultiRegionSnitch [Replace with this snitch]

    The data_file_directories and commitlog_directory locations should be on two different disks. If you are using the new hi1.4xlarge instance type to host the Cassandra nodes, it comes with two 1 TB volumes of local SSD storage. These two volumes need to be formatted (to ext4) and mounted; one can then hold the commit log and the other the data files. The initial_token is to be calculated with the “tools/bin/token-generator” tool. In our case we have one node in each region; please note that if there were multiple nodes in a region, that region should be partitioned as if it were its own distinct ring. The disk preparation and token generation are sketched below.
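    As a rough sketch – the device names /dev/xvdb and /dev/xvdc are assumptions, so check what your instance actually exposes (e.g. with lsblk) before formatting anything:

    sudo mkfs.ext4 /dev/xvdb
    sudo mkfs.ext4 /dev/xvdc
    sudo mkdir -p /fs/fs1 /fs/fs2
    sudo mount /dev/xvdb /fs/fs1    # data_file_directories
    sudo mount /dev/xvdc /fs/fs2    # commitlog_directory and saved_caches_directory
    tools/bin/token-generator 1 1   # one node-count argument per datacenter; run without arguments if in doubt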
  3. Repeat the preceding configuration step on the other node in the other region.
  4. Start up both nodes.
  5. Check the status of the ring with nodetool:
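    A quick sanity check, assuming the commands are run from the unpacked Cassandra directory on either node:

    bin/nodetool status    # both nodes should show as Up/Normal, each under its own datacenter
    bin/nodetool ring      # shows the token assigned to each node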

This concludes the setup of the cluster spread across two Amazon regions.

Replication across regions

To demonstrate the replication of data from one region to the other, we need to define a KeySpace with an RF (Replication Factor) of 2, using NetworkTopologyStrategy to place one replica in each region. Thus, there are 2 replicas of the data in each of its column families – one in each region.

We can utilize the “cassandra-stress” utility, which can create KeySpaces, set replication options, generate a test payload of a given size, and so on. The following two steps delineate its usage and demonstrate replication across regions.

Step 1:

On one node, we run “cassandra-stress” to write to the cluster.
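A sketch of the invocation, using the option names of the stress tool shipped with Cassandra 1.2 (X.X.X.X stands for the public IP of the node in this region; the sizing flag is an approximation of the 2 KB payload, so confirm the options against the tool's help output):

    # -o insert : write operation          -n : number of rows (one million)
    # -S : column value size in bytes      -e : consistency level
    # -R / -O : replication strategy and per-datacenter replica counts for the
    #           KeySpace that the tool creates on first use
    tools/bin/cassandra-stress -d X.X.X.X -o insert -n 1000000 -S 2048 -e ONE \
        -R NetworkTopologyStrategy -O us-east:1,us-west-2:1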

Here we write a million rows of size 2048 bytes with a consistency level of “ONE”. We also specify that the KeySpace to be created (cassandra-stress creates the KeySpace it uses if it does not already exist) should replicate across regions – “us-east:1” and “us-west-2:1”. The one million rows are written in 59 seconds.

To check the number of keys inserted, run the following:
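By default the stress tool writes to a KeySpace named “Keyspace1” with a column family “Standard1” (an assumption worth verifying on your build); nodetool cfstats reports an estimated key count per column family:

    # look for the "Number of Keys (estimate)" line under the Standard1 column family
    bin/nodetool cfstats | grep -A 30 'Keyspace: Keyspace1'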

The number of keys is close to one million – please note that this is an estimate.

Step 2:

On the other node, we read from the cluster.
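Again a sketch with the same stress tool, this time running the read operation against this node (Y.Y.Y.Y stands for its public IP):

    # read back the one million rows at consistency level ONE
    tools/bin/cassandra-stress -d Y.Y.Y.Y -o read -n 1000000 -e ONE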

We see that one million keys are read in 50 seconds.

To check the number of keys on this node:
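The same cfstats check as before, this time run locally:

    bin/nodetool cfstats | grep 'Number of Keys (estimate)'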

Status of the ring after the write and read operations:
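This is the same nodetool check as in step 5 of the setup; the Load column for each node should now also reflect the inserted data:

    bin/nodetool status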