Replacing Cassandra Cluster EC2 Instances with AWS Keyspaces using the legacy Java driver
We are attempting to replace our existing Cassandra EC2 cluster with AWS Keyspaces while keeping our old code base. The idea is simply to get out of the DevOps business and have our Cassandra managed by AWS (scaling, upgrades, etc.). Looking at the guide they provide:
https://docs.aws.amazon.com/keyspaces/latest/devguide/using_java_driver.html
They use a different (newer) driver than we currently use:
Our current driver:
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.3.0</version>
</dependency>
Their example:
<dependency>
    <groupId>com.datastax.oss</groupId>
    <artifactId>java-driver-core</artifactId>
    <version>4.4.0</version>
</dependency>
The code examples look radically different from our existing code.
Question - has anyone successfully migrated to AWS Keyspaces using the older driver and old code? Or does this require an upgrade? My hesitation is that we have a lot of code, and given the cost of refactoring it might be easier to abandon Cassandra and start over with something else (DynamoDB, MongoDB, etc.).
Solution 1:
Amazon Keyspaces works with the 3.x drivers pretty simply. While there are benefits to the 4.x driver, like externalized configuration, there is no need to upgrade to move to Amazon Keyspaces.
You can use traditional username/password authentication or the SigV4 plugin; a sketch of the username/password option appears at the end of this answer.
<!-- https://mvnrepository.com/artifact/software.aws.mcs/aws-sigv4-auth-cassandra-java-driver-plugin -->
<dependency>
    <groupId>software.aws.mcs</groupId>
    <artifactId>aws-sigv4-auth-cassandra-java-driver-plugin_3</artifactId>
    <version>3.0.3</version>
</dependency>
Code sample that connects with the 3.x driver and reads from Amazon Keyspaces:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import software.aws.mcs.auth.SigV4AuthProvider;

import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.List;

public class OrderFetcher {
    public final static String TABLE_FORMAT = "%-25s%s\n";
    public final static int KEYSPACES_PORT = 9142;

    public static Cluster connectToCluster(String region, List<InetSocketAddress> contactPoints) {
        SigV4AuthProvider provider = new SigV4AuthProvider(region);

        return Cluster.builder()
                .addContactPointsWithPorts(contactPoints)
                .withPort(KEYSPACES_PORT)
                .withAuthProvider(provider)
                .withSSL()
                .build();
    }

    public static void main(String[] args) {
        if (args.length != 3) {
            System.err.println("Usage: OrderFetcher <region> <endpoint> <customer ID>");
            System.exit(1);
        }

        String region = args[0];
        List<InetSocketAddress> contactPoints =
                Collections.singletonList(new InetSocketAddress(args[1], KEYSPACES_PORT));

        try (Cluster cluster = connectToCluster(region, contactPoints)) {
            Session session = cluster.connect();

            // Use a prepared query for quoting
            PreparedStatement prepared = session.prepare("select * from acme.orders where customer_id = ?");

            // We use execute to send a query to Cassandra. This returns a ResultSet,
            // which is essentially a collection of Row objects.
            ResultSet rs = session.execute(prepared.bind(args[2]));

            // Print the header
            System.out.printf(TABLE_FORMAT, "Date", "Order Id");

            for (Row row : rs) {
                System.out.printf(TABLE_FORMAT, row.getTimestamp("order_timestamp"), row.getUUID("order_id"));
            }
        }
    }
}
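A note on TLS: Amazon Keyspaces only accepts encrypted connections, and withSSL() with no arguments relies on the JVM's default SSLContext. Depending on which CA certificates your JDK ships with, you may also need a truststore containing the Starfield root certificate, as described in the Keyspaces documentation. A minimal sketch, assuming such a truststore has already been created with keytool; the class name, path, and password below are placeholders:

public class KeyspacesTlsConfig {
    // Call this before building the Cluster. withSSL() with no arguments uses
    // the JVM's default SSLContext, which reads these standard system properties.
    public static void configureTrustStore() {
        System.setProperty("javax.net.ssl.trustStore", "/path/to/cassandra_truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    }
}

The same properties can instead be passed on the command line with -Djavax.net.ssl.trustStore=... when starting the application.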
Here is a link to an example on GitHub.
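If you would rather not pull in the SigV4 plugin, the username/password route mentioned above uses the 3.x driver's built-in PlainTextAuthProvider with Keyspaces service-specific credentials generated for an IAM user. A minimal sketch, assuming such credentials exist; the class name, endpoint, username, and password are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PlainTextAuthProvider;
import com.datastax.driver.core.Session;

public class PlainTextConnector {
    public static void main(String[] args) {
        // Service-specific credentials are generated per IAM user in the AWS console.
        PlainTextAuthProvider auth =
                new PlainTextAuthProvider("alice-at-111122223333", "SERVICE-SPECIFIC-PASSWORD");

        try (Cluster cluster = Cluster.builder()
                .addContactPoint("cassandra.us-east-1.amazonaws.com") // regional endpoint
                .withPort(9142)          // Keyspaces only listens on 9142
                .withAuthProvider(auth)
                .withSSL()               // TLS is still required
                .build()) {
            Session session = cluster.connect();
            // Use the session exactly as you would against a self-managed cluster.
            System.out.println("Connected to " + cluster.getClusterName());
        }
    }
}

Everything else in the OrderFetcher example stays the same; only the auth provider changes.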
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source |
---|---|
Solution 1 | MikeJPR |