Wednesday, June 22, 2016

Spring Cloud Zuul Support - Configuring Timeouts

Spring Cloud provides support for Netflix Zuul - a toolkit for creating edge services with routing and filtering capabilities.

Zuul proxy support is very comprehensively documented at the Spring Cloud site. My objective here is to focus on a small set of properties relating to handling timeouts when dealing with proxied services.

Target Service and Gateway


To study timeouts better, I have created a sample service (code available here) which takes in a configurable "delay" parameter as part of the request body and delays its response by that amount.

A sample request with a 5 second delay:

{
  "id": "1",
  "payload": "Hello",
  "delay_by": 5000,
  "throw_exception": false
}


and an expected response:

{
  "id": "1",
  "received": "Hello",
  "payload": "Hello!"
}


This service is registered with an id of "sample-svc" in Eureka, and a Spring Cloud Zuul proxy on top of it has the following configuration:

zuul:
  ignoredServices: '*'
  routes:
    samplesvc:
      path: /samplesvc/**
      stripPrefix: true
      serviceId: sample-svc

Essentially: forward all requests to the /samplesvc/ URI to a service resolved with the name "sample-svc" via Eureka.

I also have a UI on top of the gateway to make testing with different delays easier.


Service Delay Tests


The gateway behaves without any timeout related issues when a low "delay" parameter is passed to the service call; however, once the delay is raised to around 1 to 1.5 seconds, the gateway times out.

The reason is that if the gateway is set up to use Eureka, it uses the Netflix Ribbon component to make the actual call. Further, the Ribbon call is wrapped within Hystrix to ensure that the call remains fault tolerant. The first timeout we are hitting is from Hystrix, whose default execution timeout is a fairly low 1 second; tweaking the Hystrix settings should get us past this first timeout.

hystrix:
  command:
    sample-svc:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 15000

Note that the Hystrix "Command Key" used for configuration is the name of the service as registered in Eureka.

This may be a little too fine grained for this specific Zuul call; if you are okay with tweaking it across the board, then configuration along these lines should do the job:

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 15000

With this change, a request to the service via the gateway with a delay of up to 5 seconds will now go through without any issues. Above 5 seconds, though, we would get another timeout: we are now hitting Ribbon's read timeout, which again can be configured in a fine grained way for the specific service by tweaking configuration which looks like this:

sample-svc:
  ribbon:
    ReadTimeout: 15000
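
If, like the Hystrix tweak, the timeout should apply across all the proxied services, the Ribbon property can - to my understanding - be set at the default level along these lines:

ribbon:
  ReadTimeout: 15000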


With both these timeout tweaks in place, the gateway based call should now go through.

Conclusion


The purpose was not to show ways of setting arbitrarily high timeout values, but to show how to set values that may be more appropriate for your applications. Sensible timeouts are very important to ensure that bad service behaviors don't cascade up to the users. One thing to note is that if the gateway is configured without Ribbon and Eureka, by specifying a direct URL to a service, then these Hystrix and Ribbon timeout settings are not relevant at all.
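
In that case Zuul uses its own host-level HTTP client for the routing, and the equivalent knobs - assuming the standard zuul.host properties of Spring Cloud Netflix - would look along these lines instead:

zuul:
  host:
    connect-timeout-millis: 5000
    socket-timeout-millis: 15000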

If you are interested in exploring this further, the samples are available here.

Tuesday, June 7, 2016

Spring-Reactive samples

Spring-Reactive aims to bring reactive programming support to Spring based projects, and this is expected to be available in the Spring 5 timeframe. My intention here is to exercise some of the very basic signatures for REST endpoints with this model.

Before I go ahead let me acknowledge that this entire sample is completely based on the samples which Sébastien Deleuze has put together here - https://github.com/sdeleuze/spring-reactive-playground

I wanted to consider three examples: first, a case where the existing Java 8 CompletableFuture is returned as a type; second, where RxJava's Observable is returned as a type; and third, with Spring Reactor Core's Flux type.

Expected Protocol

The structure of the request and response messages handled by each of the three services is along these lines; all of them will take in a request which looks like this:

{
  "id": 1,
  "delay_by": 2000,
  "payload": "Hello",
  "throw_exception": false
}


The delay_by attribute delays the response and throw_exception makes the call error out. A normal response looks like the following:

{
  "id": "1",
  "received": "Hello",
  "payload": "Response Message"
}

I will be ignoring the exceptions for this post.

CompletableFuture as a return type


Consider a service which returns a Java 8 CompletableFuture as a return type:

public CompletableFuture<MessageAcknowledgement> handleMessage(Message message) {
 // the task runs on the injected futureExecutor and completes after the requested delay
 return CompletableFuture.supplyAsync(() -> {
  Util.delay(message.getDelayBy());
  return new MessageAcknowledgement(message.getId(), message.getPayload(), "data from CompletableFutureService");
 }, futureExecutor);
}

The method signature of a Controller which calls this service looks like this now:

@RestController
public class CompletableFutureController {

 private final CompletableFutureService aService;

 @Autowired
 public CompletableFutureController(CompletableFutureService aService) {
  this.aService = aService;
 }

 @RequestMapping(path = "/handleMessageFuture", method = RequestMethod.POST)
 public CompletableFuture<MessageAcknowledgement> handleMessage(@RequestBody Message message) {
  return this.aService.handleMessage(message);
 }

}

When the CompletableFuture completes, the framework will ensure that the response is marshalled back appropriately.

RxJava Observable as a return type

Consider a service which returns an RxJava Observable as a return type:

public Observable<MessageAcknowledgement> handleMessage(Message message) {
 logger.info("About to Acknowledge");
 return Observable.just(message)
   .delay(message.getDelayBy(), TimeUnit.MILLISECONDS)
   .flatMap(msg -> {
    if (msg.isThrowException()) {
     return Observable.error(new IllegalStateException("Throwing a deliberate exception!"));
    }
    return Observable.just(new MessageAcknowledgement(message.getId(), message.getPayload(), "From RxJavaService"));
   });
}

The controller invoking such a service can now directly return the Observable, and the framework will ensure that once all the items have been emitted, the response is marshalled correctly.

@RestController
public class RxJavaController {

 private final RxJavaService aService;

 @Autowired
 public RxJavaController(RxJavaService aService) {
  this.aService = aService;
 }

 @RequestMapping(path = "/handleMessageRxJava", method = RequestMethod.POST)
 public Observable<MessageAcknowledgement> handleMessage(@RequestBody Message message) {
  System.out.println("Got Message..");
  return this.aService.handleMessage(message);
 }

}

Note that since Observable represents a stream of 0 to many items, this time around the response is a JSON array.
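
For illustration, the single acknowledgement emitted for a request would come back wrapped in an array, along these lines:

[
  {
    "id": "1",
    "received": "Hello",
    "payload": "From RxJavaService"
  }
]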


Spring Reactor Core Flux as a return type


Finally, if the return type is a Flux, the framework ensures that the response is handled cleanly. The service is along these lines:

public Flux<MessageAcknowledgement> handleMessage(Message message) {
 return Flux.just(message)
   .delay(Duration.ofMillis(message.getDelayBy()))
   .map(msg -> Tuple.of(msg, msg.isThrowException()))
   .flatMap(tup -> {
    if (tup.getT2()) {
     return Flux.error(new IllegalStateException("Throwing a deliberate Exception!"));
    }
    Message msg = tup.getT1();
    return Flux.just(new MessageAcknowledgement(msg.getId(), msg.getPayload(), "Response from ReactorService"));
   });
}

and a controller making use of such a service:

@RestController
public class ReactorController {

 private final ReactorService aService;

 @Autowired
 public ReactorController(ReactorService aService) {
  this.aService = aService;
 }

 @RequestMapping(path = "/handleMessageReactor", method = RequestMethod.POST)
 public Flux<MessageAcknowledgement> handleMessage(@RequestBody Message message) {
  return this.aService.handleMessage(message);
 }

}

Conclusion

This is just a sampling of the kind of return types that the Spring Reactive project supports; the set of possible return types is far bigger than this - here is a far more comprehensive example.

I look forward to when the reactive programming model becomes available in the core Spring framework.

The samples presented in this blog post are available at my github repository.

Saturday, May 28, 2016

Cloud Foundry Java Client - Streaming events

Cloud Foundry Java Client provides Java based bindings for interacting with a running Cloud Foundry instance. One of the neat things about this project is that it has embraced Reactive Streams based APIs for its method signatures, specifically using the Reactor implementation, which is especially useful when consuming streaming data.

In this post I want to demonstrate a specific use case where this library really shines - in streaming events from Cloud Foundry.

Loggregator is the subsystem in Cloud Foundry responsible for aggregating all the logs produced within the system and provides ways for this information to be streamed out to external systems. The "Traffic Controller" component within Loggregator exposes a Websocket based endpoint streaming out these events; the Cloud Foundry Java Client abstracts the underlying websocket client connection details and provides a neat way to consume this information.


As a pre-requisite, you will need a running instance of Cloud Foundry to try out the sample, and the best way to get one working locally is to use PCF Dev.

Assuming that you have a running instance, the way to connect to this instance from code using the cf-java-client library is along the following lines:

SpringCloudFoundryClient cfClient = SpringCloudFoundryClient.builder()
                .host("api.local.pcfdev.io")
                .username("admin")
                .password("admin")
                .skipSslValidation(true)
                .build();

Using this, a client to the Traffic Controller can be created the following way:

DopplerClient dopplerClient = ReactorDopplerClient.builder()
      .cloudFoundryClient(cfClient)
      .build();


That is essentially it - the DopplerClient provides methods to stream the underlying events. If you are interested in all the unfiltered information (appropriately referred to as the firehose), you can do it the following way:

Flux<Event> cfEvents = this.dopplerClient.firehose(
    FirehoseRequest.builder()
      .subscriptionId(UUID.randomUUID().toString()).build());

The result is a Flux type from the Reactor library encapsulating the streaming data, which can be observed by attaching a subscriber - say, for a basic example, a subscriber simply logging the events to the console:

cfEvents.subscribe(e -> LOGGER.info(e.toString()));
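
One thing to note is that the Flux is lazy - nothing flows until a subscriber attaches. Also, since the firehose stream is unbounded, for a quick smoke test it can be capped and blocked on; a sketch along these lines, assuming the same Reactor API level used elsewhere in this post:

List<Event> firstTen = cfEvents
    .take(10)     // cap the otherwise unbounded stream at the first 10 events
    .toList()     // collect them into a Mono<List<Event>>
    .get();       // block for the result - appropriate only for a quick test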


However, the real power of Flux is in the fluent methods that it provides. For example, if I were interested in a subset, say just the application level logs, I would filter down the data, extract the log message and print it the following way:

cfEvents
 .filter(e -> LogMessage.class.isInstance(e))
 .map(e -> (LogMessage)e)
 .map(LogMessage::getMessage)
 .subscribe(LOGGER::info);

If you want to play with this sample, which as an added bonus has been Spring Boot enabled, I have it available in my github repository.

Tuesday, May 17, 2016

Spring Cloud with Turbine AMQP

I have previously blogged about using Spring Cloud with Turbine, a Netflix OSS library which provides a way to aggregate the information from Hystrix streams across a cluster.

The default aggregation flow is, however, pull based: Turbine requests the Hystrix stream from each instance in the cluster and aggregates it together - this tends to be far more configuration heavy.


Spring Cloud Turbine AMQP offers a different model, where each application instance pushes the metrics from Hystrix commands to Turbine through a central RabbitMQ broker.


This blog post recreates the sample that I had configured previously using Spring Cloud support for AMQP - the entire sample is available at my github repo if you just want the code.

The changes are very minor for such a powerful feature; all that an application which wants to feed its Hystrix stream to an AMQP broker has to do is add these dependencies, expressed in Maven the following way:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-hystrix</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-netflix-hystrix-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>


These dependencies would now auto-configure all the connectivity details with RabbitMQ and would start feeding the Hystrix stream data into a RabbitMQ topic.
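
With these in place, metrics from any Hystrix wrapped call in the application get published to the broker. As a purely hypothetical illustration (the class and method names here are made up, not from the sample), such a command could look like this:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

@Service
public class RemoteCallService {

    // the metrics of this command are what flow out to the RabbitMQ topic
    @HystrixCommand(fallbackMethod = "fallBack")
    public String callRemote() {
        // a call to a remote system would go here
        return "response";
    }

    public String fallBack() {
        return "fallback response";
    }
}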

Similarly on the Turbine end all that needs to be done is to specify the appropriate dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-turbine-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>

This would consume the Hystrix messages from RabbitMQ and would in turn expose an aggregated stream over an HTTP endpoint.
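
For completeness, the Turbine application itself should then just be a main class with stream aggregation enabled; assuming the @EnableTurbineStream annotation from Spring Cloud Netflix, it would look along these lines:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.turbine.stream.EnableTurbineStream;

@SpringBootApplication
@EnableTurbineStream
public class TurbineApplication {

    public static void main(String[] args) {
        SpringApplication.run(TurbineApplication.class, args);
    }
}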

Using this aggregated stream, a Hystrix dashboard can be displayed.


The best way to try out the sample is using docker-compose and the README with the sample explains how to build the relevant docker containers and start it up using docker-compose.

Monday, May 2, 2016

Approaches to binding a Spring Boot application to a service in Cloud Foundry

If you want to try out Cloud Foundry the simplest way to do that is to download the excellent PCF Dev or to create a trial account at the Pivotal Web Services site.

The rest of the post assumes that you have an installation of Cloud Foundry available to you and that you have a high level understanding of Cloud Foundry. The objective of this post is to list out the options you have for integrating your Java application with a service instance - this demo uses MySQL as a sample service to integrate with, but the approach is generic enough.

Overview of the Application

The application is a fairly simple Spring Boot app - a REST service exposing three domain types and their relationships, representing a university: Course, Teacher and Student. The domain instances are persisted to a MySQL database. The entire source code and the approaches are available at this github location if you want to jump ahead.

To try the application locally, first install a local MySQL server. On a Mac OSX box with homebrew available, the following set of commands can be run:

brew install mysql

mysql.server start
mysql -u root
# on the mysql prompt: 

CREATE USER 'univadmin'@'localhost' IDENTIFIED BY 'univadmin';
CREATE DATABASE univdb;
GRANT ALL ON univdb.* TO 'univadmin'@'localhost';

Bring up the Spring Boot app under cf-db-services-sample-auto:

mvn spring-boot:run

and an endpoint with sample data will be available at http://localhost:8080/courses.

Trying this application on Cloud Foundry

If you have an installation of PCF Dev running locally, you can try out a deployment of the application the following way:

cf api api.local.pcfdev.io --skip-ssl-validation
cf login # login with admin/admin credentials

Create a MySQL service instance:
cf create-service p-mysql 512mb mydb

and push the app! (manifest.yml provides the binding of the app to the service instance)
cf push

An endpoint should be available at http://cf-db-services-sample-auto.local.pcfdev.io/courses

Approaches to service connectivity


Now that we have an application that works locally and on a sample local Cloud Foundry, these are the approaches to connecting to a service instance.

Approach 1 - Do nothing, let the Java buildpack handle the connectivity details


This approach is demonstrated in the cf-db-services-sample-auto project. Here the connectivity to the local database has been specified using Spring Boot and looks like this:

---

spring:
  jpa:
    show-sql: true
    hibernate.ddl-auto: none
    database: MYSQL

  datasource:
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost/univdb?autoReconnect=true&useSSL=false
    username: univadmin
    password: univadmin

When this application is pushed to Cloud Foundry using the Java Buildpack, a component called java-buildpack-auto-reconfiguration is injected into the application, which reconfigures the connectivity to the service based on the runtime service binding.


Approach 2 - Disable Auto reconfiguration and use runtime properties

This approach is demonstrated in the cf-db-services-sample-props project. When a service is bound to an application, there is a set of environment properties injected into the application under the key "VCAP_SERVICES". For this specific service the entry looks something along these lines:

"VCAP_SERVICES": {
  "p-mysql": [
   {
    "credentials": {
     "hostname": "mysql.local.pcfdev.io",
     "jdbcUrl": "jdbc:mysql://mysql.local.pcfdev.io:3306/cf_456d9e1e_e31e_43bc_8e94_f8793dffdad5?user=**\u0026password=***",
     "name": "cf_456d9e1e_e31e_43bc_8e94_f8793dffdad5",
     "password": "***",
     "port": 3306,
     "uri": "mysql://***:***@mysql.local.pcfdev.io:3306/cf_456d9e1e_e31e_43bc_8e94_f8793dffdad5?reconnect=true",
     "username": "***"
    },
    "label": "p-mysql",
    "name": "mydb",
    "plan": "512mb",
    "provider": null,
    "syslog_drain_url": null,
    "tags": [
     "mysql"
    ]
   }
  ]
 }

The raw JSON is a little unwieldy to consume; however, Spring Boot automatically converts this data into a flat set of properties that looks like this:

"vcap.services.mydb.plan": "512mb",
"vcap.services.mydb.credentials.username": "******",
"vcap.services.mydb.credentials.port": "******",
"vcap.services.mydb.credentials.jdbcUrl": "******",
"vcap.services.mydb.credentials.hostname": "******",
"vcap.services.mydb.tags[0]": "mysql",
"vcap.services.mydb.credentials.uri": "******",
"vcap.services.mydb.tags": "mysql",
"vcap.services.mydb.credentials.name": "******",
"vcap.services.mydb.label": "p-mysql",
"vcap.services.mydb.syslog_drain_url": "",
"vcap.services.mydb.provider": "",
"vcap.services.mydb.credentials.password": "******",
"vcap.services.mydb.name": "mydb",

Given this, the connectivity to the database can be specified in a Spring Boot application the following way - in an application.yml file:

spring:
  datasource:
    url: ${vcap.services.mydb.credentials.jdbcUrl}
    username: ${vcap.services.mydb.credentials.username}
    password: ${vcap.services.mydb.credentials.password}

One small catch though is that since I am now explicitly taking control of specifying the service connectivity, the runtime java-buildpack-auto-reconfiguration has to be disabled, which can be done via manifest metadata:

---
applications:
  - name: cf-db-services-sample-props
    path: target/cf-db-services-sample-props-1.0.0.RELEASE.jar
    memory: 512M
    buildpack: https://github.com/cloudfoundry/java-buildpack.git
    env:
      JAVA_OPTS: -Djava.security.egd=file:/dev/./urandom
      SPRING_PROFILES_ACTIVE: cloud
      JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '{enabled: false}'
    services:
      - mydb

Approach 3 - Using Spring Cloud Connectors

The third approach is to use the excellent Spring Cloud Connectors project; a configuration which specifies the service connectivity looks like this, and is demonstrated in the cf-db-services-sample-connector sub-project:

@Configuration
@Profile("cloud")
public class CloudFoundryDatabaseConfig {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    @Bean
    public DataSource dataSource() {
        DataSource dataSource = cloud().getServiceConnector("mydb", DataSource.class, null);
        return dataSource;
    }
}
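
For this approach to work, the Spring Cloud Connectors dependencies have to be on the classpath; to the best of my recollection (the sample project's pom is the authoritative reference), the Maven entries look along these lines:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-spring-service-connector</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-cloudfoundry-connector</artifactId>
</dependency>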

Pros and Cons



These are the Pros and Cons with each of these approaches:

Approach 1 - Let Buildpack handle it

Pros:
1. Simple - the application that works locally will work without any changes on the cloud.

Cons:
1. Magical - the auto-reconfiguration may appear magical to someone who does not understand the underlying flow.
2. The number of service types supported is fairly limited - if, for example, connectivity is required to Cassandra, auto-reconfiguration will not work.

Approach 2 - Explicit Properties

Pros:
1. Fairly straightforward.
2. Follows the Spring Boot approach and uses some of the best practices of Boot based applications - for example, there is a certain order in which datasource connection pools are created, and all those best practices flow in using this approach.

Cons:
1. The auto-reconfiguration will have to be explicitly disabled.
2. Need to know what the flattened properties look like.
3. A "cloud" profile may have to be manually injected through environment properties to differentiate local development and cloud deployment.
4. Difficult to encapsulate reusable connectivity to newer service types - say Cassandra or DynamoDB.

Approach 3 - Spring Cloud Connectors

Pros:
1. Simple to integrate.
2. Easy to add in re-usable integration to newer service types.

Cons:
1. Bypasses the optimizations of the Spring Boot connection pool logic.

Conclusion


My personal preference is to go with Approach 2, as it most closely matches the Spring Boot defaults, notwithstanding the cons of the approach. If more complicated connectivity to a service is required, I will likely go with Approach 3. Your mileage may vary, though.


References

1. Scott Frederick's spring-music has been a constant guide.
2. I have generously borrowed from Ben Hale's pong_matcher_spring sample.

Tuesday, April 26, 2016

Scatter-Gather using Spring Reactor Core

I have good working experience with the Netflix Rx-Java libraries and have previously blogged about using Rx-Java and Java 8 CompletableFuture for scatter-gather kinds of problems. Here I want to explore applying the same pattern using the Spring Reactor Core library.

tldr - If you are familiar with Netflix Rx-Java, you already know Spring Reactor Core; the APIs map beautifully, and I was thrilled to see that the Spring Reactor team has diligently used marble diagrams in their Javadocs.

Another quick point is that rx.Observable maps to Flux or Mono based on whether many items are being emitted or whether one or none is being emitted.

With this, let me jump directly into the sample - I have a simple task (simulated using a delay) that is spawned a few times; I need to execute these tasks concurrently and then collect back the results, represented the following way using rx.Observable based code:

@Test
public void testScatterGather() throws Exception {
    ExecutorService executors = Executors.newFixedThreadPool(5);

    List<Observable<String>> obs =
            IntStream.range(0, 10)
                .boxed()
                .map(i -> generateTask(i, executors)).collect(Collectors.toList());


    Observable<List<String>> merged = Observable.merge(obs).toList();
    List<String> result = merged.toBlocking().first();

    logger.info(result.toString());

}

private Observable<String> generateTask(int i, ExecutorService executorService) {
    return Observable
            .<String>create(s -> {
                Util.delay(2000);
                s.onNext( i + "-test");
                s.onCompleted();
            }).subscribeOn(Schedulers.from(executorService));
}


Note that I am blocking purely for the test.

Now, similar code using Spring Reactor Core translates to the following:

@Test
public void testScatterGather() {
    ExecutorService executors = Executors.newFixedThreadPool(5);

    List<Flux<String>> fluxList = IntStream.range(0, 10)
            .boxed()
            .map(i -> generateTask(executors, i)).collect(Collectors.toList());

    Mono<List<String>> merged = Flux.merge(fluxList).toList();

    List<String> list = merged.get();

    logger.info(list.toString());


}

public Flux<String> generateTask(ExecutorService executorService, int i) {
    return Flux.<String>create(s -> {
        Util.delay(2000);
        s.onNext(i + "-test");
        s.onComplete();
    }).subscribeOn(executorService);
}

It more or less maps one to one. A small difference is in the Mono type; I personally felt that this type was a nice introduction in the reactive library, as it makes it very clear whether more than one item is being emitted vs. only a single item, and I have made use of it in the sample.
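
As a trivial illustration of that distinction, assuming the same Reactor version as the sample:

Mono<String> one = Mono.just("single value");   // a stream of 0 to 1 items
Flux<String> many = Flux.just("a", "b", "c");   // a stream of 0 to N items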

These are still early explorations for me and I look forward to getting far more familiar with this excellent library.

Monday, April 18, 2016

First steps to Spring Boot Cassandra

If you want to start using the Cassandra NoSQL database with Spring Boot, the best resources are likely the Cassandra samples available here and the Spring Data Cassandra documentation.

Here I will take a slightly more roundabout way, by actually installing Cassandra locally and running a basic test against it, and I aim to develop this sample into a more comprehensive example in the next blog post.

Setting up a local Cassandra instance

Your mileage may vary, but the simplest way to get a local install of Cassandra running is to use the Cassandra Cluster Manager (ccm) utility, available here.

ccm create test -v 2.2.5 -n 3 -s

Or a more traditional approach may simply be to download it from the Apache site. If you are following along, the version of Cassandra that worked best for me is the 2.2.5 one.

With either of the above, start up Cassandra, using ccm:

ccm start test

or with the download from the Apache site:

bin/cassandra -f

The -f flag keeps the process in the foreground; this way, stopping the process is easy once you are done with the samples.

Now connect to this Cassandra instance:

bin/cqlsh

and create a sample Cassandra keyspace:

CREATE KEYSPACE IF NOT EXISTS sample WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};

Using Spring Boot Cassandra


Along the lines of anything Spring Boot related, there is a starter available for pulling in all the relevant Cassandra dependencies, specified as a Gradle dependency here:

compile('org.springframework.boot:spring-boot-starter-data-cassandra')

This will pull in the dependencies that trigger the auto-configuration for Cassandra related instances - mainly a Cassandra Session.
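
The auto-configured Session can be pointed to the local instance and the sample keyspace through properties - a minimal application.yml, assuming the Spring Boot Cassandra property names and the defaults of the local install above, would be along these lines:

spring:
  data:
    cassandra:
      keyspace-name: sample
      contact-points: localhost
      port: 9042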

For the sample, I have defined an entity called Hotel, defined the following way:

package cass.domain;

import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.data.cassandra.mapping.Table;

import java.io.Serializable;
import java.util.UUID;

@Table("hotels")
public class Hotel implements Serializable {

    private static final long serialVersionUID = 1L;

    @PrimaryKey
    private UUID id;

    private String name;

    private String address;

    private String zip;

    private Integer version;

    public Hotel() {
    }

    public Hotel(String name) {
        this.name = name;
    }

    public UUID getId() {
        return id;
    }

    public String getName() {
        return this.name;
    }

    public String getAddress() {
        return this.address;
    }

    public String getZip() {
        return this.zip;
    }

    public void setId(UUID id) {
        this.id = id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    public void setZip(String zip) {
        this.zip = zip;
    }

    public Integer getVersion() {
        return version;
    }

    public void setVersion(Integer version) {
        this.version = version;
    }

}

and the Spring data repository to manage this entity:

import cass.domain.Hotel;
import org.springframework.data.repository.CrudRepository;

import java.util.UUID;

public interface HotelRepository extends CrudRepository<Hotel, UUID>{}


A corresponding CQL table is required to hold this entity:

CREATE TABLE IF NOT EXISTS sample.hotels (
    id UUID,
    name varchar,
    address varchar,
    zip varchar,
    version int,
    primary key((id))
);


That is essentially it - Spring Data support for Cassandra now manages all the CRUD operations of this entity, and a test looks like this:

import cass.domain.Hotel;
import cass.repository.HotelRepository;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import java.util.UUID;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = SampleCassandraApplication.class)
public class SampleCassandraApplicationTest {

 @Autowired
 private HotelRepository hotelRepository;

 @Test
 public void repositoryCrudOperations() {
  Hotel sample = sampleHotel();
  this.hotelRepository.save(sample);

  Hotel savedHotel = this.hotelRepository.findOne(sample.getId());

  assertThat(savedHotel.getName(), equalTo("Sample Hotel"));

  this.hotelRepository.delete(savedHotel);
 }

 private Hotel sampleHotel() {
  Hotel hotel = new Hotel();
  hotel.setId(UUID.randomUUID());
  hotel.setName("Sample Hotel");
  hotel.setAddress("Sample Address");
  hotel.setZip("8764");
  return hotel;
 }

}

Here is the github repo with this sample. There is not much to it yet; in the next blog post I will enhance this sample to account for the fact that in a NoSQL system it is very important to understand the distribution of data across a cluster, and to show how an entity like Hotel can be modeled for efficient CRUD operations.