Thursday, September 29, 2016

Spring-Reactive samples - Mono and Single

This is just a small learning from my previous post, where I had tried out Spring's native support for reactive programming.

Just to quickly recap, my objective was to develop a reactive service which takes in a request which looks like this:

{
  "id": 1,
  "delay_by": 2000,
  "payload": "Hello",
  "throw_exception": false
}

and returns a response along these lines:

{
  "id": "1",
  "received": "Hello",
  "payload": "Response Message"
}

I had demonstrated this in two of the ways that Spring's upcoming reactive model supports - using the Reactor-Core Flux type as a return type, and using the Rx-java Observable type.

However the catch with these types is that the response would look something like this:

[{"id":"1","received":"Hello","payload":"From RxJavaService"}]

Essentially an array - and the reason is obvious: Flux and Observable represent zero or more asynchronous emissions, so the Spring Reactive Web framework has to represent such a result as an array.
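
For reference, the Flux-based signature from the previous post would have looked roughly like this (a sketch - the service method name here is an assumption):

@RequestMapping(path = "/handleMessageReactor", method = RequestMethod.POST)
public Flux<MessageAcknowledgement> handleMessage(@RequestBody Message message) {
  return this.aService.handleMessage(message);
}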

The fix to return the expected json is to return a type which represents exactly one value - the Mono in Reactor-Core, or the Single in Rx-java. Both these types are as capable as their multi-valued counterparts, providing operators which combine and transform their elements.

So with this change the controller signature with Mono looks like this:

@RequestMapping(path = "/handleMessageReactor", method = RequestMethod.POST)
public Mono<MessageAcknowledgement> handleMessage(@RequestBody Message message) {
  return this.aService.handleMessageMono(message);
}


and with Single like this:

@RequestMapping(path = "/handleMessageRxJava", method = RequestMethod.POST)
public Single<MessageAcknowledgement> handleMessage(@RequestBody Message message) {
  return this.aService.handleMessageSingle(message);
}
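
To round this out, here is a sketch of what the Mono-returning service method might look like - an assumption on my part (the actual implementation is in the repo), with the Message accessors inferred from the request json shown earlier:

// A minimal sketch - assumes Message exposes getId(), getPayload(),
// getDelayBy() and isThrowException() along the lines of the request json
public Mono<MessageAcknowledgement> handleMessageMono(Message message) {
  return Mono.fromCallable(() -> {
    if (message.isThrowException()) {
      throw new IllegalStateException("Throwing a deliberate exception!");
    }
    return new MessageAcknowledgement(message.getId(), message.getPayload(), "Response Message");
  }).delaySubscription(Duration.ofMillis(message.getDelayBy())); // honor the "delay_by" field
}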

I have the sample code available in my github repo.

Saturday, September 10, 2016

RabbitMQ retries using Spring Integration

I recently read about an approach to retry with RabbitMQ here and wanted to try a similar approach with Spring Integration, which provides an awesome set of integration abstractions.

TL;DR - the problem being solved is to retry a message (in case of failures in processing) a few times, with a large delay between retries (say 10 minutes or more). The approach makes use of the RabbitMQ support for Dead Letter Exchanges and looks something like this:




The gist of the flow is:
1. A work dispatcher creates "Work Unit"(s) and sends them to a RabbitMQ queue via an exchange.
2. The work queue is set up with a Dead Letter Exchange. If the message processing fails for any reason, the "Work Unit" ends up in the Work Unit Dead Letter Queue.
3. The Work Unit Dead Letter Queue is in turn set up with the Work Unit exchange as its Dead Letter Exchange, creating a cycle. Further, the expiration of messages in the dead letter queue is set to, say, 10 minutes - once a message expires it is back again in the work unit queue.
4. To break the cycle, the processing code has to stop processing once a certain retry-count threshold is exceeded.



Implementation using Spring Integration


I have covered a straightforward happy-path flow using Spring Integration and RabbitMQ before; here I will mostly be building on top of that code.

A good part of the set-up is the configuration of the appropriate dead letter exchanges/queues, and looks like this when expressed using Spring's Java Configuration:

@Configuration
public class RabbitConfig {

    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    Exchange worksExchange() {
        return ExchangeBuilder.topicExchange("work.exchange")
                .durable()
                .build();
    }


    @Bean
    public Queue worksQueue() {
        return QueueBuilder.durable("work.queue")
                .withArgument("x-dead-letter-exchange", worksDlExchange().getName())
                .build();
    }

    @Bean
    Binding worksBinding() {
        return BindingBuilder
                .bind(worksQueue())
                .to(worksExchange()).with("#").noargs();
    }
    
    // Dead letter exchange for holding rejected work units
    @Bean
    Exchange worksDlExchange() {
        return ExchangeBuilder
                .topicExchange("work.exchange.dl")
                .durable()
                .build();
    }

    // Queue to hold dead letter messages from worksQueue
    @Bean
    public Queue worksDLQueue() {
        return QueueBuilder
                .durable("works.queue.dl")
                .withArgument("x-message-ttl", 20000)
                .withArgument("x-dead-letter-exchange", worksExchange().getName())
                .build();
    }

    @Bean
    Binding worksDlBinding() {
        return BindingBuilder
                .bind(worksDLQueue())
                .to(worksDlExchange()).with("#")
                .noargs();
    }
    ...
}
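
The elided part of this configuration also holds the listener container that the consuming flow below hangs off of; a minimal sketch of such a bean, assuming a SimpleMessageListenerContainer (the actual bean is in the sample code):

@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    // Assumption: a simple container on the work queue with a few concurrent consumers
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(3);
    return container;
}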


Note that here I have set the TTL of the dead letter queue to 20 seconds, which means that after 20 seconds a failed message will be back in the processing queue. Once this set-up is in place and the appropriate structures are created in RabbitMQ, the consuming part of the code, expressed using the Spring Integration Java DSL, looks like this:

@Configuration
public class WorkInbound {

    @Autowired
    private RabbitConfig rabbitConfig;

    @Bean
    public IntegrationFlow inboundFlow() {
        return IntegrationFlows.from(
                Amqp.inboundAdapter(rabbitConfig.workListenerContainer()))
                .transform(Transformers.fromJson(WorkUnit.class))
                .log()
                .filter("(headers['x-death'] != null) ? headers['x-death'][0].count <= 3: true", f -> f.discardChannel("nullChannel"))
                .handle("workHandler", "process")
                .get();
    }

}

Most of the retry logic here is handled by the RabbitMQ infrastructure; the only change is to break the cycle by explicitly discarding the message after a certain number of retries - the filter above lets a message through only while its retry count is at most 3. This break is expressed as a filter, looking at the "x-death" header that RabbitMQ adds to the message once it is sent to the Dead Letter Exchange. The filter is admittedly a little ugly - it can likely be expressed a little better in Java code, as sketched below.
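
A rough sketch of the same check in Java - this is my assumption of how the SpEL expression translates, given that the "x-death" header comes through as a list of maps, with "count" held as a Long:

@Bean
public IntegrationFlow inboundFlowWithJavaFilter() {
    return IntegrationFlows.from(
            Amqp.inboundAdapter(rabbitConfig.workListenerContainer()))
            .transform(Transformers.fromJson(WorkUnit.class))
            .log()
            .filter(Message.class, m -> {
                // "x-death" is absent on the first delivery and holds the
                // dead-letter history on subsequent redeliveries
                List<Map<String, ?>> xDeath = (List<Map<String, ?>>) m.getHeaders().get("x-death");
                return xDeath == null || ((Long) xDeath.get(0).get("count")) <= 3L;
            }, f -> f.discardChannel("nullChannel"))
            .handle("workHandler", "process")
            .get();
}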



One more thing to note is that the retry logic could have been expressed in-process using Spring Integration; however, I wanted to investigate a flow where the retry delays can be high (say 15 to 20 minutes), which would not work well in-process and is also not cluster safe, as I want any instance of the application to potentially handle the retry of a message.

If you want to explore further, do try the sample at my github repo - https://github.com/bijukunjummen/si-dsl-rabbit-sample

Reference:

Retry With RabbitMQ: http://dev.venntro.com/2014/07/back-off-and-retry-with-rabbitmq

Wednesday, August 24, 2016

Integrating with RabbitMQ using Spring Cloud Stream

In my previous post I wrote about a very simple integration scenario between two systems - one generating a work unit and another processing that work unit - and how Spring Integration makes such an integration very easy.



Here I will demonstrate how this integration scenario can be simplified even further using Spring Cloud Stream.

I have the sample code available here - the right maven dependencies for Spring Cloud Stream are available in the pom.xml.

Producer


So again, starting with the producer, which is responsible for generating the work units. All that needs to be done code-wise to send messages to RabbitMQ is to have a java configuration along these lines:

@Configuration
@EnableBinding(WorkUnitsSource.class)
@IntegrationComponentScan
public class IntegrationConfiguration {}

This looks deceptively simple but does a lot under the covers. From what I can understand and glean from the documentation, this configuration triggers the following:

1. Spring Integration message channels based on the classes that are bound to the @EnableBinding annotation are created. The WorkUnitsSource class above is the definition of a custom channel called "worksChannel" and looks like this:

import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;

public interface WorkUnitsSource {

    String CHANNEL_NAME = "worksChannel";

    @Output
    MessageChannel worksChannel();

}

2. Based on which "binder" implementation is available at runtime (say RabbitMQ, Kafka, Redis, Gemfire), the channel from the previous step is connected to the appropriate structures in that system - so for example, since I want my "worksChannel" to in turn send messages to RabbitMQ, Spring Cloud Stream takes care of automatically creating a topic exchange in RabbitMQ.

I wanted some further customizations in terms of how the data is sent to RabbitMQ - specifically, I wanted my domain objects to be serialized to json before being sent across, and I wanted to specify the name of the RabbitMQ exchange that the payload is sent to. This is controlled by configuration that can be attached to the channel the following way, using a yaml file:

spring:
  cloud:
    stream:
      bindings:
        worksChannel:
          destination: work.exchange
          contentType: application/json
          group: testgroup

One final detail is a way for the rest of the application to interact with Spring Cloud Stream; this can be done directly in Spring Integration by defining a message gateway:

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import works.service.domain.WorkUnit;

@MessagingGateway
public interface WorkUnitGateway {
 @Gateway(requestChannel = WorkUnitsSource.CHANNEL_NAME)
 void generate(WorkUnit workUnit);

}
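
With the gateway in place, dispatching a work unit from anywhere in the application becomes a plain method call, along these lines (a small usage sketch - the WorkUnit constructor arguments follow the sample's domain):

WorkUnit sampleWorkUnit = new WorkUnit(UUID.randomUUID().toString(), "A sample definition");
workUnitGateway.generate(sampleWorkUnit);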

That is essentially it - Spring Cloud Stream would now wire up the entire Spring Integration flow and create the appropriate structures in RabbitMQ.


Consumer


Similar to the producer, first I want to define the channel called "worksChannel" which would handle the incoming messages from RabbitMQ:

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

public interface WorkUnitsSink {
    String CHANNEL_NAME = "worksChannel";

    @Input
    SubscribableChannel worksChannel();
}

and let Spring Cloud Stream create the channels and RabbitMQ bindings based on this definition:

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBinding(WorkUnitsSink.class)
public class IntegrationConfiguration {}

To process the messages, Spring Cloud Stream provides a listener which can be created the following way:

@Service
public class WorkHandler {
    private static final Logger LOGGER = LoggerFactory.getLogger(WorkHandler.class);

    @StreamListener(WorkUnitsSink.CHANNEL_NAME)
    public void process(WorkUnit workUnit) {
        LOGGER.info("Handling work unit - id: {}, definition: {}", workUnit.getId(), workUnit.getDefinition());
    }
}

And finally the configuration which connects this channel to the RabbitMQ infrastructure expressed in a yaml file:

spring:
  cloud:
    stream:
      bindings:
        worksChannel:
          destination: work.exchange
          group: testgroup


Now if the producer and any number of consumers are started up, a message sent via the producer would be sent to a RabbitMQ topic exchange as json, retrieved by a consumer, deserialized back to an object and passed on to the work processor.

A good amount of the boilerplate involved in creating the RabbitMQ infrastructure is now handled purely by convention by the Spring Cloud Stream libraries. Though Spring Cloud Stream attempts to provide a facade over raw Spring Integration, it is useful to have a basic knowledge of Spring Integration to use Spring Cloud Stream effectively.

The sample described here is available at my github repository.

Monday, August 22, 2016

Integrating with Rabbit MQ using Spring Integration Java DSL

I recently attended the Spring One conference 2016 in Las Vegas and had the good fortune to see, from near and far, some of the people that I have admired for a long time in the software world. I personally met two of them, who have actually merged some of my minor Spring Integration contributions from a few years ago - Gary Russell and Artem Bilan - and they inspired me to look again at Spring Integration, which I had not used for a while.

I was once more reminded of how Spring Integration makes any complex enterprise integration scenario look easy. I am happy to see that the Spring Integration Java DSL is now fully integrated into the Spring Integration umbrella, along with higher level abstractions like Spring Cloud Stream (introductions thanks to my good friend and a contributor to that project, Soby Chacko), which make some of the message driven scenarios even easier.

In this post I am just revisiting a very simple integration scenario with RabbitMQ, and in a later post will re-implement it using Spring Cloud Stream.

Consider a scenario where two services are talking to each other via a RabbitMQ broker in between, one of them generating some kind of a work, the other processing this work.



Producer


The Work unit producing/dispatching part can be expressed in code using Spring Integration Java DSL the following way:


@Configuration
public class WorksOutbound {

    @Autowired
    private RabbitConfig rabbitConfig;

    @Bean
    public IntegrationFlow toOutboundQueueFlow() {
        return IntegrationFlows.from("worksChannel")
                .transform(Transformers.toJson())
                .handle(Amqp.outboundAdapter(rabbitConfig.worksRabbitTemplate()))
                .get();
    }
}

This is eminently readable - the flow starts by reading a message off a channel called "worksChannel", transforms the message into json, and dispatches it off using an outbound channel adapter to a RabbitMQ exchange. Now, how does the message get to the channel called "worksChannel"? I have configured it via a messaging gateway, an entry point into the Spring Integration world:

@MessagingGateway
public interface WorkUnitGateway {
 @Gateway(requestChannel = "worksChannel")
 void generate(WorkUnit workUnit);

}

So now if a java client wanted to dispatch a "work unit" to RabbitMQ, the call would look like this:

WorkUnit sampleWorkUnit = new WorkUnit(UUID.randomUUID().toString(), definition);
workUnitGateway.generate(sampleWorkUnit);

I have brushed over a few things here - specifically the RabbitMQ configuration; that is run of the mill, however, and is available here.
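
For completeness, a minimal sketch of what the worksRabbitTemplate() bean might look like - an assumption on my part, the actual configuration is in the linked code, and the routing key here is purely hypothetical:

@Bean
public RabbitTemplate worksRabbitTemplate() {
    // Assumption: an autowired ConnectionFactory, publishing to the sample's exchange
    RabbitTemplate template = new RabbitTemplate(rabbitConnectionFactory);
    template.setExchange("work.exchange");
    template.setRoutingKey("work.unit"); // hypothetical routing key
    return template;
}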

Consumer

Along the lines of the producer, a consumer's flow would start by receiving a message from the RabbitMQ queue, transforming it to a domain model and then processing the message, expressed using the Spring Integration Java DSL the following way:

@Configuration
public class WorkInbound {

    @Autowired
    private RabbitConfig rabbitConfig;

    @Autowired
    private ConnectionFactory connectionFactory;

    @Bean
    public IntegrationFlow inboundFlow() {
        return IntegrationFlows.from(
                Amqp.inboundAdapter(connectionFactory, rabbitConfig.worksQueue()).concurrentConsumers(3))
                .transform(Transformers.fromJson(WorkUnit.class))
                .handle("workHandler", "process")
                .get();
    }
}

The code should be intuitive - the workHandler above is a simple Java pojo, doing the very important job of just logging the payload, and looks like this:

@Service
public class WorkHandler {
    private static final Logger LOGGER = LoggerFactory.getLogger(WorkHandler.class);

    public void process(WorkUnit workUnit) {
        LOGGER.info("Handling work unit - id: {}, definition: {}", workUnit.getId(), workUnit.getDefinition());
    }
}


That is essentially it - Spring Integration provides an awesome facade over what would have been fairly complicated code had it been attempted using straight Java and the raw RabbitMQ libraries. Spring Cloud Stream makes this entire set-up even simpler and will be the topic of a future post.


I have posted this entire code at my github repo if you are interested in taking this for a spin.

Saturday, August 6, 2016

No downtime deployment using "Yet another" Cloud Foundry Gradle plugin

I have been trying my hand at writing a gradle plugin for deploying applications to Cloud Foundry and wrote about this plugin in my previous post. I have now enhanced this plugin with support for no-downtime deploys into Cloud Foundry using two approaches - an Autopilot style deployment and a more commonly used Blue-Green style deployment.

To jump into the meat of the plugin, once it is configured cleanly all you have to do is the following:

For an autopilot style

./gradlew cf-push-autopilot

and for a Blue-Green deployment:

./gradlew cf-push-blue-green

and the plugin tasks would take care of the rest.

What is being solved


If you use the Cloud Foundry CLI to push an application to Cloud Foundry, the existing instances of the application are stopped, replaced and started up. This introduces a downtime for the application until the new instance is up. Just to demonstrate this behavior, the following graph represents steady traffic to a website while an application is pushed to Cloud Foundry - the 30 second blip is when the new app is being started up.


Autopilot and Blue-Green style deployments


Autopilot and Blue-Green styles of deployment fix the issue by carefully orchestrating the deployment of an application such that the external facing route always points to a working version of the application.

The plugin now natively performs all the steps needed for these two styles of no-downtime deployments.

Here is how the same graph looks with an Autopilot style deployment using the plugin - note that there is a slightly higher response time around the time the new application switches in. Once primed though, the response times smooth out:



and with a Blue-Green style deployment using this plugin:


References:


1. The details about how to install and configure the plugin are available here - https://github.com/pivotalservices/ya-cf-app-gradle-plugin

2. A sample application configured with the plugin is here - https://github.com/bijukunjummen/cf-show-env

3. The load test using gatling is available here

Tuesday, July 19, 2016

Introducing "Yet another" Cloud foundry Gradle plugin

In the process of working on an automated Jenkins pipeline for deploying a Cloud Foundry application with two of my colleagues (thanks Mark Alston and Dave Malone!), I decided to try my hand at writing a Gradle plugin to perform some of the tasks that are typically done using the command line Cloud Foundry client.

Introducing the totally unimaginatively named "ya-cf-app-gradle-plugin", with a set of gradle tasks (dare I say opinionated!) that should help automate some of the routine steps involved in deploying a java application to a Cloud Foundry environment. The "ya", or yet-another, part is because this is just a stand-in plugin - the authoritative plugin for Cloud Foundry will ultimately reside with the excellent CF-Java-Client project.


I have provided an extensive README with the project's documentation that should help in getting started with the plugin; the tasks should be fairly intuitive if you have previously worked with the CF CLI.

Just as an example, once the gradle plugin is cleanly added into the build script, the following gradle tasks are available when listed by running the "./gradlew tasks" command:





All the tasks work off a configuration provided the following way in a cfConfig block in the buildscript:

apply plugin: 'cf-app'

cfConfig {
 //CF Details
 ccHost = "api.local.pcfdev.io"
 ccUser = "admin"
 ccPassword = "admin"
 org = "pcfdev-org"
 space = "pcfdev-space"

 //App Details
 name = "cf-show-env"
 hostName = "cf-show-env"
 filePath = "build/libs/cf-show-env-0.1.2-SNAPSHOT.jar"
 path = ""
 domain = "local.pcfdev.io"
 instances = 2
 memory = 512

 //Env and services
 buildpack = "https://github.com/cloudfoundry/java-buildpack.git"
 environment = ["JAVA_OPTS": "-Djava.security.egd=file:/dev/./urandom", "SPRING_PROFILES_ACTIVE": "cloud"]
 services  = ["mydb"]
}

Any overrides on top of the base configuration provided this way can be done by specifying gradle properties with a "cf.*" pattern. For example, a normal push of an application would look like this:

./gradlew cf-push

and a push with the name of the application and the host name overridden would look like this:

./gradlew cf-push -Pcf.name=Green -Pcf.hostName=demo-time-temp


All of the tasks follow the exact same pattern, depending on the cfConfig block as the authoritative source of properties along with the command line overrides. There is one task that can be used for retrieving some of the details of an app in Cloud Foundry - "cf-get-app-detail". Say, after deploying a canary instance of an app, you wanted to run a quick test against it; the task usage would look along these lines - a structure "project.cfConfig" is populated with the app details once the task is successfully invoked:

task acceptanceTest(type: Test, dependsOn: "cf-get-app-detail")  {
 doFirst() {
  systemProperty "url", "https://${project.cfConfig.applicationDetail.urls[0]}"
 }
 useJUnit {
  includeCategories 'test.AcceptanceTest'
 }
}


References:


1. The plugin is built on top of the excellent CF-Java-Client project
2. I have borrowed a lot of ideas from gradle-cf-plugin, but this is more or less a clean-room implementation
3. Here is a sample project which makes use of the plugin.

Tuesday, July 5, 2016

Spring Cloud Zuul - Writing a Filter

The Netflix OSS project Zuul serves as a gateway to backend services and provides support for adding in edge features like security and routing. In the Zuul world, specific edge features are provided by components called Zuul Filters, and writing such a filter for a Spring Cloud based project is very simple. A good reference for adding a filter is here. Here I wanted to demonstrate two small features - deciding whether a filter should act on a request, and adding a header before forwarding the request.

Writing a Zuul Filter


Writing a Zuul Filter is very easy with Spring Cloud - all we need to do is add a Spring bean which extends ZuulFilter, so for this example it would look something like this:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.stereotype.Service;


@Service
public class PayloadTraceFilter extends ZuulFilter {

    private static final String HEADER="payload.trace";

    @Override
    public String filterType() {
        return "pre";
    }

    @Override
    public int filterOrder() {
        return 999;
    }

    @Override
    public boolean shouldFilter() {
      ....   
    }

    @Override
    public Object run() {
     ....
    }
}

Some high level details of this implementation: it has been marked as a filter type of "pre", which means that the filter is called before the request is dispatched to the backend service; filterOrder determines when this specific filter is called in the chain of filters; shouldFilter determines if this filter is invoked at all for a given request; and run contains the logic for the filter.

So to my first consideration - whether this filter should act on the flow at all. This can be decided on a request by request basis; my logic is very simple - if the request uri starts with /samplesvc then the filter should act on the request.

@Override
public boolean shouldFilter() {
    RequestContext ctx = RequestContext.getCurrentContext();
    String requestUri = ctx.getRequest().getRequestURI();
    return requestUri.startsWith("/samplesvc");
}

and the second consideration on modifying the request headers to the backend service:

@Override
public Object run() {
    RequestContext ctx = RequestContext.getCurrentContext();
    ctx.addZuulRequestHeader("payload.trace", "true");
    return null;
}

A backing service getting such a request can look for the header and act accordingly - say, in this specific case, looking at the "payload.trace" header and deciding to log the incoming message:

@RequestMapping(value = "/message", method = RequestMethod.POST)
public Resource<MessageAcknowledgement> pongMessage(@RequestBody Message input, @RequestHeader("payload.trace") boolean tracePayload) {
    if (tracePayload) {
        LOGGER.info("Received Payload: {}", input.getPayload());
    }
....

Conclusion

As demonstrated here, Spring Cloud really makes it simple to add Zuul filters for any edge needs. If you want to explore this sample a little further, I have sample projects available in my github repo.