Tuesday, June 27, 2017

Spring Webflux - Kotlin DSL

Spring Webflux has introduced a feature for defining functional application endpoints using a very intuitive Kotlin based DSL.

This post will simply show contrasting APIs defined using a Java based fluent API and a Kotlin based DSL.


A functional way to define a CRUD based Spring Webflux endpoint in Java would look like this:


RouterFunction<?> apis() {
    return nest(path("/messages"), nest(accept(MediaType.APPLICATION_JSON),
            route(GET("/"), messageHandler::getMessages)
                    .andRoute(POST("/"), messageHandler::addMessage)
                    .andRoute(GET("/{id}"), messageHandler::getMessage)
                    .andRoute(PUT("/{id}"), messageHandler::updateMessage)
                    .andRoute(DELETE("/{id}"), messageHandler::deleteMessage)
    ));
}

The details of the endpoint are very clear and are defined in a fluent manner with just a few keywords - route, nest and the HTTP verbs.

These endpoints can be expressed using a Kotlin based DSL (and some clever use of Kotlin extension functions) the following way:

@Bean
fun apis() = router {
    (accept(APPLICATION_JSON) and "/messages").nest {
        GET("/", messageHandler::getMessages)
        POST("/", messageHandler::addMessage)
        GET("/{id}", messageHandler::getMessage)
        PUT("/{id}", messageHandler::updateMessage)
        DELETE("/{id}", messageHandler::deleteMessage)
    }
}

I feel that this reads a little better than the Java based DSL. If the API is more complicated, as demonstrated in the excellent samples by Sébastien Deleuze with multiple levels of nesting, the Kotlin based DSL really starts to shine.


In the next post, I will delve into how this support has been implemented.

This sample is available in my github repo here.

Sunday, June 11, 2017

Spring Boot Web Slice test - Sample

Spring Boot introduced test slicing a while back and it has taken me some time to get my head around it and explore some of its nuances.

Background


The main reason to use this feature is to reduce boilerplate. Consider a controller, written in Kotlin just for variety, that looks like this:

@RestController
@RequestMapping("/users")
class UserController(
        private val userRepository: UserRepository,
        private val userResourceAssembler: UserResourceAssembler) {

    @GetMapping
    fun getUsers(pageable: Pageable, 
                 pagedResourcesAssembler: PagedResourcesAssembler<User>): PagedResources<Resource<User>> {
        val users = userRepository.findAll(pageable)
        return pagedResourcesAssembler.toResource(users, this.userResourceAssembler)
    }

    @GetMapping("/{id}")
    fun getUser(@PathVariable("id") id: Long): Resource<User> {
        return Resource(userRepository.findOne(id))
    }
}


A traditional Spring Mock MVC test to test this controller would be along these lines:

@RunWith(SpringRunner::class)
@WebAppConfiguration
@ContextConfiguration
class UserControllerTests {

    lateinit var mockMvc: MockMvc

    @Autowired
    private val wac: WebApplicationContext? = null

    @Before
    fun setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build()
    }

    @Test
    fun testGetUsers() {
        this.mockMvc.perform(get("/users")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @EnableSpringDataWebSupport
    @EnableWebMvc
    @Configuration
    class SpringConfig {

        @Bean
        fun userController(): UserController {
            return UserController(userRepository(), UserResourceAssembler())
        }

        @Bean
        fun userRepository(): UserRepository {
            val userRepository = Mockito.mock(UserRepository::class.java)
            given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                    .willAnswer({ invocation ->
                        val pageable = invocation.arguments[0] as Pageable
                        PageImpl(
                                listOf(
                                        User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                        User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                                , pageable, 10)
                    })
            return userRepository
        }
    }
}

There is a lot of ceremony involved in setting up such a test - a web application context which understands a web environment is pulled in, a configuration which sets up the Spring MVC environment needs to be created, and MockMvc, which is a handle to the testing framework, needs to be set up before each test.


Web Slice Test

A web slice test, when compared to the previous test, is far simpler - it focuses on testing the controller and hides a lot of the boilerplate code:

@RunWith(SpringRunner::class)
@WebMvcTest(UserController::class)
class UserControllerSliceTests {

    @Autowired
    lateinit var mockMvc: MockMvc

    @MockBean
    lateinit var userRepository: UserRepository

    @SpyBean
    lateinit var userResourceAssembler: UserResourceAssembler

    @Test
    fun testGetUsers() {

        this.mockMvc.perform(get("/users").param("page", "0").param("size", "1")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @Before
    fun setUp(): Unit {
        given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                .willAnswer({ invocation ->
                    val pageable = invocation.arguments[0] as Pageable
                    PageImpl(
                            listOf(
                                    User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                    User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                            , pageable, 10)
                })
    }
}

It works by creating a Spring application context that filters out anything not relevant to the web layer, loading up only the controller which has been passed into the @WebMvcTest annotation. Any dependency that the controller requires can be injected in as a mock.


Coming to some of the nuances: say I wanted to provide one of the beans myself, the way to do it is to have the test use a custom Spring configuration - for a test this is done using an inner static class annotated with @TestConfiguration, the following way:

@RunWith(SpringRunner::class)
@WebMvcTest(UserController::class)
class UserControllerSliceTests {

    @Autowired
    lateinit var mockMvc: MockMvc

    @Autowired
    lateinit var userRepository: UserRepository

    @Autowired
    lateinit var userResourceAssembler: UserResourceAssembler

    @Test
    fun testGetUsers() {

        this.mockMvc.perform(get("/users").param("page", "0").param("size", "1")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @Before
    fun setUp(): Unit {
        given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                .willAnswer({ invocation ->
                    val pageable = invocation.arguments[0] as Pageable
                    PageImpl(
                            listOf(
                                    User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                    User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                            , pageable, 10)
                })
    }

    @TestConfiguration
    class SpringConfig {

        @Bean
        fun userResourceAssembler(): UserResourceAssembler {
            return UserResourceAssembler()
        }

        @Bean
        fun userRepository(): UserRepository {
            return mock(UserRepository::class.java)
        }
    }

}


The beans from the "TestConfiguration" add on to the configuration which the slice tests depend on and don't completely replace it.

On the other hand, if I wanted to override the loading of the main "@SpringBootApplication" annotated class, then I can pass in a Spring configuration class explicitly. The catch is that I now have to take care of loading up the relevant Spring Boot features myself (enabling auto-configuration, appropriate scanning etc), so a way around it is to explicitly annotate the configuration as a Spring Boot application, the following way:

@RunWith(SpringRunner::class)
@WebMvcTest(UserController::class)
class UserControllerExplicitConfigTests {

    @Autowired
    lateinit var mockMvc: MockMvc

    @Autowired
    lateinit var userRepository: UserRepository

    @Test
    fun testGetUsers() {

        this.mockMvc.perform(get("/users").param("page", "0").param("size", "1")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @Before
    fun setUp(): Unit {
        given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                .willAnswer({ invocation ->
                    val pageable = invocation.arguments[0] as Pageable
                    PageImpl(
                            listOf(
                                    User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                    User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                            , pageable, 10)
                })
    }

    @SpringBootApplication(scanBasePackageClasses = arrayOf(UserController::class))
    @EnableSpringDataWebSupport
    class SpringConfig {

        @Bean
        fun userResourceAssembler(): UserResourceAssembler {
            return UserResourceAssembler()
        }

        @Bean
        fun userRepository(): UserRepository {
            return mock(UserRepository::class.java)
        }
    }

}


The catch though is that other tests may now end up finding this inner configuration, which is far from ideal! So my learning has been to depend on bare-minimum slice testing, and if needed extend it using @TestConfiguration.


I have a little more detailed code sample available at my github repo which has working examples to play with.

Tuesday, May 30, 2017

Ratio based routing to a legacy and a modern app - Netflix Zuul via Spring Cloud

A very common requirement when migrating from a legacy version of an application to a modernized version is to be able to migrate users slowly over to the new application. In this post I will be going over this kind of a routing layer, written using the support for Netflix Zuul through Spring Cloud. Before I go ahead, I have to acknowledge that most of the code demonstrated here was written in collaboration with the superlative Shaozhen Ding.


Scenario

I have a legacy service which has been re-engineered to a more modern version (the assumption is that as part of this migration the URIs of the endpoints have not changed). I want to migrate users slowly from the legacy application over to the modern version.


Implementation using Spring Cloud Netflix - Zuul Support


This can be easily implemented using the Netflix Zuul support in the Spring Cloud project.

Zuul is driven by a set of filters which act on a request before (pre filters), during (route filters) and after (post filters) it is proxied to a backend. Spring Cloud adds its own custom set of filters to Zuul and drives the behavior of these filters with configuration that looks like this:

zuul:
  routes:
    ratio-route:
      path: /routes/**
      strip-prefix: false

This specifies that Zuul will be handling a request to a URI with the prefix "/routes", and this prefix will not be stripped from the downstream call. This logic is encoded into a "PreDecorationFilter". My objective is to act on the request AFTER the PreDecorationFilter and specify the backend to be either the legacy or the modern version. Given this, a filter which acts on the request looks like this:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
...

@Service
public class RatioBasedRoutingZuulFilter extends ZuulFilter {

    public static final String LEGACY_APP = "legacy";
    public static final String MODERN_APP = "modern";
    
    private Random random = new Random();
    
    @Autowired
    private RatioRoutingProperties ratioRoutingProperties;

    @Override
    public String filterType() {
        return "pre";
    }

    @Override
    public int filterOrder() {
        return FilterConstants.PRE_DECORATION_FILTER_ORDER + 1;
    }

    @Override
    public boolean shouldFilter() {
        RequestContext ctx = RequestContext.getCurrentContext();
        return ctx.containsKey(SERVICE_ID_KEY)
                && ctx.get(SERVICE_ID_KEY).equals("ratio-route");
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();

        if (isTargetedToLegacy()) {
            ctx.put(SERVICE_ID_KEY, LEGACY_APP);
        } else {
            ctx.put(SERVICE_ID_KEY, MODERN_APP);
        }
        return null;
    }

    boolean isTargetedToLegacy() {
        return random.nextInt(100) < ratioRoutingProperties.getOldPercent();
    }
}
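
The RatioRoutingProperties that the filter depends on is not shown above; a minimal sketch of it, assuming a Spring Boot "@ConfigurationProperties" binding with a hypothetical "ratio.route" prefix (the actual class is in the github repo), would be along these lines:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Hypothetical sketch - binds a property like "ratio.route.old-percent: 20",
// the percentage of traffic to be sent to the legacy app
@Component
@ConfigurationProperties(prefix = "ratio.route")
public class RatioRoutingProperties {

    private int oldPercent;

    public int getOldPercent() {
        return oldPercent;
    }

    public void setOldPercent(int oldPercent) {
        this.oldPercent = oldPercent;
    }
}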

The filter is set to act after the "PreDecorationFilter" by overriding the filterOrder() method. The routing logic is fairly naive but should work for most cases. The serviceId being set in the Zuul context has a value of "legacy" or "modern" and represents a "named" Ribbon client, a handle through which the details of the backend can be set. So with Spring Cloud, my named clients are mapped to the legacy and modern versions of the app the following way:


legacy:
  ribbon:
    listOfServers: http://localhost:8081

modern:
  ribbon:
    DeploymentContextBasedVipAddresses: modern-app

Here, just for a little more variation, I am making a direct call to an endpoint for the legacy app and a call via Eureka for the modern version of the application.


If you are interested in exploring the entirety of the application, it is available in my github repo.


With the entire set-up in place, a small test with the legacy app handling 20% of the traffic confirms that the filter works effectively.

Conclusion

The Spring Cloud support for Netflix Zuul makes handling such routing scenarios a cinch and should be a good fit for any organization with these kinds of routing requirements.

Friday, May 19, 2017

Cloud Foundry Custom User Provided Services(CUPS) and tagging

Custom User Provided Services, or CUPS for short, are a way to deliver credentials for external services to an application hosted on Cloud Foundry.

Consider a set of credentials represented as a json of the following form:

{
 "hostname": "mysql-broker.local.pcfdev.io",
 "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser\u0026password=somepass",
 "name": "somedb",
 "password": "somepass",
 "port": 3306,
 "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
 "username": "someuser"
}

I could create a user provided service out of these values using the cf CLI. The following is highly bash specific, so on a different shell the mileage is likely to vary:

CUPS_PARAM=$(cat <<-'EOF'
{
 "hostname": "mysql-broker.local.pcfdev.io",
 "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser\u0026password=somepass",
 "name": "somedb",
 "password": "somepass",
 "port": 3306,
 "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
 "username": "someuser"
}
EOF
)

cf create-user-provided-service mycups -p ''"$CUPS_PARAM"''

This Custom User provided service can be bound to an app:

cf bind-service myapp mycups

and the application can retrieve the credentials via an environment variable called VCAP_SERVICES at runtime.
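
For an app bound this way, the relevant section of VCAP_SERVICES shows up under a "user-provided" label, roughly along these lines - note the empty tags array:

{"user-provided":[{
  "credentials": {
    "hostname": "mysql-broker.local.pcfdev.io",
    ...
    "username": "someuser"
  },
  "label": "user-provided",
  "name": "mycups",
  "syslog_drain_url": "",
  "tags": []
}]}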


Issue

There is one small issue with Custom User Provided services compared to normal services created via Service Brokers on Cloud Foundry - there is no simple way to tag a Custom User Provided service. Tags are sometimes useful in getting a little more information about the service bound to an app and are used extensively by Spring Cloud Connectors to connect to services.
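
As a quick illustration of where tags come into the picture, here is a minimal sketch, assuming the Spring Cloud Connectors API, of an app enumerating the services bound to it - the connectors inspect tags (among other things) to map each bound service to a typed ServiceInfo:

import org.springframework.cloud.Cloud;
import org.springframework.cloud.CloudFactory;
import org.springframework.cloud.service.ServiceInfo;

public class BoundServices {
    public static void main(String[] args) {
        // Parses VCAP_SERVICES when running on Cloud Foundry
        Cloud cloud = new CloudFactory().getCloud();
        for (ServiceInfo serviceInfo : cloud.getServiceInfos()) {
            System.out.println(serviceInfo.getId());
        }
    }
}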


Solution

I have written a custom service broker called the "CUPS tagging broker", using which a service can be created with all the parameters normally passed when creating CUPS; additionally, since it is a normal brokered service, it can be tagged.

Assuming that the "CUPS tagging broker" has been installed using the instructions here, an equivalent user provided service with tags can be created the following way, with two tags attached to the service:

cf create-service cups-tagging-service default my-cups-tagged -c ''"$CUPS_PARAM"'' -t "tag1, tag2"

If I were to bind this service to an app, the VCAP_SERVICES environment variable of the app would be along these lines:

{"cups-tagging-service":[{
  "credentials": {
    "hostname": "mysql-broker.local.pcfdev.io",
    "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser&password=somepass",
    "name": "somedb",
    "password": "somepass",
    "port": 3306,
    "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
    "username": "someuser"
  },
  "syslog_drain_url": null,
  "volume_mounts": [

  ],
  "label": "cups-tagging-service",
  "provider": null,
  "plan": "default",
  "name": "my-cups-tagged",
  "tags": [
    "cups-tag",
    "tag1",
    "tag2"
  ]
}]}

See how the two additional tags show up.

That is all there is to the CUPS tagging Service Broker!





Thursday, May 4, 2017

Integrating Gatling into a Gradle build - Understanding SourceSets and Configuration

I recently worked on a project where we had to integrate the excellent load testing tool Gatling into a Gradle based build. There are Gradle plugins available which make this easy, two of them being this and this; however, for most needs a simple execution of the command line tool itself suffices. So this post will go into some details of how Gatling can be hooked up into a Gradle build, and in the process cover some good Gradle concepts.


SourceSets and Configuration


To execute the Gatling CLI I need two things: a location for the source code and related content of the Gatling simulations, and a way to get the Gatling libraries. This is where two concepts of Gradle (SourceSets and Configurations) come into play.

Let us start with the first one - SourceSets.

SourceSets


SourceSets are simply a logical grouping of related files and are best demonstrated with an example. If I were to add the "java" plugin to a Gradle build:

apply plugin: 'java'


the sourceSets property would now show up with two values, "main" and "test", and if I wanted to find the details of these sourceSets, a Gradle task can be used to print them:

task sourceSetDetails {
    doLast {
        sourceSets {
            main {
                println java.properties
                println resources.properties
            }
        
            test {
                println java.properties
                println resources.properties
            }
        }
    }
}
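
Running this task with "gradle sourceSetDetails" prints the properties of the "main" and "test" sourceSets, among them their source directories and output paths.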

Coming back to gatling, I can essentially create a new sourceSet to hold the gatling simulations:

sourceSets {
    simulations
}

This would now expect the gatling simulations to reside in "src/simulations/java" and the resources related to them in the "src/simulations/resources" folder, which is okay, but ideally I would want to keep them totally separate from the project sources - with load simulations in the "simulations/load" folder and resources in the "simulations/resources" folder. This can be tweaked by first applying the "scala" plugin, which brings Scala compilation support to the project, and then modifying the "simulations" sourceSet along these lines:

apply plugin: 'scala'

sourceSets {
    simulations {
        scala {
            srcDirs = ['simulations/load']
        }
        resources {
            srcDirs = ['simulations/resources']
        }
    }
}

With this set of changes, I can now put my simulations in the right place, but the Gatling and Scala dependencies have not been pulled in yet - this is where the "configuration" feature of Gradle comes in.

Configuration


Gradle Configuration is a way of grouping related dependencies together. If I were to print the existing set of configurations using a task:

task showConfigurations  {
    doLast {
        configurations.all { conf -> println(conf) }
    }
}

these show up:

configuration ':archives'
configuration ':compile'
configuration ':compileClasspath'
configuration ':compileOnly'
configuration ':default'
configuration ':runtime'
configuration ':simulationsCompile'
configuration ':simulationsCompileClasspath'
configuration ':simulationsCompileOnly'
configuration ':simulationsRuntime'
configuration ':testCompile'
configuration ':testCompileClasspath'
configuration ':testCompileOnly'
configuration ':testRuntime'
configuration ':zinc'

"compile" and "testCompile" should be familiar one's, that is where a normal source dependency and a test dependency is typically declared like this:

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.21'
    testCompile 'junit:junit:4.12'   
}

It also looks like configurations for the "simulations" sourceSet are now available - "simulationsCompile", "simulationsRuntime" etc - so I could declare the dependencies required for my Gatling simulations using these. However, my intention is to declare a custom configuration just to go over the concept a little more, so let us explicitly declare one:

configurations {
    gatling
}

and use this configuration for declaring the dependencies of gatling:

dependencies {
    gatling 'org.scala-lang:scala-library:2.11.8'
    gatling 'io.gatling.highcharts:gatling-charts-highcharts:2.2.5'
}

Almost there - now how do we tell the sources in the simulations sourceSet to use the dependencies from the gatling configuration? By tweaking the sourceSet a little:

sourceSets {
    simulations {
        scala {
            srcDirs = ['simulations/load']
        }
        resources {
            srcDirs = ['simulations/resources']
        }

        compileClasspath += configurations.gatling
    }
}


Running a Gatling Scenario

With the sourceSets and the configuration defined, all we need to do is write a task to run a Gatling simulation, which can be along these lines:

task gatlingRun(type: JavaExec) {
    description = 'Run gatling tests'
    new File("${buildDir}/reports/gatling").mkdirs()

    classpath = sourceSets.simulations.runtimeClasspath + configurations.gatling

    main = "io.gatling.app.Gatling"
    args = ['-s', 'simulations.SimpleSimulation',
            '-sf', 'simulations/resources',
            '-df', 'simulations/resources',
            '-rf', "${buildDir}/reports/gatling"
    ]
}

See how the compiled sources of the simulations and the dependencies from the gatling configuration are being set as the classpath for the "JavaExec" task.
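
With this in place, "gradle gatlingRun" compiles the simulations and executes the "simulations.SimpleSimulation" scenario, with the Gatling report ending up under the "build/reports/gatling" folder.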


A good way to review this would be to look at a complete working sample that I have here in my github repo - https://github.com/bijukunjummen/cf-show-env

Friday, April 21, 2017

Spring Web-Flux - Functional Style with Cassandra Backend

In a previous post I had walked through the basics of Spring Web-Flux, which denotes the reactive support in the web layer of the Spring framework.

I had demonstrated an end-to-end sample using Spring Data Cassandra and the traditional annotation support in the Spring web layer, along these lines:

...
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
...

@RestController
@RequestMapping("/hotels")
public class HotelController {

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        ...
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        ...
    }

}

This looks like the traditional Spring Web annotations except for the return types - instead of returning the domain types, these endpoints return a Publisher type via the Mono and Flux implementations in reactor-core, and Spring-Web handles streaming the content back.


In this post I will cover a different way of exposing the endpoints - using a functional style instead of the annotation style. Let me acknowledge that I have found Baeldung's article and Rossen Stoyanchev's post invaluable in my understanding of the functional style of exposing the web endpoints.


Mapping the annotations to routes

Let me start with a few annotation based endpoints, one to retrieve an entity and one to save an entity:

@GetMapping(path = "/{id}")
public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
    return this.hotelService.findOne(uuid);
}

@PostMapping
public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
    return this.hotelService.save(hotel)
            .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
}


In a functional style of exposing the endpoints, each of the endpoints would translate to a RouterFunction, and they can be composed to create all the endpoints of the app, along these lines:

package cass.web;

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                ));
    }
}


There are helper functions (nest, route, GET, accept etc) which make composing all the RouterFunction(s) together a breeze. Once an appropriate RouterFunction is matched, the request is handled by a HandlerFunction, which in the above sample is abstracted by the HotelHandler and for the save and get functionality looks like this:

import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.UUID;

@Service
public class HotelHandler {

    ...
    
    public Mono<ServerResponse> get(ServerRequest request) {
        UUID uuid = UUID.fromString(request.pathVariable("id"));
        Mono<ServerResponse> notFound = ServerResponse.notFound().build();
        return this.hotelService.findOne(uuid)
                .flatMap(hotel -> ServerResponse.ok().body(Mono.just(hotel), Hotel.class))
                .switchIfEmpty(notFound);
    }

    public Mono<ServerResponse> save(ServerRequest serverRequest) {
        Mono<Hotel> hotelToBeCreated = serverRequest.bodyToMono(Hotel.class);
        return hotelToBeCreated.flatMap(hotel ->
                ServerResponse.status(HttpStatus.CREATED).body(hotelService.save(hotel), Hotel.class)
        );
    }

    ...
}    
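
The HotelService that the handler delegates to is not shown in this post; going purely by the calls above, a minimal sketch of its contract would be along these lines (the actual implementation, available in the github repo, is backed by Spring Data Cassandra):

import reactor.core.publisher.Mono;

import java.util.UUID;

// Sketch inferred from the handler above - only the two methods used by
// the get and save handlers are shown
public interface HotelService {
    Mono<Hotel> findOne(UUID uuid);

    Mono<Hotel> save(Hotel hotel);
}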


This is how a complete RouterFunction for all the APIs supported by the original annotation based project looks:

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                                .andRoute(PUT("/"), hotelHandler::update)
                                .andRoute(DELETE("/{id}"), hotelHandler::delete)
                                .andRoute(GET("/startingwith/{letter}"), hotelHandler::findHotelsWithLetter)
                                .andRoute(GET("/fromstate/{state}"), hotelHandler::findHotelsInState)
                ));
    }
}

Testing functional Routes

It is easy to test these routes too - Spring Webflux provides a WebTestClient to test the routes while providing the ability to mock the implementations behind them.

For example, to test the get-by-id endpoint, I would bind the WebTestClient to the RouterFunction defined before and use the assertions it provides to test the behavior:

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.UUID;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;


public class GetRouteTests {

    private WebTestClient client;
    private HotelService hotelService;

    private UUID sampleUUID = UUID.fromString("fd28ec06-6de5-4f68-9353-59793a5bdec2");

    @Before
    public void setUp() {
        this.hotelService = mock(HotelService.class);
        when(hotelService.findOne(sampleUUID)).thenReturn(Mono.just(new Hotel(sampleUUID, "test")));
        HotelHandler hotelHandler = new HotelHandler(hotelService);
        
        this.client = WebTestClient.bindToRouterFunction(ApplicationRoutes.routes(hotelHandler)).build();
    }

    @Test
    public void testHotelGet() throws Exception {
        this.client.get().uri("/hotels/" + sampleUUID)
                .exchange()
                .expectStatus().isOk()
                .expectBody(Hotel.class)
                .isEqualTo(new Hotel(sampleUUID, "test"));
    }
}

Conclusion

The functional way of defining the routes is definitely a very different approach from the annotation based one - I like that it is a far more explicit way of defining an endpoint and how calls to the endpoint are handled; the annotations always felt a little magical.

I have complete working code in my github repo which may be easier to follow than the snippets in this post.

Saturday, April 1, 2017

Hystrix Command - Java 8 helpers

Let me start by acknowledging that what I am posting here is far from original - it is inspired by the post here by Demian Neidetcher, which was further adapted by two of my former colleagues - Alexey Dmitrovsky (T-Mobile) and Pavel Orda (Altoros).


Motivation

The motivation is fairly simple - consider two remote calls, the results of which are aggregated in some way:

String  r1 = remoteCall1();
Integer r2 = remoteCall2();

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

Ideally you would want the remote calls to be protected by the excellent Hystrix library. What if I could do it along these lines:

String  r1 = execute("remote1", "remote1", () -> remoteCall1());
Integer r2 = execute("remote2", "remote2", () -> remoteCall2());

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

This way I have avoided all the boilerplate of defining an explicit HystrixCommand around each of my remote calls, and instead wrapped the remote calls in a Java 8 lambda expression which resolves to the Supplier functional interface.

Even better, a variation of this allows me to aggregate the results in a reactive way by returning an Rx-java Observable instead:

Observable<String>  r1Obs = executeObservable("remote1", "remote1", () -> remoteCall1());
Observable<Integer> r2Obs = executeObservable("remote2", "remote2", () -> remoteCall2());

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("result1");

What about fallbacks? I can support them by taking in another lambda expression which transforms an exception into a reasonable fallback (and logs the exception in the process):


Observable<String> r1Obs = executeObservable("remote1", "remote1",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return "fallback";
        });
Observable<Integer> r2Obs = executeObservable("remote2", "remote2",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return 0;
        });

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("fallback0");


Implementation


The implementation is fairly simple and in its entirety is the following:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandKey;
import rx.Observable;

import java.util.function.Function;
import java.util.function.Supplier;

public class GenericHystrixCommand<T> extends HystrixCommand<T> {

    private Supplier<T> toRun;

    private Function<Throwable, T> fallback;


    public static <T> T execute(String groupKey, String commandkey, Supplier<T> toRun) {
        return execute(groupKey, commandkey, toRun, null);
    }

    public static <T> T execute(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback).execute();
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun) {
        return executeObservable(groupKey, commandkey, toRun, null);
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback)
                .toObservable();
    }

    public GenericHystrixCommand(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
                .andCommandKey(HystrixCommandKey.Factory.asKey(commandkey)));
        this.toRun = toRun;
        this.fallback = fallback;
    }

    protected T run() throws Exception {
        return this.toRun.get();
    }

    @Override
    protected T getFallback() {
        return (this.fallback != null)
                ? this.fallback.apply(getExecutionException())
                : super.getFallback();
    }
}


All it does is take in the code that needs to be wrapped as a Java 8 Supplier and the fallback as a Java 8 Function.
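
The execute and executeObservable calls in the motivation samples earlier are just these static factory methods, used via static imports.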


If you are interested in playing with this pattern, I have a little more fleshed out sample here in my github repo.