Thursday, October 5, 2017

Kata - implementing a functional List data structure in Kotlin

I saw an exercise in chapter 3 of the excellent Functional Programming in Scala book which deals with defining functional data structures and uses the linked list as an example of how to go about developing such a data structure. I wanted to try this sample using Kotlin to see to what extent I can replicate it.

A Scala skeleton of the sample is available in the companion code to the book here, and my attempt in Kotlin is heavily inspired by (copied from!) the answer key in that repository.

Basic

This is what a basic List representation in Kotlin looks like:

sealed class List<out A> {

    abstract val head: A

    abstract val tail: List<A>
}

data class Cons<out T>(override val head: T, override val tail: List<T>) : List<T>()

object Nil : List<Nothing>() {
    override val head: Nothing
        get() {
            throw NoSuchElementException("head of an empty list")
        }

    override val tail: List<Nothing>
        get() {
            throw NoSuchElementException("tail of an empty list")
        }
}

The List has been defined as a sealed class, which means that all subclasses of the sealed class must be defined in the same file. This is useful for pattern matching on the type of an instance and will come up repeatedly in most of the functions.

There are two implementations of this List -
1. Cons, a non-empty list consisting of a head element and a tail List
2. Nil, an empty List

This is already very useful in its current form, consider the following which constructs a List and retrieves elements from it:

val l1:List<Int> = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
assertThat(l1.head).isEqualTo(1)
assertThat(l1.tail).isEqualTo(Cons(2, Cons(3, Cons(4, Nil))))


val l2:List<String> = Nil


Pattern Matching with "when" expression

Now to jump into implementing some methods of List. Since List is a sealed class, it allows for some good pattern matching, say to get the sum of elements in the List:

fun sum(l: List<Int>): Int {
    return when(l) {
        is Cons -> l.head + sum(l.tail)
        is Nil -> 0
    }
}

The compiler understands that Cons and Nil are the only two paths to take for the match on a list instance.
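
A quick usage sketch of sum:

val l = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
assertThat(sum(l)).isEqualTo(10)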

Two slightly more complex operations: "drop", which removes some number of elements from the beginning of the list, and "dropWhile", which takes in a predicate and drops elements from the beginning matching the predicate:

fun drop(n: Int): List<A> {
    return if (n <= 0)
        this
    else when (this) {
        is Cons -> tail.drop(n - 1)
        is Nil -> Nil
    }
}

val l = list(4, 3, 2, 1)
assertThat(l.drop(2)).isEqualTo(list(2, 1))

fun dropWhile(p: (A) -> Boolean): List<A> {
    return when(this) {
        is Cons -> if (p(this.head)) this.tail.dropWhile(p) else this
        is Nil -> Nil
    }
}

val l = list(1, 2, 3, 5, 8, 13, 21, 34, 55, 89)
assertThat(l.dropWhile({e -> e < 20})).isEqualTo(list(21, 34, 55, 89))

These show off the power of pattern matching with the "when" expression in Kotlin.


Unsafe Variance!

To touch on a wrinkle, see how the List is defined with a type parameter that is declared as "out A" - this is called "declaration site variance", which in this instance makes List covariant on the type parameter A. Declaration site variance is explained beautifully in the Kotlin documentation. With the way List is declared, it allows me to do something like this:

val l:List<Int> = Cons(1, Cons(2, Nil))
val lAny: List<Any> = l

Now, consider an "append" function which appends another list:

fun append(l: List<@UnsafeVariance A>): List<A> {
    return when (this) {
        is Cons -> Cons(head, tail.append(l))
        is Nil -> l
    }
}

Here a second list is taken as a parameter to the append function; however, Kotlin would flag the parameter - this is because it is okay to return a covariant type but not to take it in as a parameter. However, since we know the List in its current form is immutable, I can get past this by marking the type parameter with the "@UnsafeVariance" annotation.
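
A quick usage sketch of append:

val l1 = Cons(1, Cons(2, Nil))
val l2 = Cons(3, Cons(4, Nil))
assertThat(l1.append(l2)).isEqualTo(Cons(1, Cons(2, Cons(3, Cons(4, Nil)))))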

Folding

Folding operations allow the list to be "folded" into a result based on some aggregation of the individual elements in it.

Consider foldLeft:

fun <B> foldLeft(z: B, f: (B, A) -> B): B {
    tailrec fun foldLeft(l: List<A>, z: B, f: (B, A) -> B): B {
        return when (l) {
            is Nil -> z
            is Cons -> foldLeft(l.tail, f(z, l.head), f)
        }
    }

    return foldLeft(this, z, f)
}

If a list were to consist of elements (2, 3, 5, 8) then foldLeft is equivalent to "f(f(f(f(z, 2), 3), 5), 8)"

With this higher order function in place, the sum function can be expressed this way:

val l = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
assertThat(l.foldLeft(0, {r, e -> r + e})).isEqualTo(10)


foldRight looks like the following in Kotlin:

fun <B> foldRight(z: B, f: (A, B) -> B): B {
    return when(this) {
        is Cons -> f(this.head, tail.foldRight(z, f))
        is Nil -> z
    }
}

If a list were to consist of elements (2, 3, 5, 8) then foldRight is equivalent to "f(2, f(3, f(5, f(8, z))))"
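
A quick check of this, folding the elements into a string:

val l = Cons(1, Cons(2, Cons(3, Nil)))
assertThat(l.foldRight("", { e, acc -> "" + e + acc })).isEqualTo("123")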

This version of foldRight, though cooler looking, is not tail recursive. A more stack friendly version can be implemented using the previously defined tail recursive foldLeft, by simply reversing the List and calling foldLeft internally the following way:

fun reverse(): List<A> {
    return foldLeft(Nil as List<A>, { b, a -> Cons(a, b) })
}

fun <B> foldRightViaFoldLeft(z: B, f: (A, B) -> B): B {
    return reverse().foldLeft(z, { b, a -> f(a, b) })
}
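
A quick usage sketch, using the same "list" convenience helper as the earlier examples:

val l = list(1, 2, 3)
assertThat(l.reverse()).isEqualTo(list(3, 2, 1))
assertThat(l.foldRightViaFoldLeft(0, { e, acc -> e + acc })).isEqualTo(6)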

map and flatMap

map is a function which transforms each element of this list:

fun <B> map(f: (A) -> B): List<B> {
    return when (this) {
        is Cons -> Cons(f(head), tail.map(f))
        is Nil -> Nil
    }
}

An example of using this function is the following:
val l = Cons(1, Cons(2, Cons(3, Nil)))
val l2 = l.map { e -> e.toString() }
assertThat(l2).isEqualTo(Cons("1", Cons("2", Cons("3", Nil))))

flatMap is a variation of map where the transforming function returns another list and the final result flattens everything - best demoed using an example after the implementation:

fun <B> flatMap(f: (a: A) -> List<@UnsafeVariance B>): List<B> {
    return flatten(map { a -> f(a) })
}

companion object {
    fun <A> flatten(l: List<List<A>>): List<A> {
        return l.foldRight(Nil as List<A>, { a, b -> a.append(b) })
    }
}


val l = Cons(1, Cons(2, Cons(3, Nil)))

val l2 = l.flatMap { e -> list(e.toString(), e.toString()) }

assertThat(l2)
        .isEqualTo(
                Cons("1", Cons("1", Cons("2", Cons("2", Cons("3", Cons("3", Nil)))))))


This covers the basics involved in implementing a functional list data structure using Kotlin. There were a few rough edges when compared to the Scala version, but I think it mostly works. Admittedly the sample can likely be improved drastically - if you have any observations on how to improve the code, please do send me a PR at my github repo for this sample or leave a comment on this post.

Tuesday, September 19, 2017

Testing time based reactor core streams with Virtual time

Reactor Core implements the Reactive Streams specification and deals with handling a (potentially unlimited) stream of data. If it interests you, do check out the excellent documentation it offers. Here I am assuming some basic familiarity with the Reactor Core library's Flux and Mono types, and will cover how Reactor Core provides an abstraction over time itself to enable testing of functions which depend on the passage of time.

For certain operators of Reactor-core, time is an important consideration - for example, a variation of the "interval" function which emits an increasing number every 5 seconds after an initial "delay" of 10 seconds:

val flux = Flux
        .interval(Duration.ofSeconds(10), Duration.ofSeconds(5))
        .take(3)

Testing such a stream of data depending on the normal passage of time would be terrible - such a test would take about 20 seconds to finish.

Reactor-Core provides a solution - an abstraction over time itself, a Virtual time based Scheduler, that provides a neat way to test these kinds of operations deterministically.

Let me show it in two ways: an explicit way, which should make the actions of the Virtual time based scheduler very clear, followed by the recommended approach of testing with Reactor Core.

import org.assertj.core.api.Assertions.assertThat
import org.junit.Test
import reactor.core.publisher.Flux
import reactor.test.scheduler.VirtualTimeScheduler
import java.time.Duration
import java.util.concurrent.CountDownLatch


class VirtualTimeTest {
    
    @Test
    fun testExplicit() {
        val mutableList = mutableListOf<Long>()

        val scheduler = VirtualTimeScheduler.getOrSet()
        val flux = Flux
                .interval(Duration.ofSeconds(10), Duration.ofSeconds(5), scheduler)
                .take(3)

        val latch = CountDownLatch(1)
        
        flux.subscribe({ l -> mutableList.add(l) }, { _ -> }, { latch.countDown() })
        
        scheduler.advanceTimeBy(Duration.ofSeconds(10))
        assertThat(mutableList).containsExactly(0L)
        
        scheduler.advanceTimeBy(Duration.ofSeconds(5))
        assertThat(mutableList).containsExactly(0L, 1L)
        
        scheduler.advanceTimeBy(Duration.ofSeconds(5))
        assertThat(mutableList).containsExactly(0L, 1L, 2L)

        latch.await()
    }
    
}

1. First, the scheduler for the "Flux.interval" function is set to the Virtual Time based Scheduler.

2. The stream of data is expected to be emitted every 5 seconds after a 10 second delay.

3. VirtualTimeScheduler provides an "advanceTimeBy" method to advance the virtual time by a Duration, so the time is first advanced by the delay of 10 seconds, at which point the first element (0) is expected to be emitted.

4. Then it is subsequently advanced by 5 seconds twice, to get 1 and 2 respectively.

This is deterministic and the test completes quickly. This version of the test is ugly though - it uses a list to collect and assert the results on, and a CountDownLatch to control when the test terminates. A far cleaner approach for testing Reactor-Core types is to use the excellent StepVerifier class; a test which makes use of this class looks like this:

import org.junit.Test
import reactor.core.publisher.Flux
import reactor.test.StepVerifier
import reactor.test.scheduler.VirtualTimeScheduler
import java.time.Duration

class VirtualTimeTest {

    @Test
    fun testWithStepVerifier() {

        VirtualTimeScheduler.getOrSet()
        val flux = Flux
                .interval(Duration.ofSeconds(10), Duration.ofSeconds(5))
                .take(3)

        StepVerifier.withVirtualTime({ flux })
                .expectSubscription()
                .thenAwait(Duration.ofSeconds(10))
                .expectNext(0)
                .thenAwait(Duration.ofSeconds(5))
                .expectNext(1)
                .thenAwait(Duration.ofSeconds(5))
                .expectNext(2)
                .verifyComplete()
    }
 }

This new test with StepVerifier reads well, with each step advancing time and asserting on what is expected at that point.



Friday, September 1, 2017

Spring Webflux - Kotlin DSL - a walkthrough of the implementation

In a previous blog post I described how Spring Webflux, the reactive programming support in the Spring Web Framework, uses a Kotlin based DSL to enable users to describe routes in a very intuitive way. Here I want to explore a little of the underlying implementation.


A sample DSL describing a set of endpoints looks like this:

package sample.routes

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.http.MediaType.APPLICATION_JSON
import org.springframework.web.reactive.function.server.router
import sample.handler.MessageHandler

@Configuration
class AppRoutes(private val messageHandler: MessageHandler) {

    @Bean
    fun apis() = router {
        (accept(APPLICATION_JSON) and "/messages").nest {
            GET("/", messageHandler::getMessages)
            POST("/", messageHandler::addMessage)
            GET("/{id}", messageHandler::getMessage)
            PUT("/{id}", messageHandler::updateMessage)
            DELETE("/{id}", messageHandler::deleteMessage)
        }
    }

}


To analyze the sample let me start with a smaller working example:

import org.junit.Test
import org.springframework.test.web.reactive.server.WebTestClient
import org.springframework.web.reactive.function.server.ServerResponse.ok
import org.springframework.web.reactive.function.server.router

class AppRoutesTest {

    @Test
    fun testSimpleGet() {
        val routerFunction = router {
            GET("/isokay", { _ -> ok().build() })
        }

        val client = WebTestClient.bindToRouterFunction(routerFunction).build()

        client.get()
                .uri("/isokay")
                .exchange()
                .expectStatus().isOk
    }
}

The heart of the route definition is the "router" function:

import org.springframework.web.reactive.function.server.router
...
val routerFunction = router {
    GET("/isokay", { _ -> ok().build() })
}

which is defined the following way:

fun router(routes: RouterFunctionDsl.() -> Unit) = RouterFunctionDsl().apply(routes).router()

The parameter "routes" is a special type of lambda expression, called a Lambda expression with a receiver. This means that in the context of the router function, this lambda expression can only be invoked by instances of "RouterFunctionDsl" which is what is done in the body of the function using apply method, this also means in the body of the lambda expression "this" refers to an instance of "RouterFunctionDsl". Knowing this opens up access to the methods of "RouterFunctionDsl" one of which is GET that is used in the example, GET is defined as follows:

fun GET(pattern: String, f: (ServerRequest) -> Mono<ServerResponse>) {
  ...
}
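
Stepping away from the Spring code for a moment, here is a minimal standalone sketch of the lambda-with-receiver pattern - the Greeter type and the greet function are made up purely for illustration:

class Greeter {
    var name: String = ""
    fun greeting() = "Hello, $name"
}

// the lambda has Greeter as its receiver, so "this" inside it is a Greeter instance
fun greet(init: Greeter.() -> Unit): Greeter = Greeter().apply(init)

val greeter = greet {
    name = "Kotlin" // equivalent to this.name = "Kotlin"
}
// greeter.greeting() evaluates to "Hello, Kotlin"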

There are other ways to express the same endpoint:

GET("/isokay2")({ _ -> ok().build() })

implemented in Kotlin very cleverly as:

fun GET(pattern: String): RequestPredicate = RequestPredicates.GET(pattern)

operator fun RequestPredicate.invoke(f: (ServerRequest) -> Mono<ServerResponse>) {
 ...
}

Here GET with the pattern returns a "RequestPredicate", for which an extension function named invoke has been defined (in the context of the DSL), which is in turn a specially named operator.

Or a third way:

"/isokay" { _ -> ok().build() }

which is implemented by adding an extension function on String type and defined the following way:

operator fun String.invoke(f: (ServerRequest) -> Mono<ServerResponse>) {
  ...
}


I feel that Spring Webflux makes excellent use of the Kotlin DSL support in making some of these route definitions easy to read while remaining concise.

This should provide enough of a primer to explore the source code of the Routing DSL in Spring Webflux.

My samples are available in a github repository here - https://github.com/bijukunjummen/webflux-route-with-kotlin

Monday, August 21, 2017

Gradle Kotlin DSL

Gradle build scripts can now be written using a DSL in the Kotlin language. All the concepts that work with a traditional Gradle build translate to a very intuitive DSL in Kotlin, with two additional features - it is typesafe and the script has excellent IDE support in IntelliJ IDEA.

My experience with the Gradle Kotlin DSL is fairly limited - all of one build script, which is the subject of this article.

If you want to simply see how a sample script looks, I have a sample github repo with just that here - https://github.com/bijukunjummen/cf-show-env


Just to compare:

1. Consider the way different plugins are applied with gradle:

plugins {
 id "com.github.pivotalservices.cf-app" version "1.0.9"
}

apply plugin: 'kotlin"
apply plugin: 'java'
apply plugin: 'org.springframework.boot'
apply from: 'gradle/gatling.gradle'


An equivalent Kotlin DSL is the following:

plugins {
    id("com.github.pivotalservices.cf-app").version("1.0.9")
}

apply {
    plugin("kotlin")
    plugin("java")
    plugin("org.springframework.boot")    
    from("gradle/gatling.gradle")
}



2. Adding project dependencies:

dependencies {
    compile('org.springframework.boot:spring-boot-starter-actuator')
    compile('org.springframework.boot:spring-boot-devtools')
    compile('org.springframework.boot:spring-boot-starter-thymeleaf')
    compile('org.springframework.boot:spring-boot-starter-web')
    compile('com.google.guava:guava:19.0')
    compile("org.webjars:bootstrap:3.3.7")
    compile("org.webjars:jquery:3.1.1")
    compile("io.prometheus:simpleclient:${prometheus_client_version}")
    compile("io.prometheus:simpleclient_spring_boot:${prometheus_client_version}")
    compile('nz.net.ultraq.thymeleaf:thymeleaf-layout-dialect')
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

and equivalent code using the Kotlin DSL:

dependencies {
    val prometheus_client_version = "0.0.21"

    compile("org.springframework.boot:spring-boot-starter-actuator")
    compile("org.springframework.boot:spring-boot-devtools")
    compile("org.springframework.boot:spring-boot-starter-thymeleaf")
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("com.google.guava:guava:19.0")
    compile("org.webjars:bootstrap:3.3.7")
    compile("org.webjars:jquery:3.1.1")
    compile("io.prometheus:simpleclient:${prometheus_client_version}")
    compile("io.prometheus:simpleclient_spring_boot:${prometheus_client_version}")
    compile("nz.net.ultraq.thymeleaf:thymeleaf-layout-dialect")
    testCompile("org.springframework.boot:spring-boot-starter-test")
}

3. Configuring plugins - I have a plugin which helps deploy applications to Cloud Foundry, and it works off a configuration which looks like this when expressed in a normal Gradle build:

cfConfig {
    //CF Details
    ccHost = "api.local.pcfdev.io"
    ccUser = "admin"
    ccPassword = "admin"
    org = "pcfdev-org"
    space = "pcfdev-space"

    //App Details
    name = "cf-show-env"
    hostName = "cf-show-env"
    filePath = "build/libs/cf-show-env-0.1.3-SNAPSHOT.jar"
    path = ""
    domain = "local.pcfdev.io"
    instances = 2
    memory = 1024
    timeout = 180

    //Env and services
    buildpack = "https://github.com/cloudfoundry/java-buildpack.git"


    environment = ["JAVA_OPTS": "-Djava.security.egd=file:/dev/./urandom", "SPRING_PROFILES_ACTIVE": "cloud"]

    cfService {
        name = "p-mysql"
        plan = "512mb"
        instanceName = "test-db"
    }

    cfUserProvidedService {
        instanceName = "mydb1"
        credentials = ["jdbcUri": "someuri1"]
    }
}

This can now be configured in a typesafe way, with full auto-completion support in IntelliJ, the following way using the Kotlin DSL:

configure<CfPluginExtension> {
    //CF Details
    ccHost = "api.local.pcfdev.io"
    ccUser = "admin"
    ccPassword = "admin"
    org = "pcfdev-org"
    space = "pcfdev-space"

    //App Details
    name = "cf-show-env"
    hostName = "cf-show-env"
    filePath = "build/libs/cf-show-env-1.0.0-M1.jar"
    path = ""
    domain = "local.pcfdev.io"
    instances = 2
    memory = 1024
    timeout = 180

    //Env and services
    buildpack = "https://github.com/cloudfoundry/java-buildpack.git"

    environment = mapOf(
            "JAVA_OPTS" to "-Djava.security.egd=file:/dev/./urandom", 
            "SPRING_PROFILES_ACTIVE" to "cloud"
    )

    cfService(closureOf<CfService> {
        name = "p-mysql"
        plan = "512mb"
        instanceName = "test-db"
    })
    
    cfUserProvidedService(closureOf<CfUserProvidedService> { 
        instanceName = "myups"
        credentials = mapOf(
                "user" to "someuser",
                "uri" to "someuri"
        )
    })

}

4. And finally, plain tasks:
task "hello-world" {
    doLast {
        println("Hello World")
    }
}

task showAppUrls(dependsOn: "cf-get-app-detail") << {
    print "${project.cfConfig.applicationDetail}"
}

looks more or less the same in Kotlin DSL:

task("hello-world") {
    doLast {
        println("Hello World")
    }
}


task("showAppUrls").dependsOn("cf-get-app-detail").doLast {
       println(cfConfig);
}


I am excited about using the Kotlin DSL to configure my Gradle builds. There are a few quirks to keep in mind though - the IntelliJ support tends to be a little flaky, and it took a few tries for IDEA to start helping with the auto-completions; I also needed to google quite a bit and look at some of the sample projects using the Gradle Kotlin DSL. All in all though, this has awesome potential.

Monday, August 14, 2017

Concourse caching for Java Maven and Gradle builds

Concourse CI 3.3.x has introduced the ability to cache paths between task runs. This feature helps speed up tasks which cache content in specific folders - here I will demonstrate how it can be used to speed up Maven and Gradle based Java builds.

The code and the pipeline that I am using for this post is available at my github repo here - https://github.com/bijukunjummen/ci-concourse-caching-sample

Let me start with the gradle build. If I were to build the project using the gradle wrapper with the following command:

./gradlew clean build

then Gradle would download the dependent libraries into a ".gradle" folder in the user's home folder by default. The location of this folder can be changed using the "GRADLE_USER_HOME" environment variable, which is what I will be using in a concourse task to control the location of a cached path.

A concourse task which builds my project looks like this:

---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: openjdk
    tag: 8-jdk
inputs:
  - name: repo
outputs:
  - name: out
run:
  path: /bin/bash
  args:
    - repo/ci/tasks/build.sh

caches:
  - path: .gradle/
  - path: .m2/

params:
  PROJECT_TYPE: 

See how the "caches" parameter above specifies a ".gradle" path. So all I have to do now is ensure that Gradle uses this location as its home folder, which I do in my build script:

export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"


The process to cache maven resources for a maven build is along the same lines. Maven caches the dependent jars in a location that can be specified in a variety of ways; the one I have used is to specify this location via a dynamically generated settings.xml file the following way:

M2_HOME=${HOME}/.m2
mkdir -p ${M2_HOME}

M2_LOCAL_REPO="${ROOT_FOLDER}/.m2"

mkdir -p "${M2_LOCAL_REPO}/repository"

cat > ${M2_HOME}/settings.xml <<EOF

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                          https://maven.apache.org/xsd/settings-1.0.0.xsd">
      <localRepository>${M2_LOCAL_REPO}/repository</localRepository>
</settings>

EOF

which is quite a bit of bash scripting - all it is doing is generating a settings.xml with a localRepository tag set to a ".m2/repository" folder which is relative to the temporary folder created by concourse for the build and can thus be cached.
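
The build script can then point maven at this generated settings file - a sketch of the invocation, assuming the project uses the maven wrapper and standard goals:

./mvnw -s ${M2_HOME}/settings.xml clean package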

With these changes in place, the behavior is that the downloads happen for the first run of the task but then get cached for subsequent runs. In my local concourse set-up, a gradle build taking about 2 mins for a first-time build takes about 20 seconds for a subsequent build!

You can try out this feature in my demo project here - https://github.com/bijukunjummen/ci-concourse-caching-sample



Friday, July 28, 2017

Kotlintest and property based testing

I was very happy to see that Kotlintest, a port of the excellent scalatest to Kotlin, supports property based testing.

I was introduced to property based testing through the excellent "Functional programming in Scala" book.

The idea behind property based testing is simple - the behavior of a program is described as a property, and the testing framework generates random data to validate the property. This is best illustrated with an example using the excellent scalacheck library:


import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

object ListSpecification extends Properties("List") {
  property("reversing a list twice should return the list") = forAll { (a: List[Int]) =>
    a.reverse.reverse == a
  }
}

scalacheck would generate random lists (of integers) of varying sizes and would validate that this property holds for them. A similar specification expressed through Kotlintest looks like this:

import io.kotlintest.properties.forAll
import io.kotlintest.specs.StringSpec


class ListSpecification : StringSpec({
    "reversing a list twice should return the list" {
        forAll{ list: List<Int> ->
            list.reversed().reversed().toList() == list
        }
    }
})

If the generators have to be a little more constrained - say we wanted to test this behavior on lists of integers in the range 1 to 1000 - then an explicit generator can be passed in the following way, again starting with scalacheck:

import org.scalacheck.Prop.forAll
import org.scalacheck.{Gen, Properties}

object ListSpecification extends Properties("List") {
  val intList = Gen.listOf(Gen.choose(1, 1000))
  property("reversing a list twice should return the list") = forAll(intList) { (a: List[Int]) =>
    a.reverse.reverse == a
  }
}

and an equivalent kotlintest code:

import io.kotlintest.properties.Gen
import io.kotlintest.properties.forAll
import io.kotlintest.specs.StringSpec

class BehaviorOfListSpecs : StringSpec({
    "reversing a list twice should return the list" {
        val intList = Gen.list(Gen.choose(1, 1000))

        forAll(intList) { list ->
            list.reversed().reversed().toList() == list
        }
    }
})

Given this, let me now jump to another example from the scalacheck site, this time to illustrate a failure:

import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

object StringSpecification extends Properties("String") {

  property("startsWith") = forAll { (a: String, b: String) =>
    (a + b).startsWith(a)
  }

  property("concatenate") = forAll { (a: String, b: String) =>
    (a + b).length > a.length && (a + b).length > b.length
  }

  property("substring") = forAll { (a: String, b: String, c: String) =>
    (a + b + c).substring(a.length, a.length + b.length) == b
  }
}

The second property described above is wrong - it claims that if two strings are concatenated together the result is ALWAYS longer than each of the parts, which is not true if one of the strings is blank. If I were to run this test using scalacheck, it correctly catches this wrongly specified behavior:

+ String.startsWith: OK, passed 100 tests.
! String.concatenate: Falsified after 0 passed tests.
> ARG_0: ""
> ARG_1: ""
+ String.substring: OK, passed 100 tests.
Found 1 failing properties.

An equivalent kotlintest is the following:

import io.kotlintest.properties.forAll
import io.kotlintest.specs.StringSpec

class StringSpecification : StringSpec({
    "startsWith" {
        forAll { a: String, b: String ->
            (a + b).startsWith(a)
        }
    }

    "concatenate" {
        forAll { a: String, b: String ->
            (a + b).length > a.length && (a + b).length > b.length
        }
    }

    "substring" {
        forAll { a: String, b: String, c: String ->
            (a + b + c).substring(a.length, a.length + b.length) == b
        }
    }
})

on running, it correctly catches the issue with concatenate and produces the following result:

java.lang.AssertionError: Property failed for

Y{_DZ<vGnzLQHf9|3$i|UE,;!%8^SRF;JX%EH+<5d:p`Y7dxAd;I+J5LB/:O)

 at io.kotlintest.properties.PropertyTestingKt.forAll(PropertyTesting.kt:27)

However there is an issue here: scalacheck found a simpler failure case. It does this through a process called "test case minimization" where, in case of a failure, it tries to find the smallest test case that can fail - something that Kotlintest can learn from.


There are other features where Kotlintest lags behind scalacheck, a big one being the ability to combine generators:

case class Person(name: String, age: Int)

val genPerson = for {
  name <- Gen.alphaStr
  age <- Gen.choose(1, 50)
} yield Person(name, age)

genPerson.sample
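
For contrast, a rough sketch of what combining generators by hand could look like in Kotlintest - this is an assumption-laden illustration, presuming the Gen interface of this era exposes a generate() method alongside factories like Gen.string() and Gen.choose():

data class Person(val name: String, val age: Int)

// a hand-rolled combined generator - assuming Gen<T> has a generate(): T method
val genPerson = object : Gen<Person> {
    override fun generate(): Person =
            Person(Gen.string().generate(), Gen.choose(1, 50).generate())
}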

However, all in all, I have found the DSL of Kotlintest and its support for property based testing to be a good start, and I look forward to seeing how this library evolves over time.

If you want to play with these samples a little more, it is available in my github repo here - https://github.com/bijukunjummen/kotlintest-scalacheck-sample

Friday, July 14, 2017

Cloud Foundry Application manifest using Kotlin DSL

I had a blast working with and getting my head around the excellent support for creating DSLs in the Kotlin language.
Kotlin DSLs are now being used for creating gradle build files, for defining routes in Spring Webflux, and for creating html templates using the kotlinx.html library.

Here I am going to demonstrate creating a Kotlin based DSL to represent Cloud Foundry Application Manifest content.

A sample manifest looks like this when represented as a yaml file:

applications:
 - name: myapp
   memory: 512M
   instances: 1
   path: target/someapp.jar
   routes:
     - somehost.com
     - another.com/path
   envs:
    ENV_NAME1: VALUE1
    ENV_NAME2: VALUE2

And here is the kind of DSL I am aiming for:

cf {
    name = "myapp"
    memory = 512(M)
    instances = 1
    path = "target/someapp.jar"
    routes {
        +"somehost.com"
        +"another.com/path"
    }
    envs {
        env["ENV_NAME1"] = "VALUE1"
        env["ENV_NAME2"] = "VALUE2"
    }
}


Getting the basic structure


Let me start with a simpler structure that looks like this:


cf {
    name = "myapp"
    instances = 1
    path = "target/someapp.jar"
}

and I want this kind of a DSL to map to a structure which looks like this:

data class CfManifest(
        var name: String = "",
        var instances: Int? = 0,
        var path: String? = null
)

It would translate to a Kotlin function which takes a Lambda expression:

fun cf(init: CfManifest.() -> Unit) {
 ...
}


The parameter which looks like this:
() -> Unit
is fairly self-explanatory, a lambda expression which does not take any parameters and does not return anything.

The part that took a while to seep into my mind is this modified lambda expression, referred to as a Lambda expression with receiver:

CfManifest.() -> Unit

It does two things, the way I have understood it:

1. It defines, in the scope of the wrapped function, an extension function for the receiver type - in my case the CfManifest class
2. "this" within the lambda expression now refers to an instance of the receiver type (a CfManifest instance here).

Given this, the cf function translates to:

fun cf(init: CfManifest.() -> Unit): CfManifest {
    val manifest = CfManifest()
    manifest.init()
    return manifest
}

which can be succinctly expressed as:

fun cf(init: CfManifest.() -> Unit) = CfManifest().apply(init)

so now when I call:
cf {
    name = "myapp"
    instances = 1
    path = "target/someapp.jar"
}

It translates to:
CfManifest().apply {
  this.name = "myapp"
  this.instances = 1
  this.path = "target/someapp.jar"
}

More DSL

Expanding on the basic structure:

cf {
    name = "myapp"
    memory = 512(M)
    instances = 1
    path = "target/someapp.jar"
    routes {
        +"somehost.com"
        +"another.com/path"
    }
    envs {
        env["ENV_NAME1"] = "VALUE1"
        env["ENV_NAME2"] = "VALUE2"
    }
}

The routes and the envs in turn become methods on the CfManifest class and look like this:

data class CfManifest(
        var name: String = "",
        var path: String? = null,
        var memory: MEM? = null,
        ...
        var routes: ROUTES? = null,
        var envs: ENVS = ENVS()
) {

    fun envs(block: ENVS.() -> Unit) {
        this.envs = ENVS().apply(block)
    }

    ...

    fun routes(block: ROUTES.() -> Unit) {
        this.routes = ROUTES().apply(block)
    }
}

data class ENVS(
        var env: MutableMap<String, String> = mutableMapOf()
)

data class ROUTES(
        private val routes: MutableList<String> = mutableListOf()
) {
    operator fun String.unaryPlus() {
        routes.add(this)
    }
}

See how the routes method takes in a lambda expression with a receiver type of ROUTES - this allows me to define an expression like this:

cf {
    ...
    routes {
        +"somehost.com"
        +"another.com/path"
    }
    ...
}

Another trick here is the way a route is added, using:

+"somehost.com"

which is enabled using a Kotlin convention which translates specific method names to operators - here, the unaryPlus method. The cool thing for me is that this operator is visible only in the scope of a ROUTES instance!


Another feature of the DSL making use of Kotlin features is the way memory is specified - there are two parts to it, a number and a modifier: 2G, 500M etc.
This is specified in a slightly modified way via the DSL as 2(G) and 500(M).

The way it is implemented is using another Kotlin convention whereby if a class has an invoke method, instances of it can be called like a function the following way:

class ClassWithInvoke() {
    operator fun invoke(n: Int): String = "" + n
}
val c = ClassWithInvoke()
c(10)

So implementing the invoke method as an extension function on Int in the scope of the CfManifest class allows this kind of a DSL:

data class CfManifest(
        var name: String = "",
        ...
) {
    ...
    operator fun Int.invoke(m: MemModifier): MEM = MEM(this, m)
}


This is pure experimentation on my part - I am new to both Kotlin and Kotlin DSLs, so very likely there are a lot of things that can be improved in this implementation. Any feedback and suggestions are welcome. You can play with this sample code at my github repo here

Tuesday, June 27, 2017

Spring Webflux - Kotlin DSL

Spring Webflux has introduced a feature for defining functional application endpoints using a very intuitive Kotlin based DSL.

This post will simply show a contrasting API defined using a Java based fluent API and a Kotlin based DSL.


A functional way to define a CRUD based Spring Webflux endpoint in Java would look like this:


RouterFunction<?> apis() {
    return nest(path("/hotels"), nest(accept(MediaType.APPLICATION_JSON),
            route(
                    GET("/"), messageHandler::getMessages)
                    .andRoute(POST("/"), messageHandler::addMessage)
                    .andRoute(GET("/{id}"), messageHandler::getMessage)
                    .andRoute(PUT("/{id}"), messageHandler::updateMessage)
                    .andRoute(DELETE("/{id}"), messageHandler::deleteMessage)
    ));
}

The details of the endpoints are very clear and are defined in a fluent manner with just a few keywords - route, nest and the HTTP verbs.

These endpoints can be expressed using a Kotlin based DSL (and some clever use of Kotlin extension functions) the following way:

@Bean
fun apis() = router {
    (accept(APPLICATION_JSON) and "/messages").nest {
        GET("/", messageHandler::getMessages)
        POST("/", messageHandler::addMessage)
        GET("/{id}", messageHandler::getMessage)
        PUT("/{id}", messageHandler::updateMessage)
        DELETE("/{id}", messageHandler::deleteMessage)
    }
}

I feel that this reads a little better than the Java based DSL. If the API is more complicated, as demonstrated in the excellent samples by Sébastien Deleuze with multiple levels of nesting, the Kotlin based DSL really starts to shine.


In the next post, I will delve into how this support has been implemented.

This sample is available in my github repo here

Sunday, June 11, 2017

Spring Boot Web Slice test - Sample

Spring Boot introduced test slicing a while back and it has taken me some time to get my head around it and explore some of its nuances.

Background


The main reason to use this feature is to reduce the boilerplate. Consider a controller that looks like this, written using Kotlin just for variety.

@RestController
@RequestMapping("/users")
class UserController(
        private val userRepository: UserRepository,
        private val userResourceAssembler: UserResourceAssembler) {

    @GetMapping
    fun getUsers(pageable: Pageable, 
                 pagedResourcesAssembler: PagedResourcesAssembler<User>): PagedResources<Resource<User>> {
        val users = userRepository.findAll(pageable)
        return pagedResourcesAssembler.toResource(users, this.userResourceAssembler)
    }

    @GetMapping("/{id}")
    fun getUser(id: Long): Resource<User> {
        return Resource(userRepository.findOne(id))
    }
}


A traditional Spring Mock MVC test to test this controller would be along these lines:

@RunWith(SpringRunner::class)
@WebAppConfiguration
@ContextConfiguration
class UserControllerTests {

    lateinit var mockMvc: MockMvc

    @Autowired
    private val wac: WebApplicationContext? = null

    @Before
    fun setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build()
    }

    @Test
    fun testGetUsers() {
        this.mockMvc.perform(get("/users")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @EnableSpringDataWebSupport
    @EnableWebMvc
    @Configuration
    class SpringConfig {

        @Bean
        fun userController(): UserController {
            return UserController(userRepository(), UserResourceAssembler())
        }

        @Bean
        fun userRepository(): UserRepository {
            val userRepository = Mockito.mock(UserRepository::class.java)
            given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                    .willAnswer({ invocation ->
                        val pageable = invocation.arguments[0] as Pageable
                        PageImpl(
                                listOf(
                                        User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                        User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                                , pageable, 10)
                    })
            return userRepository
        }
    }
}

There is a lot of ceremony involved in setting up such a test - a web application context which understands a web environment is pulled in, a configuration which sets up the Spring MVC environment needs to be created, and MockMvc, which is a handle to the testing framework, needs to be set up before each test.


Web Slice Test

A web slice test, when compared to the previous test, is far simpler - it focuses on testing the controller and hides a lot of the boilerplate code:

@RunWith(SpringRunner::class)
@WebMvcTest(UserController::class)
class UserControllerSliceTests {

    @Autowired
    lateinit var mockMvc: MockMvc

    @MockBean
    lateinit var userRepository: UserRepository

    @SpyBean
    lateinit var userResourceAssembler: UserResourceAssembler

    @Test
    fun testGetUsers() {

        this.mockMvc.perform(get("/users").param("page", "0").param("size", "1")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @Before
    fun setUp(): Unit {
        given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                .willAnswer({ invocation ->
                    val pageable = invocation.arguments[0] as Pageable
                    PageImpl(
                            listOf(
                                    User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                    User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                            , pageable, 10)
                })
    }
}

It works by creating a Spring Application context but filtering out anything that is not relevant to the web layer, and loading up only the controller which has been passed into the @WebMvcTest annotation. Any dependency that the controller requires can be injected in as a mock.


Coming to some of the nuances: say I wanted to provide one of the beans myself; the way to do it is to have the test use a custom Spring configuration. For a test this is done by using an inner static class annotated with @TestConfiguration the following way:

@RunWith(SpringRunner::class)
@WebMvcTest(UserController::class)
class UserControllerSliceTests {

    @Autowired
    lateinit var mockMvc: MockMvc

    @Autowired
    lateinit var userRepository: UserRepository

    @Autowired
    lateinit var userResourceAssembler: UserResourceAssembler

    @Test
    fun testGetUsers() {

        this.mockMvc.perform(get("/users").param("page", "0").param("size", "1")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @Before
    fun setUp(): Unit {
        given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                .willAnswer({ invocation ->
                    val pageable = invocation.arguments[0] as Pageable
                    PageImpl(
                            listOf(
                                    User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                    User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                            , pageable, 10)
                })
    }

    @TestConfiguration
    class SpringConfig {

        @Bean
        fun userResourceAssembler(): UserResourceAssembler {
            return UserResourceAssembler()
        }

        @Bean
        fun userRepository(): UserRepository {
            return mock(UserRepository::class.java)
        }
    }

}


The beans from the "TestConfiguration" add to the configuration which the slice tests depend on and don't completely replace it.

On the other hand, if I wanted to override the loading of the main "@SpringBootApplication" annotated class, then I can pass in a Spring Configuration class explicitly, but the catch is that I now have to take care of loading up all the relevant Spring Boot features myself (enabling auto-configuration, appropriate scanning etc), so a way around it is to explicitly annotate the configuration as a Spring Boot Application the following way:

@RunWith(SpringRunner::class)
@WebMvcTest(UserController::class)
class UserControllerExplicitConfigTests {

    @Autowired
    lateinit var mockMvc: MockMvc

    @Autowired
    lateinit var userRepository: UserRepository

    @Test
    fun testGetUsers() {

        this.mockMvc.perform(get("/users").param("page", "0").param("size", "1")
                .accept(MediaType.APPLICATION_JSON))
                .andDo(print())
                .andExpect(status().isOk)
    }

    @Before
    fun setUp(): Unit {
        given(userRepository.findAll(Matchers.any(Pageable::class.java)))
                .willAnswer({ invocation ->
                    val pageable = invocation.arguments[0] as Pageable
                    PageImpl(
                            listOf(
                                    User(id = 1, fullName = "one", password = "one", email = "one@one.com"),
                                    User(id = 2, fullName = "two", password = "two", email = "two@two.com"))
                            , pageable, 10)
                })
    }

    @SpringBootApplication(scanBasePackageClasses = arrayOf(UserController::class))
    @EnableSpringDataWebSupport
    class SpringConfig {

        @Bean
        fun userResourceAssembler(): UserResourceAssembler {
            return UserResourceAssembler()
        }

        @Bean
        fun userRepository(): UserRepository {
            return mock(UserRepository::class.java)
        }
    }

}


The catch though is that other tests may now end up finding this inner configuration, which is far from ideal! So my learning has been to depend on bare minimum slice testing, and if needed extend it using @TestConfiguration.


I have a little more detailed code sample available at my github repo which has working examples to play with.

Tuesday, May 30, 2017

Ratio based routing to a legacy and a modern app - Netflix Zuul via Spring Cloud

A very common requirement when migrating from a legacy version of an application to a modernized version is to be able to migrate users slowly over to the new application. In this post I will be going over this kind of a routing layer, written using the support for Netflix Zuul in Spring Cloud. Before I go ahead, I have to acknowledge that most of the code demonstrated here was written in collaboration with the superlative Shaozhen Ding.


Scenario

I have a legacy service which has been re-engineered to a more modern version (the assumption is that, as part of this migration, the uri's of the endpoints have not changed). I want to migrate users slowly over from the legacy application to the modern version.


Implementation using Spring Cloud Netflix - Zuul Support


This can be easily implemented using Netflix Zuul support in Spring Cloud project.

Zuul is driven by a set of filters which act on a request before (pre filters), during (route filters) and after (post filters) the request to a backend. Spring Cloud adds its own custom set of filters to Zuul and drives the behavior of these filters with configuration that looks like this:

zuul:
  routes:
    ratio-route:
      path: /routes/**
      strip-prefix: false

This specifies that Zuul will handle requests to URIs with the prefix "/routes", and this prefix will not be stripped from the downstream call. This logic is encoded into a "PreDecorationFilter". My objective is to act on the request AFTER the PreDecorationFilter and specify the backend to be either the legacy or the modern version. Given this, a filter which acts on the request looks like this:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
...

@Service
public class RatioBasedRoutingZuulFilter extends ZuulFilter {

    public static final String LEGACY_APP = "legacy";
    public static final String MODERN_APP = "modern";
    
    private Random random = new Random();
    
    @Autowired
    private RatioRoutingProperties ratioRoutingProperties;

    @Override
    public String filterType() {
        return "pre";
    }

    @Override
    public int filterOrder() {
        return FilterConstants.PRE_DECORATION_FILTER_ORDER + 1;
    }

    @Override
    public boolean shouldFilter() {
        RequestContext ctx = RequestContext.getCurrentContext();
        return ctx.containsKey(SERVICE_ID_KEY)
                && ctx.get(SERVICE_ID_KEY).equals("ratio-route");
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();

        if (isTargetedToLegacy()) {
            ctx.put(SERVICE_ID_KEY, LEGACY_APP);
        } else {
            ctx.put(SERVICE_ID_KEY, MODERN_APP);
        }
        return null;
    }

    boolean isTargetedToLegacy() {
        return random.nextInt(100) < ratioRoutingProperties.getOldPercent();
    }
}

The filter is set to act after the "PreDecorationFilter" by overriding the filterOrder() method. The routing logic is fairly naive but should work for most cases. The serviceId being set in the Zuul context has a value of "legacy" or "modern" and represents a "named" Ribbon client - a handle using which the details of the backend can be set. So with Spring Cloud, my named clients are mapped to the legacy and modern versions of the app the following way:


legacy:
  ribbon:
    listOfServers: http://localhost:8081

modern:
  ribbon:
    DeploymentContextBasedVipAddresses: modern-app

Here just for a little more variation I am making a direct call to an endpoint for the legacy app and making a call via Eureka for the modern version of the application.


If you are interested in exploring the entirety of the application it is available in my github repo


With the entire set-up in place, a small test with the legacy version handling 20% of the traffic confirms that the filter works effectively.
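
One rough way to exercise the routing from the command line is to fire a batch of requests at the Zuul endpoint and eyeball where they land in the logs of the two apps - the host, port and path here are made up purely for illustration:

for i in {1..100}; do
  curl -s -o /dev/null http://localhost:8080/routes/some-endpoint
done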

Conclusion

Spring Cloud support for Netflix Zuul makes handling such routing scenarios a cinch, and should be a good fit for any organization with these kinds of migration needs.

Friday, May 19, 2017

Cloud Foundry Custom User Provided Services(CUPS) and tagging

Custom User Provided Services, or CUPS for short, is a way to deliver credentials for external services to an application hosted on Cloud Foundry.

Consider a set of credentials represented as a json of the following form:

{
 "hostname": "mysql-broker.local.pcfdev.io",
 "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser\u0026password=somepass",
 "name": "somedb",
 "password": "somepass",
 "port": 3306,
 "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
 "username": "someuser"
}

I could create a user provided service out of these values using cf-cli. The following is highly bash shell specific, so on a different shell the mileage is likely to vary:

CUPS_PARAM=$(cat <<-'EOF'
{
 "hostname": "mysql-broker.local.pcfdev.io",
 "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser\u0026password=somepass",
 "name": "somedb",
 "password": "somepass",
 "port": 3306,
 "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
 "username": "someuser"
}
EOF
)

cf create-user-provided-service mycups -p ''"$CUPS_PARAM"''

This Custom User provided service can be bound to an app:

cf bind-service myapp mycups

and the application can retrieve the credentials via an environment variable called VCAP_SERVICES at runtime.
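
At runtime this environment variable holds the bound services and their credentials as a plain JSON string; a minimal sketch of getting at it from Kotlin (the JSON parsing is left out):

val vcapServices: String? = System.getenv("VCAP_SERVICES")
// vcapServices now holds the JSON document with the "credentials" block shown above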


Issue

There is one small issue with Custom User Provided services when compared to normal services created via Service Brokers on Cloud Foundry - there is no simple way to tag a Custom User Provided service. Tags are sometimes useful in getting a little more information about the service bound to an app and are extensively used by Spring Cloud Connectors to connect to services.


Solution

I have written a custom service broker called the CUPS tagging broker, using which a service can be created with all the parameters normally passed to create the CUPS; additionally, since it is a normal service, it can be tagged.

Assuming that the "CUPS tagging broker" has been installed using the instructions here, an equivalent user provided service with tags can be created the following way, with two tags attached to the service:

cf create-service cups-tagging-service default my-cups-tagged -c ''"$CUPS_PARAM"'' -t "tag1, tag2"

If I were to bind this service to an app, the VCAP_SERVICES environment variable of the app would be along these lines:

{"cups-tagging-service":[{
  "credentials": {
    "hostname": "mysql-broker.local.pcfdev.io",
    "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser&password=somepass",
    "name": "somedb",
    "password": "somepass",
    "port": 3306,
    "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
    "username": "someuser"
  },
  "syslog_drain_url": null,
  "volume_mounts": [

  ],
  "label": "cups-tagging-service",
  "provider": null,
  "plan": "default",
  "name": "my-cups-tagged",
  "tags": [
    "cups-tag",
    "tag1",
    "tag2"
  ]
}]}

See how the two additional tags show up.

That is all there is to the CUPS tagging Service Broker!





Thursday, May 4, 2017

Integrating Gatling into a Gradle build - Understanding SourceSets and Configuration

I recently worked on a project where we had to integrate the excellent load testing tool Gatling into a Gradle based build. There are gradle plugins available which make this easy, two of them being this and this; however, for most needs a simple execution of the command line tool itself suffices, so this post will go into some details of how gatling can be hooked up into a gradle build and, in the process, understand some good gradle concepts.


SourceSets and Configuration


To execute the gatling cli I need to do two things: I need a location for the source code and related content of the Gatling simulations, and I need a way to get the gatling libraries. This is where two concepts of Gradle (SourceSets and Configuration) come into play.

Let us start with the first one - SourceSets.

SourceSets


SourceSets are simply a logical grouping of related files and are best demonstrated with an example. If I were to add a "java" plugin to a gradle build:

apply plugin: 'java'


the sourceSets property would now show up with two values, "main" and "test", and if I wanted to find the details of these sourceSets, a gradle task can be used for printing the details:

task sourceSetDetails {
    doLast {
        sourceSets {
            main {
                println java.properties
                println resources.properties
            }
        
            test {
                println java.properties
                println resources.properties
            }
        }
    }
}

Coming back to gatling, I can essentially create a new sourceSet to hold the gatling simulations:

sourceSets {
    simulations
}

This would now expect the gatling simulations to reside in the "src/simulations/java" folder and the resources related to them in the "src/simulations/resources" folder, which is okay, but ideally I would want to keep them totally separate from the project sources. I would want my folder structure to have the load simulations in a "simulations/load" folder and the resources in a "simulations/resources" folder. This can be tweaked by first applying the "scala" plugin, which brings scala compilation support to the project, and then modifying the "simulations" source set along these lines:

apply plugin: 'scala'

sourceSets {
    simulations {
        scala {
            srcDirs = ['simulations/load']
        }
        resources {
            srcDirs = ['simulations/resources']
        }
    }
}

With this set of changes I can now put my simulations in the right place, but the dependencies on gatling and scala have not been pulled in yet - this is where the "configuration" feature of gradle comes in.

Configuration


Gradle Configuration is a way of grouping related dependencies together. If I were to print the existing set of configurations using a task:

task showConfigurations  {
    doLast {
        configurations.all { conf -> println(conf) }
    }
}

these show up:

configuration ':archives'
configuration ':compile'
configuration ':compileClasspath'
configuration ':compileOnly'
configuration ':default'
configuration ':runtime'
configuration ':simulationsCompile'
configuration ':simulationsCompileClasspath'
configuration ':simulationsCompileOnly'
configuration ':simulationsRuntime'
configuration ':testCompile'
configuration ':testCompileClasspath'
configuration ':testCompileOnly'
configuration ':testRuntime'
configuration ':zinc'

"compile" and "testCompile" should be familiar one's, that is where a normal source dependency and a test dependency is typically declared like this:

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.21'
    testCompile 'junit:junit:4.12'   
}

However, it also looks like configurations for the "simulations" sourceset are now available - "simulationsCompile", "simulationsRuntime" etc. So with this I could declare the dependencies required for my gatling simulations using these configurations; however, my intention is to declare a custom configuration just to go over the concept a little more, so let us explicitly declare one:

configurations {
    gatling
}

and use this configuration for declaring the dependencies of gatling:
dependencies {
    gatling 'org.scala-lang:scala-library:2.11.8'
    gatling 'io.gatling.highcharts:gatling-charts-highcharts:2.2.5'
}

Almost there - now how do we tell the sources in the simulations source set to use the dependencies from the gatling configuration? By tweaking the sourceSet a little:

sourceSets {
    simulations {
        scala {
            srcDirs = ['simulations/load']
        }
        resources {
            srcDirs = ['simulations/resources']
        }

        compileClasspath += configurations.gatling
    }
}


Running a Gatling Scenario

With the source sets and the configuration defined, all we need to do is to write a task to run a gatling simulation, which can be along these lines:

task gatlingRun(type: JavaExec) {
    description = 'Run gatling tests'

    // create the report directory when the task actually runs,
    // not at configuration time
    doFirst {
        new File("${buildDir}/reports/gatling").mkdirs()
    }

    classpath = sourceSets.simulations.runtimeClasspath + configurations.gatling

    main = "io.gatling.app.Gatling"
    args = ['-s', 'simulations.SimpleSimulation',
            '-sf', 'simulations/resources',
            '-df', 'simulations/resources',
            '-rf', "${buildDir}/reports/gatling"
    ]
}

See how the compiled sources of the simulations and the dependencies from the gatling configuration are being set as the classpath for the "JavaExec" task.


A good way to review this would be to look at a complete working sample that I have here in my github repo - https://github.com/bijukunjummen/cf-show-env

Friday, April 21, 2017

Spring Web-Flux - Functional Style with Cassandra Backend

In a previous post I had walked through the basics of Spring Web-Flux, which denotes the reactive support in the web layer of the Spring framework.

I had demonstrated an end-to-end sample using Spring Data Cassandra and the traditional annotation support in the Spring web layers, along these lines:

...
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
...

@RestController
@RequestMapping("/hotels")
public class HotelController {

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        ...
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        ...
    }

}

This looks like the traditional Spring Web annotations except for the return types; instead of returning the domain types, these endpoints return a Publisher type via the Mono and Flux implementations in reactor-core, and Spring Web handles streaming the content back.
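
As a quick refresher on these two types, here is a minimal standalone sketch using reactor-core directly (illustrative only, not from the sample project):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactorTypesSketch {
    public static void main(String[] args) {
        // Mono is a Publisher of at most one element
        Mono<String> one = Mono.just("hotel-1");

        // Flux is a Publisher of zero or more elements
        Flux<String> many = Flux.just("hotel-1", "hotel-2", "hotel-3");

        // nothing is emitted until a subscriber arrives
        one.subscribe(System.out::println);
        many.map(String::toUpperCase).subscribe(System.out::println);
    }
}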


In this post I will cover a different way of exposing the endpoints - using a functional style instead of the annotations style. Let me acknowledge that I have found Baeldung's article and Rossen Stoyanchev's post invaluable in my understanding of the functional style of exposing the web endpoints.


Mapping the annotations to routes

Let me start with a few annotation-based endpoints: one to retrieve an entity and one to save an entity:

@GetMapping(path = "/{id}")
public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
    return this.hotelService.findOne(uuid);
}

@PostMapping
public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
    return this.hotelService.save(hotel)
            .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
}


In a functional style of exposing the endpoints, each endpoint translates to a RouterFunction, and these can be composed to create all the endpoints of the app, along these lines:

package cass.web;

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                ));
    }
}


There are helper functions (nest, route, GET, accept, etc.) which make composing all the RouterFunction(s) together a breeze. Once an appropriate RouterFunction is found, the request is handled by a HandlerFunction, which in the above sample is abstracted by the HotelHandler; for the save and get functionality it looks like this:

import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.UUID;

@Service
public class HotelHandler {

    ...
    
    public Mono<ServerResponse> get(ServerRequest request) {
        UUID uuid = UUID.fromString(request.pathVariable("id"));
        Mono<ServerResponse> notFound = ServerResponse.notFound().build();
        return this.hotelService.findOne(uuid)
                .flatMap(hotel -> ServerResponse.ok().body(Mono.just(hotel), Hotel.class))
                .switchIfEmpty(notFound);
    }

    public Mono<ServerResponse> save(ServerRequest serverRequest) {
        Mono<Hotel> hotelToBeCreated = serverRequest.bodyToMono(Hotel.class);
        return hotelToBeCreated.flatMap(hotel ->
                ServerResponse.status(HttpStatus.CREATED).body(hotelService.save(hotel), Hotel.class)
        );
    }

    ...
}    
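
As an aside, a HandlerFunction is conceptually just a function from a request to a response. A minimal standalone sketch (illustrative, not from the project):

import org.springframework.web.reactive.function.server.HandlerFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

public class HandlerFunctionSketch {
    // takes a ServerRequest, produces a Mono<ServerResponse>
    static final HandlerFunction<ServerResponse> HELLO =
            request -> ServerResponse.ok().syncBody("Hello, " + request.pathVariable("name"));
}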


This is what a complete RouterFunction for all the APIs supported by the original annotation-based project looks like:

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                                .andRoute(PUT("/"), hotelHandler::update)
                                .andRoute(DELETE("/{id}"), hotelHandler::delete)
                                .andRoute(GET("/startingwith/{letter}"), hotelHandler::findHotelsWithLetter)
                                .andRoute(GET("/fromstate/{state}"), hotelHandler::findHotelsInState)
                ));
    }
}

Testing functional Routes

It is easy to test these routes too - Spring Webflux provides a WebTestClient to test the routes, while providing the ability to mock the implementations behind them.

For example, to test the get-by-id endpoint, I would bind the WebTestClient to the RouterFunction defined before and use the assertions it provides to test the behavior.

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.UUID;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;


public class GetRouteTests {

    private WebTestClient client;
    private HotelService hotelService;

    private UUID sampleUUID = UUID.fromString("fd28ec06-6de5-4f68-9353-59793a5bdec2");

    @Before
    public void setUp() {
        this.hotelService = mock(HotelService.class);
        when(hotelService.findOne(sampleUUID)).thenReturn(Mono.just(new Hotel(sampleUUID, "test")));
        HotelHandler hotelHandler = new HotelHandler(hotelService);
        
        this.client = WebTestClient.bindToRouterFunction(ApplicationRoutes.routes(hotelHandler)).build();
    }

    @Test
    public void testHotelGet() throws Exception {
        this.client.get().uri("/hotels/" + sampleUUID)
                .exchange()
                .expectStatus().isOk()
                .expectBody(Hotel.class)
                .isEqualTo(new Hotel(sampleUUID, "test"));
    }
}

Conclusion

The functional way of defining the routes is definitely a very different approach from the annotation-based one - I like that it is a far more explicit way of defining an endpoint and how the calls to the endpoint are handled; the annotations always felt a little more magical.

I have complete working code in my github repo, which may be easier to follow than the code in this post.

Saturday, April 1, 2017

Hystrix Command - Java 8 helpers

Let me start by acknowledging that what I am posting here is far from original; it is inspired by the post here by Demian Neidetcher, which was further adapted by two of my former colleagues - Alexey Dmitrovsky (T-Mobile) and Pavel Orda (Altoros).


Motivation

So the motivation is fairly simple: consider two remote calls whose results are aggregated in some way:

String  r1 = remoteCall1();
Integer r2 = remoteCall2();

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

Ideally you would want the remote calls to be protected by the excellent Hystrix library. What if I could do it along these lines:

String  r1 = execute("remote1", "remote1", () -> remoteCall1());
Integer r2 = execute("remote2", "remote2", () -> remoteCall2());

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

This way I have avoided all the boilerplate around needing to define an explicit HystrixCommand for each of my remote calls, and instead wrapped the remote calls in a Java 8 lambda expression which resolves to the Supplier functional interface.
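
For contrast, this is roughly the explicit-command boilerplate being avoided, for just one of the calls - an illustrative sketch, with remoteCall1 standing in for the actual remote call:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class RemoteCall1Command extends HystrixCommand<String> {

    public RemoteCall1Command() {
        super(HystrixCommandGroupKey.Factory.asKey("remote1"));
    }

    @Override
    protected String run() {
        return remoteCall1(); // placeholder for the actual remote call
    }

    private String remoteCall1() {
        return "result";
    }
}

// and at each call site:
// String r1 = new RemoteCall1Command().execute();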

Even better, a variation of this allows me to aggregate the results in a reactive way by returning an RxJava Observable instead:

Observable<String>  r1Obs = executeObservable("remote1", "remote1", () -> remoteCall1());
Observable<Integer> r2Obs = executeObservable("remote2", "remote2", () -> remoteCall2());

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("result1");

What about fallbacks? I can support them by taking in another lambda expression which transforms an exception into a reasonable fallback (and logs the exception in the process):


Observable<String> r1Obs = executeObservable("remote1", "remote1",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return "fallback";
        });
Observable<Integer> r2Obs = executeObservable("remote2", "remote2",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return 0;
        });

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("fallback0");


Implementation


The implementation is fairly simple; in its entirety it is the following:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandKey;
import rx.Observable;

import java.util.function.Function;
import java.util.function.Supplier;

public class GenericHystrixCommand<T> extends HystrixCommand<T> {

    private final Supplier<T> toRun;

    private final Function<Throwable, T> fallback;


    public static <T> T execute(String groupKey, String commandkey, Supplier<T> toRun) {
        return execute(groupKey, commandkey, toRun, null);
    }

    public static <T> T execute(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback).execute();
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun) {
        return executeObservable(groupKey, commandkey, toRun, null);
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback)
                .toObservable();
    }

    public GenericHystrixCommand(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
                .andCommandKey(HystrixCommandKey.Factory.asKey(commandkey)));
        this.toRun = toRun;
        this.fallback = fallback;
    }

    @Override
    protected T run() throws Exception {
        return this.toRun.get();
    }

    @Override
    protected T getFallback() {
        return (this.fallback != null)
                ? this.fallback.apply(getExecutionException())
                : super.getFallback();
    }
}


All it does is take in the code that needs to be wrapped as a Java 8 Supplier, and the fallback as a Java 8 Function. Note that the static helpers create a fresh command instance per call - this is required because a HystrixCommand instance can only be executed once.
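
A quick usage sketch of the synchronous variant with a fallback - the wrapper class and the failing remote call are placeholders, and GenericHystrixCommand is assumed to be on the classpath in the same package:

public class UsageSketch {
    public static void main(String[] args) {
        String r1 = GenericHystrixCommand.execute("remote1", "remote1",
                () -> remoteCall1(),   // the code to protect
                (t) -> "fallback");    // maps a failure to a fallback value
        System.out.println(r1);       // prints "fallback"
    }

    private static String remoteCall1() {
        throw new RuntimeException("!!"); // simulate a failing remote call
    }
}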


If you are interested in playing with this pattern, I have a little more fleshed out sample here in my github repo.

Sunday, March 19, 2017

Spring Web-Flux - First steps

The term Spring Web-Flux is used to denote the reactive programming support in the web layer of the Spring Framework. It provides support for creating reactive server-based web applications and also has client libraries to make remote REST calls.
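
As a small taste of that client side, here is a hedged sketch using the reactive WebClient - the URL and path are placeholders, and this is not part of the sample that follows:

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class WebClientSketch {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");

        Mono<String> body = client.get()
                .uri("/hotels/{id}", "some-id")
                .retrieve()
                .bodyToMono(String.class);

        System.out.println(body.block()); // blocking only for the sake of the demo
    }
}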

In this post, I will demonstrate a sample web application which makes use of Spring Web-Flux. As detailed here, the Web-Flux support in Spring 5+ supports two different programming styles - the traditional annotation-based style and the new functional style. In this post I will be sticking to the traditional annotation style, and will follow it up in another blog post (now available here) detailing a similar application but with endpoints defined in a functional style. My focus is going to be purely the programming model.

Data and Services Layer


I have a fairly simple REST interface supporting CRUD operations of a Hotel resource with a structure along these lines:

public class Hotel {

    private UUID id;

    private String name;

    private String address;

    private String state;

    private String zip;
    
    ....

}

I am using Cassandra as the store for this entity, and using the reactive support in Spring Data Cassandra allows the data layer to be reactive, supporting an API that looks like this. I have two repositories here: one facilitating the storage of the Hotel entity above, and another maintaining duplicated data which makes searching for a Hotel entity by its first letter a little more efficient:

public interface HotelRepository  {
    Mono<Hotel> save(Hotel hotel);
    Mono<Hotel> update(Hotel hotel);
    Mono<Hotel> findOne(UUID hotelId);
    Mono<Boolean> delete(UUID hotelId);
    Flux<Hotel> findByState(String state);
}

public interface HotelByLetterRepository {
    Flux<HotelByLetter> findByFirstLetter(String letter);
    Mono<HotelByLetter> save(HotelByLetter hotelByLetter);
    Mono<Boolean> delete(HotelByLetterKey hotelByLetterKey);
}


The operations which return one instance of an entity now return a Mono type, and operations which return more than one element return a Flux type.


Given this, let me touch on one quick use of returning the reactive types: when a Hotel is updated, I have to delete the duplicated data maintained via the HotelByLetter repository and recreate it again. This can be accomplished with something like the following, using the excellent operators provided by the Flux and Mono types:

public Mono<Hotel> update(Hotel hotel) {
    return this.hotelRepository.findOne(hotel.getId())
            .flatMap(existingHotel ->
                    this.hotelByLetterRepository.delete(new HotelByLetter(existingHotel).getHotelByLetterKey())
                            .then(this.hotelByLetterRepository.save(new HotelByLetter(hotel)))
                            .then(this.hotelRepository.update(hotel))).next();
}


Web Layer

Now to the focus of the article: support for the annotation-based reactive programming model in the web layer!

The @Controller and @RestController annotations have been the workhorses of Spring MVC's REST endpoint support for years; traditionally they have enabled taking in and returning Java POJOs. In the reactive model these controllers have been tweaked to take in and return the reactive types - Mono and Flux in my examples - but additionally also the RxJava 1/2 and Reactive Streams types.

Given this, my controller in almost its entirety looks like this:

@RestController
@RequestMapping("/hotels")
public class HotelController {

    ....

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        return this.hotelService.findOne(uuid);
    }

    @PostMapping
    public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
        return this.hotelService.save(hotel)
                .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
    }

    @PutMapping
    public Mono<ResponseEntity<Hotel>> update(@RequestBody Hotel hotel) {
        return this.hotelService.update(hotel)
                .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED))
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @DeleteMapping(path = "/{id}")
    public Mono<ResponseEntity<String>> delete(
            @PathVariable("id") UUID uuid) {
        return this.hotelService.delete(uuid).map((Boolean status) ->
                new ResponseEntity<>("Deleted", HttpStatus.ACCEPTED));
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        return this.hotelService.findHotelsStartingWith(letter);
    }

    @GetMapping(path = "/fromstate/{state}")
    public Flux<Hotel> findHotelsInState(
            @PathVariable("state") String state) {
        return this.hotelService.findHotelsInState(state);
    }
}

The traditional @RequestMapping, @GetMapping and @PostMapping annotations are unchanged; what is different is the return types - where at most one result is expected I now return a Mono type, and where a list would have been returned before, a Flux type is now returned.

With the use of the reactive support in Spring Data Cassandra, the entire path from the web layer to the services and back is reactive and, specifically for the focus of this article, eminently readable and intuitive.


It may be easier to simply try out the code behind this post which I have available in my github repo here.

Tuesday, February 28, 2017

Using UAA OAuth2 authorization server - client and resource

In a previous post I had gone over how to bring up an OAuth2 authorization server using the Cloud Foundry UAA project and populate it with some of the actors involved in an OAuth2 Authorization Code flow.


I have found that this article at the Digital Ocean site does a great job of describing the OAuth2 Authorization Code flow, so instead of rehashing what is involved in this flow I will jump directly into implementing it using Spring Boot/Spring Security.

The following diagram, inspired by the one here, shows a high-level flow in an Authorization Code grant type:




I will have two applications - a resource server exposing some resources of a user, and a client application that wants to access those resources on behalf of a user. The authorization server itself can be brought up as described in the previous blog post.

The rest of the post can be more easily followed along with the code available in my github repo here.

Authorization Server

The Cloud Foundry UAA server can easily be brought up using the steps described in my previous blog post. Once it is up, the following uaac commands can be used to populate the different credentials required to run the sample.

These scripts will create a client credential for the client app and add a user called "user1" with the scopes "resource.read" and "resource.write".

# Login as a canned client
uaac token client get admin -s adminsecret

# Add a client credential with client_id of client1 and client_secret of client1
uaac client add client1 \
   --name client1 \
   --scope resource.read,resource.write \
   -s client1 \
   --authorized_grant_types authorization_code,refresh_token,client_credentials \
   --authorities uaa.resource


# Another client credential resource1/resource1
uaac client add resource1 \
  --name resource1 \
  -s resource1 \
  --authorized_grant_types client_credentials \
  --authorities uaa.resource


# Add a user called user1/user1
uaac user add user1 -p user1 --emails user1@user1.com


# Add two scopes resource.read, resource.write
uaac group add resource.read
uaac group add resource.write

# Assign user1 both resource.read, resource.write scopes..
uaac member add resource.read user1
uaac member add resource.write user1


Resource Server

The resource server exposes a few endpoints, expressed using Spring MVC and secured using Spring Security, in the following way:

@RestController
public class GreetingsController {
    @PreAuthorize("#oauth2.hasScope('resource.read')")
    @RequestMapping(method = RequestMethod.GET, value = "/secured/read")
    @ResponseBody
    public String read(Authentication authentication) {
        return String.format("Read Called: Hello %s", authentication.getCredentials());
    }

    @PreAuthorize("#oauth2.hasScope('resource.write')")
    @RequestMapping(method = RequestMethod.GET, value = "/secured/write")
    @ResponseBody
    public String write(Authentication authentication) {
        return String.format("Write Called: Hello %s", authentication.getCredentials());
    }
}

There are two endpoint URIs being exposed - "/secured/read", authorized for the scope "resource.read", and "/secured/write", authorized for the scope "resource.write".

The configuration which secures these endpoints and marks the application as a resource server is the following:

@Configuration
@EnableResourceServer
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true, prePostEnabled = true)
public class ResourceServerConfiguration extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
        resources.resourceId("resource");
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http
                .antMatcher("/secured/**")
                .authorizeRequests()
                .anyRequest().authenticated();
    }
}

This configuration, along with properties describing how the token is to be validated, is all that is required to get the resource server running.
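
Those validation properties are not shown above; as an illustration, for a JWT-based token they would be along the lines of the following sketch, assuming a locally running UAA - the exact values depend on the setup and are not taken from the repo:

# application.properties - a hedged sketch
security.oauth2.resource.jwt.key-uri=http://localhost:8080/uaa/token_key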


Client

The client configuration for OAuth2 using Spring Security OAuth2 is also fairly simple - the @EnableOAuth2Sso annotation pulls in all the required configuration to wire up the Spring Security filters for OAuth2 flows:

@EnableOAuth2Sso
@Configuration
public class OAuth2SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(WebSecurity web) throws Exception {
        super.configure(web);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable();

        //@formatter:off
        http.authorizeRequests()
                .antMatchers("/secured/**")
                    .authenticated()
                .antMatchers("/")
                    .permitAll()
                .anyRequest()
                    .authenticated();

        //@formatter:on

    }

}

To call a downstream system, the client has to pass on the OAuth token as a header in the downstream calls. This is done by hooking in a specialized RestTemplate called the OAuth2RestTemplate that can grab the access token from the context and pass it downstream; once it is hooked up, a secure downstream call looks like this (the wiring of the template itself is sketched after this snippet):

public class DownstreamServiceHandler {

    private final OAuth2RestTemplate oAuth2RestTemplate;
    private final String resourceUrl;


    public DownstreamServiceHandler(OAuth2RestTemplate oAuth2RestTemplate, String resourceUrl) {
        this.oAuth2RestTemplate = oAuth2RestTemplate;
        this.resourceUrl = resourceUrl;
    }


    public String callRead() {
        return callDownstream(String.format("%s/secured/read", resourceUrl));
    }

    public String callWrite() {
        return callDownstream(String.format("%s/secured/write", resourceUrl));
    }

    public String callInvalidScope() {
        return callDownstream(String.format("%s/secured/invalid", resourceUrl));
    }

    private String callDownstream(String uri) {
        try {
            ResponseEntity<String> responseEntity = this.oAuth2RestTemplate.getForEntity(uri, String.class);
            return responseEntity.getBody();
        } catch(HttpStatusCodeException statusCodeException) {
            return statusCodeException.getResponseBodyAsString();
        }
    }
}
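
The OAuth2RestTemplate itself can be wired up as a bean; a minimal sketch, assuming Spring Boot's OAuth2 SSO auto-configuration is providing the client context and the protected resource details:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@Configuration
public class OAuth2RestTemplateConfig {

    // both parameters are auto-configured when @EnableOAuth2Sso
    // and the security.oauth2.client.* properties are in place
    @Bean
    public OAuth2RestTemplate oAuth2RestTemplate(OAuth2ProtectedResourceDetails details,
                                                 OAuth2ClientContext clientContext) {
        return new OAuth2RestTemplate(details, clientContext);
    }
}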


Demonstration

The client and the resource server can be brought up using the instructions here. Once all the systems are up, accessing the client will present the user with a page which looks like this:


Accessing the secure page, will result in a login page being presented by the authorization server:



The client requests the "resource.read" and "resource.write" scopes from the user, and the user is prompted to authorize these scopes:


Assuming that the user has authorized "resource.read" but not "resource.write", the token will be presented to the user:

At this point, if a downstream resource which requires the "resource.read" scope is requested, it should be retrieved:


And if a downstream resource is requested with a scope that the user has not authorized - "resource.write" in this instance:



Reference

  • Most of the code is based on the Cloud Foundry UAA application samples available here - https://github.com/pivotal-cf/identity-sample-apps
  • The code in the post is here: https://github.com/bijukunjummen/oauth-uaa-sample