Sunday, December 10, 2017

Spring Webflux - Writing Filters

Spring Webflux is the new reactive web framework available as part of Spring 5+. The way filters are written in a traditional Spring MVC based application (Servlet Filter, HandlerInterceptor) is very different from the way a filter is written in a Spring Webflux based application, and this post will briefly go over the WebFlux approach to filters.

Approach 1 - WebFilter

The first approach, using a WebFilter, affects all endpoints broadly and covers Webflux endpoints written in a functional style as well as endpoints written using the annotation style. A WebFilter in Kotlin looks like this:

    @Bean
    fun sampleWebFilter(): WebFilter {
        return WebFilter { e: ServerWebExchange, c: WebFilterChain ->
            val l: MutableList<String> = e.getAttributeOrDefault(KEY, mutableListOf())
            l.add("From WebFilter")
            e.attributes.put(KEY, l)
            c.filter(e)
        }
    }

The WebFilter adds a request attribute whose value is a collection, into which the filter simply puts a message indicating that it has intercepted the request.
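
Since the attribute travels with the ServerWebExchange, any endpoint can read it back. Purely as an illustrative sketch (not from the original samples), a hypothetical annotated endpoint reading the attribute might look like this, assuming the KEY constant and the Greeting data class used elsewhere in these posts:

import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.server.ServerWebExchange

// Hypothetical endpoint - reads back the attribute populated by the WebFilter above
@RestController
class WebFilterAwareController {

    @GetMapping("/web/hello")
    fun hello(exchange: ServerWebExchange): Greeting {
        val messages: MutableList<String> = exchange.getAttributeOrDefault(KEY, mutableListOf())
        return Greeting("$messages: Hello")
    }
}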

Approach 2 - HandlerFilterFunction


The second approach is more focused and covers only endpoints written using the functional style - here specific RouterFunctions can be hooked up with a filter.

Consider a Spring Webflux endpoint defined the following way:

@Bean
fun route(): RouterFunction<*> = router {
    GET("/react/hello", { r ->
        ok().body(fromObject(
                Greeting("${r.attribute(KEY).orElse("[Fallback]: ")}: Hello")
        ))
    })

    POST("/another/endpoint", TODO())

    PUT("/another/endpoint", TODO())
}

A HandlerFilterFunction which intercepts these APIs alone can be added in a highly focused way, along these lines:

fun route(): RouterFunction<*> = router {
    GET("/react/hello", { r ->
        ok().body(fromObject(
                Greeting("${r.attribute(KEY).orElse("[Fallback]: ")}: Hello")
        ))
    })
    
    POST("/another/endpoint", TODO())
    
    PUT("/another/endpoint", TODO())
    
}.filter({ r: ServerRequest, n: HandlerFunction<ServerResponse> ->
    val greetings: MutableList<String> = r.attribute(KEY)
            .map { v ->
                v as MutableList<String>
            }.orElse(mutableListOf())

    greetings.add("From HandlerFilterFunction")

    r.attributes().put(KEY, greetings)
    n.handle(r)
})

Note that there is no need to be explicit about the types in Kotlin - I have added them just to make the types in some of the lambda expressions clear.


Conclusion

The WebFilter approach and the HandlerFilterFunction approach are very different from the Spring Web MVC way of writing filters using the Servlet spec or HandlerInterceptors. This post summarizes the new approaches - I have samples available in my git repo which go over these in more detail.

Friday, December 1, 2017

Annotated controllers - Spring Web/Webflux and Testing

Spring Webflux and Spring Web are two entirely different web stacks. Spring Webflux, however, continues to support an annotation-based programming model.

An endpoint defined using these two stacks may look similar, but the way to test such an endpoint is fairly different, and a user writing such an endpoint has to be aware of which stack is active and formulate the test accordingly.

Sample Endpoint

Consider a sample annotation based endpoint:


import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestBody
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController


data class Greeting(val message: String)

@RestController
@RequestMapping("/web")
class GreetingController {
    
    @PostMapping("/greet")
    fun handleGreeting(@RequestBody greeting: Greeting): Greeting {
        return Greeting("Thanks: ${greeting.message}")
    }
    
}


Testing with Spring Web

If Spring Boot 2 starters were used to create this application with Spring Web as the starter, specified in a Gradle build file the following way:

compile('org.springframework.boot:spring-boot-starter-web')

then such an endpoint would be tested using a mock web runtime, referred to as MockMvc:

import org.junit.Test
import org.junit.runner.RunWith
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest
import org.springframework.http.MediaType
import org.springframework.test.context.junit4.SpringRunner
import org.springframework.test.web.servlet.MockMvc
import org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post
import org.springframework.test.web.servlet.result.MockMvcResultMatchers.content


@RunWith(SpringRunner::class)
@WebMvcTest(GreetingController::class)
class GreetingControllerMockMvcTest {

    @Autowired
    lateinit var mockMvc: MockMvc

    @Test
    fun testHandleGreetings() {
        mockMvc
                .perform(
                        post("/web/greet")
                                .contentType(MediaType.APPLICATION_JSON)
                                .content(""" 
                                |{
                                |"message": "Hello Web"
                                |}
                            """.trimMargin())
                ).andExpect(content().json("""
                    |{
                    |"message": "Thanks: Hello Web"
                    |}
                """.trimMargin()))
    }
}


Testing with Spring Webflux

If, on the other hand, the Spring Webflux starter were pulled in, say with the following Gradle dependency:

compile('org.springframework.boot:spring-boot-starter-webflux')

then the test of this endpoint would use the excellent WebTestClient class, along these lines:

import org.junit.Test
import org.junit.runner.RunWith
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.reactive.WebFluxTest
import org.springframework.http.HttpHeaders
import org.springframework.test.context.junit4.SpringRunner
import org.springframework.test.web.reactive.server.WebTestClient
import org.springframework.web.reactive.function.BodyInserters


@RunWith(SpringRunner::class)
@WebFluxTest(GreetingController::class)
class GreetingControllerTest {

    @Autowired
    lateinit var webTestClient: WebTestClient

    @Test
    fun testHandleGreetings() {
        webTestClient.post()
                .uri("/web/greet")
                .header(HttpHeaders.CONTENT_TYPE, "application/json")
                .body(BodyInserters
                        .fromObject(""" 
                                |{
                                |   "message": "Hello Web"
                                |}
                            """.trimMargin()))
                .exchange()
                .expectStatus().isOk
                .expectBody()
                .json("""
                    |{
                    |   "message": "Thanks: Hello Web"
                    |}
                """.trimMargin())
    }
}


Conclusion

It is easy to assume that, since the programming model looks very similar across the Spring Web and Spring Webflux stacks, a test written against a Spring Web endpoint would carry over unchanged to Spring Webflux. This is however not true - as developers we have to be mindful of the underlying stack that comes into play and formulate the test accordingly. I hope this post clarifies how such a test should be crafted.

Monday, November 20, 2017

Using Micrometer with Spring Boot 2

This is a very quick introduction to using the excellent Micrometer library to instrument a Spring Boot 2 based application and record the metrics in Prometheus.


Introduction

Micrometer provides a Java based facade over the client libraries that the different monitoring tools provide.

As an example, consider Prometheus: if I were to integrate my Java application with Prometheus, I would have used the client library that Prometheus provides for Java, and its data structures (Counter, Gauge etc.) to collect and provide data to Prometheus. If for any reason the monitoring system is changed, the code would have to be changed for the new system.

Micrometer attempts to alleviate this by providing a common facade that applications code against; binding to a monitoring system is purely a runtime concern, so changing the metrics system from Prometheus to, say, Datadog just requires changing a runtime library without any code changes.
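
As a rough sketch of what coding against the Micrometer facade looks like - the meter name, tags and function below are illustrative, not from any specific application:

import io.micrometer.core.instrument.Metrics
import io.micrometer.core.instrument.Timer
import java.time.Duration

// The code depends only on the Micrometer facade; the registry implementation on the
// classpath (Prometheus, Datadog, ...) decides where the measurements end up
val downstreamTimer: Timer = Metrics.timer("downstream.call.duration", "uri", "/messages")

fun recordDownstreamCall(elapsed: Duration) {
    downstreamTimer.record(elapsed)
}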



Instrumenting a Spring Boot 2 Application

Nothing special needs to be done to get Micrometer support for a Spring Boot 2 based app - adding in the actuator starter pulls in Micrometer as a transitive dependency.

For example, in a Gradle based project this is sufficient:

dependencies {
    compile('org.springframework.boot:spring-boot-starter-actuator')
    ...
}

Additionally, since the intention is to send the data to Prometheus, a dependency has to be pulled in which provides the necessary Micrometer SPIs:


dependencies {
    ...
    runtime("io.micrometer:micrometer-registry-prometheus")
    ...
}

By default Micrometer provides a set of intelligent bindings which instrument the Spring Web and Webflux endpoints, adding meters to collect the duration and count of calls. It also provides bindings to collect JVM metrics - memory usage, threadpool usage, etc.
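
To get a feel for what these bindings do, here is a rough sketch of registering a couple of the JVM binders by hand against a simple in-memory registry - Spring Boot's auto-configuration does the equivalent automatically:

import io.micrometer.core.instrument.binder.jvm.JvmMemoryMetrics
import io.micrometer.core.instrument.binder.jvm.JvmThreadMetrics
import io.micrometer.core.instrument.simple.SimpleMeterRegistry

fun main() {
    val registry = SimpleMeterRegistry()

    // Each binder contributes a family of gauges and counters to the registry
    JvmMemoryMetrics().bindTo(registry)
    JvmThreadMetrics().bindTo(registry)

    registry.meters.forEach { meter -> println(meter.id) }
}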

An application property needs to be enabled to expose an endpoint which Prometheus will use to scrape the metrics data:

endpoints:
  prometheus:
    enabled: true

If the application is brought up at this point, the "/application/prometheus" endpoint should be available, showing a rich set of metrics.


The default set of metrics is very rich and should cover the most common metrics requirements of an application; if additional metrics are required they can easily be added, as shown in the following code snippet:

class MessageHandler {

    private val counter = Metrics.counter("handler.calls", "uri", "/messages")

    fun handleMessage(req: ServerRequest): Mono<ServerResponse> {
        return req.bodyToMono<Message>().flatMap { m ->
            counter.increment()
            ...
        }
    }
}

Integrating with Prometheus

Prometheus can be configured to scrape data from the endpoint exposed by the Spring Boot 2 app; a snippet of the Prometheus configuration looks like this:

scrape_configs:
  - job_name: 'myapp'
    metrics_path: /application/prometheus
    static_configs:
      - targets: ['localhost:8080']

This is not really a production configuration; in a production setting it may be better to use a Prometheus Push Gateway to broker the collection of metrics.

Prometheus provides a basic UI to preview the information that it scrapes; it can be accessed by default at port 9090, where a graph of the data produced during a load test can be viewed.



Conclusion

Micrometer makes it very easy to instrument an application and collect a good set of basic metrics which can be stored and visualized in Prometheus. If you are interested in following this further, I have a sample application using Micrometer available here - https://github.com/bijukunjummen/boot2-load-demo

Sunday, October 22, 2017

Raw performance numbers - Spring Boot 2 Webflux vs Spring Boot 1

Summary

A Spring Boot 2 application using Spring Webflux outperforms a Spring Boot 1 based application by a huge margin for IO heavy workloads. The following is the summarized result of a load test - response time for an IO heavy transaction with varying numbers of concurrent users:



When the number of concurrent users remains low (say less than 1000), both Spring Boot 1 and Spring Boot 2 handle the load well and the 95th percentile response time remains just milliseconds above the expected value of 300 ms.

At higher concurrency levels, the async non-blocking IO and reactive support in Spring Boot 2 starts showing its colors - the 95th percentile response time even with a very heavy load of 5000 users remains at around 312 ms! Spring Boot 1 records a lot of failures and high response times at these concurrency levels.

Details



My set-up for the performance test is the following:



The sample applications expose an endpoint (/passthrough/message) which in turn calls a downstream service. The request message to the endpoint looks something like this:

{
  "id": "1",
  "payload": "sample payload",
  "delay": 3000
}

The downstream service delays its response based on the "delay" attribute in the message (in milliseconds).
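
The downstream service itself is not shown in this post; a rough sketch of a WebFlux handler honoring the delay might look like the following, with the Message and MessageAck shapes being assumptions for illustration:

import org.springframework.web.reactive.function.BodyInserters.fromObject
import org.springframework.web.reactive.function.server.ServerRequest
import org.springframework.web.reactive.function.server.ServerResponse
import org.springframework.web.reactive.function.server.bodyToMono
import reactor.core.publisher.Mono
import java.time.Duration

// Assumed shapes of the request and acknowledgement payloads
data class Message(val id: String, val payload: String, val delay: Long)
data class MessageAck(val id: String, val received: String)

fun handleMessage(req: ServerRequest): Mono<ServerResponse> =
        req.bodyToMono<Message>().flatMap { message ->
            // Wait for the requested number of milliseconds before acknowledging
            Mono.delay(Duration.ofMillis(message.delay))
                    .flatMap { ServerResponse.ok().body(fromObject(MessageAck(message.id, "received"))) }
        }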


Spring Boot 1 Application

I have used Spring Boot 1.5.8.RELEASE for the Boot 1 version of the application. The endpoint is a simple Spring MVC controller which in turn uses Spring's RestTemplate to make the downstream call. Everything is synchronous and blocking and I have used the default embedded Tomcat container as the runtime. This is the raw code for the downstream call:

public MessageAck handlePassthrough(Message message) {
    ResponseEntity<MessageAck> responseEntity = this.restTemplate.postForEntity(targetHost 
                                                            + "/messages", message, MessageAck.class);
    return responseEntity.getBody();
}

Spring Boot 2 Application

The Spring Boot 2 version of the application exposes a Spring Webflux based endpoint and uses WebClient, the new non-blocking, reactive alternative to RestTemplate, to make the downstream call - I have also used Kotlin for the implementation, which has no bearing on the performance. The runtime server is Netty:

import org.springframework.http.HttpHeaders
import org.springframework.http.MediaType
import org.springframework.web.reactive.function.BodyInserters.fromObject
import org.springframework.web.reactive.function.client.ClientResponse
import org.springframework.web.reactive.function.client.WebClient
import org.springframework.web.reactive.function.client.bodyToMono
import org.springframework.web.reactive.function.server.ServerRequest
import org.springframework.web.reactive.function.server.ServerResponse
import org.springframework.web.reactive.function.server.bodyToMono
import reactor.core.publisher.Mono

class PassThroughHandler(private val webClient: WebClient) {

    fun handle(serverRequest: ServerRequest): Mono<ServerResponse> {
        val messageMono = serverRequest.bodyToMono<Message>()

        return messageMono.flatMap { message ->
            passThrough(message)
                    .flatMap { messageAck ->
                        ServerResponse.ok().body(fromObject(messageAck))
                    }
        }
    }

    fun passThrough(message: Message): Mono<MessageAck> {
        return webClient.post()
                .uri("/messages")
                .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .header(HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON_VALUE)
                .body(fromObject<Message>(message))
                .exchange()
                .flatMap { response: ClientResponse ->
                    response.bodyToMono<MessageAck>()
                }
    }
}



Details of the Performance Test


The test is simple: for different sets of concurrent users (300, 1000, 1500, 3000, 5000), I send a message with the delay attribute set to 300 ms, and each user repeats the scenario 30 times with a pause of 1 to 2 seconds between requests. I am using the excellent Gatling tool to generate this load.

Results

These are the results as captured by Gatling:

[Gatling report screenshots comparing Boot 1 and Boot 2 at 300, 1000, 1500, 3000 and 5000 concurrent users]

Reference

The sample application and the load scripts are available in my github repo - https://github.com/bijukunjummen/boot2-load-demo.

Thursday, October 5, 2017

Kata - implementing a functional List data structure in Kotlin

I saw an exercise in chapter 3 of the excellent Functional Programming in Scala book which deals with defining functional data structures and uses the linked list as an example of how to go about developing such a data structure. I wanted to try this sample using Kotlin to see to what extent I can replicate it.

A Scala skeleton of the sample is available in the companion code to the book here, and my attempt in Kotlin is heavily inspired by (copied from!) the answer key in the repository.

Basic

This is what a basic List representation in Kotlin looks like:

sealed class List<out A> {

    abstract val head: A

    abstract val tail: List<A>
}

data class Cons<out T>(override val head: T, override val tail: List<T>) : List<T>()

object Nil : List<Nothing>() {
    override val head: Nothing
        get() {
            throw NoSuchElementException("head of an empty list")
        }

    override val tail: List<Nothing>
        get() {
            throw NoSuchElementException("tail of an empty list")
        }
}

The List has been defined as a sealed class, which means that all subclasses of the sealed class have to be defined in the same file. This is useful for pattern matching on the type of an instance and will come up repeatedly in most of the functions.

There are two implementations of this List -
1. Cons, a non-empty list consisting of a head element and a tail List
2. Nil, an empty List

This is already very useful in its current form, consider the following which constructs a List and retrieves elements from it:

val l1:List<Int> = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
assertThat(l1.head).isEqualTo(1)
assertThat(l1.tail).isEqualTo(Cons(2, Cons(3, Cons(4, Nil))))


val l2:List<String> = Nil
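
Some of the later samples use a small list(...) factory helper to cut down on the Cons-chaining noise. It is not shown in this post, but a minimal sketch of such a vararg helper could look like this:

// A possible vararg factory for building a List; assumed by the later samples
fun <A> list(vararg elements: A): List<A> =
        elements.foldRight(Nil as List<A>) { e, acc -> Cons(e, acc) }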


Pattern Matching with "when" expression

Now to jump onto implementing some methods of List. Since List is a sealed class it allows for some good pattern matching, say to get the sum of elements in the List:

fun sum(l: List<Int>): Int {
    return when(l) {
        is Cons -> l.head + sum(l.tail)
        is Nil -> 0
    }
}

The compiler understands that Cons and Nil are the only two paths to take for the match on a list instance.
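
As a quick usage check (my own addition, not part of the book's skeleton), the sum of a small list can be asserted like this:

val l = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
assertThat(sum(l)).isEqualTo(10)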

Slightly more complex operations: "drop", which drops some number of elements from the beginning of the list, and "dropWhile", which takes in a predicate and drops elements from the beginning that match the predicate:

fun drop(n: Int): List<A> {
    return if (n <= 0)
        this
    else when (this) {
        is Cons -> tail.drop(n - 1)
        is Nil -> Nil
    }
}

val l = list(4, 3, 2, 1)
assertThat(l.drop(2)).isEqualTo(list(2, 1))

fun dropWhile(p: (A) -> Boolean): List<A> {
    return when(this) {
        is Cons -> if (p(this.head)) this.tail.dropWhile(p) else this
        is Nil -> Nil
    }
}

val l = list(1, 2, 3, 5, 8, 13, 21, 34, 55, 89)
assertThat(l.dropWhile({e -> e < 20})).isEqualTo(list(21, 34, 55, 89))

These show off the power of pattern matching with the "when" expression in Kotlin.


Unsafe Variance!

To touch on a wrinkle, see how the List is defined with a type parameter declared as "out T" - this is called "declaration-site variance", which in this instance makes List co-variant on the type T. Declaration-site variance is explained beautifully in the Kotlin documentation. With the way List is declared, it allows me to do something like this:

val l:List<Int> = Cons(1, Cons(2, Nil))
val lAny: List<Any> = l

Now, consider an "append" function which appends another list:

fun append(l: List<@UnsafeVariance A>): List<A> {
    return when (this) {
        is Cons -> Cons(head, tail.append(l))
        is Nil -> l
    }
}

Here a second list is taken as a parameter to the append function; Kotlin would normally flag this parameter, because it is okay to return a co-variant type but not to take it in as a parameter. However, since we know the List in its current form is immutable, I can get past this by marking the type parameter with the "@UnsafeVariance" annotation.
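
A small usage example of append (my own addition, using the list(...) helper sketched earlier):

val l1 = list(1, 2)
val l2 = list(3, 4)
assertThat(l1.append(l2)).isEqualTo(list(1, 2, 3, 4))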

Folding

Folding operations allow the list to be "folded" into a result based on some aggregation of the individual elements in it.

Consider foldLeft:

fun <B> foldLeft(z: B, f: (B, A) -> B): B {
    tailrec fun foldLeft(l: List<A>, z: B, f: (B, A) -> B): B {
        return when (l) {
            is Nil -> z
            is Cons -> foldLeft(l.tail, f(z, l.head), f)
        }
    }

    return foldLeft(this, z, f)
}

If a list were to consist of elements (2, 3, 5, 8), then foldLeft is equivalent to "f(f(f(f(z, 2), 3), 5), 8)"

With this higher order function in place, the sum function can be expressed this way:

val l = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
assertThat(l.foldLeft(0, {r, e -> r + e})).isEqualTo(10)


foldRight looks like the following in Kotlin:

fun <B> foldRight(z: B, f: (A, B) -> B): B {
    return when(this) {
        is Cons -> f(this.head, tail.foldRight(z, f))
        is Nil -> z
    }
}

If a list were to consist of elements (2, 3, 5, 8), then foldRight is equivalent to "f(2, f(3, f(5, f(8, z))))"

This version of foldRight, though cooler looking, is not tail recursive. A more stack-friendly version can be implemented using the previously defined tail-recursive foldLeft, by simply reversing the List and calling foldLeft internally, the following way:

fun reverse(): List<A> {
    return foldLeft(Nil as List<A>, { b, a -> Cons(a, b) })
}

fun <B> foldRightViaFoldLeft(z: B, f: (A, B) -> B): B {
    return reverse().foldLeft(z, { b, a -> f(a, b) })
}
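
A quick usage example (again my own addition) which also shows that both variants process the elements in the same order:

val l = list(2, 3, 5, 8)
assertThat(l.foldRight("") { e, acc -> "$e$acc" }).isEqualTo("2358")
assertThat(l.foldRightViaFoldLeft("") { e, acc -> "$e$acc" }).isEqualTo("2358")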

map and flatMap

map is a function which transforms each element of this list:

fun <B> map(f: (A) -> B): List<B> {
    return when (this) {
        is Cons -> Cons(f(head), tail.map(f))
        is Nil -> Nil
    }
}

An example of using this function is the following:

val l = Cons(1, Cons(2, Cons(3, Nil)))
val l2 = l.map { e -> e.toString() }
assertThat(l2).isEqualTo(Cons("1", Cons("2", Cons("3", Nil))))

flatMap is a variation of map where the transforming function returns another list and the final result flattens everything - best demonstrated using an example after the implementation:

fun <B> flatMap(f: (a: A) -> List<@UnsafeVariance B>): List<B> {
    return flatten(map { a -> f(a) })
}

companion object {
    fun <A> flatten(l: List<List<A>>): List<A> {
        return l.foldRight(Nil as List<A>, { a, b -> a.append(b) })
    }
}


val l = Cons(1, Cons(2, Cons(3, Nil)))

val l2 = l.flatMap { e -> list(e.toString(), e.toString()) }

assertThat(l2)
        .isEqualTo(
                Cons("1", Cons("1", Cons("2", Cons("2", Cons("3", Cons("3", Nil)))))))


This covers the basics involved in implementing a functional list data structure using Kotlin. There were a few rough edges compared to the Scala version, but I think it mostly works. Admittedly the sample can likely be improved drastically - if you have any observations on how to improve the code, please send me a PR at my github repo for this sample or leave a comment on this post.

Tuesday, September 19, 2017

Testing time based reactor core streams with Virtual time

Reactor Core implements the Reactive Streams specification and deals with handling a (potentially unlimited) stream of data. If it interests you, do check out the excellent documentation it offers. Here I am assuming some basic familiarity with the Reactor Core Flux and Mono types, and will cover how Reactor Core provides an abstraction over time itself to enable testing of functions which depend on the passage of time.

For certain operators of Reactor Core, time is an important consideration - for example, a variation of the "interval" function which emits an increasing number every 5 seconds after an initial "delay" of 10 seconds:

val flux = Flux
        .interval(Duration.ofSeconds(10), Duration.ofSeconds(5))
        .take(3)

Testing such a stream of data while depending on the normal passage of time would be terrible - such a test would take about 20 seconds to finish.

Reactor Core provides a solution - an abstraction over time itself, the Virtual time based Scheduler - that offers a neat way to test these kinds of operations in a deterministic way.

Let me show it in two ways: an explicit way which should make the actions of the Virtual time based scheduler very clear, followed by the recommended approach of testing with Reactor Core.

import org.assertj.core.api.Assertions.assertThat
import org.junit.Test
import reactor.core.publisher.Flux
import reactor.test.scheduler.VirtualTimeScheduler
import java.time.Duration
import java.util.concurrent.CountDownLatch


class VirtualTimeTest {
    
    @Test
    fun testExplicit() {
        val mutableList = mutableListOf<Long>()

        val scheduler = VirtualTimeScheduler.getOrSet()
        val flux = Flux
                .interval(Duration.ofSeconds(10), Duration.ofSeconds(5), scheduler)
                .take(3)

        val latch = CountDownLatch(1)
        
        flux.subscribe({ l -> mutableList.add(l) }, { _ -> }, { latch.countDown() })
        
        scheduler.advanceTimeBy(Duration.ofSeconds(10))
        assertThat(mutableList).containsExactly(0L)
        
        scheduler.advanceTimeBy(Duration.ofSeconds(5))
        assertThat(mutableList).containsExactly(0L, 1L)
        
        scheduler.advanceTimeBy(Duration.ofSeconds(5))
        assertThat(mutableList).containsExactly(0L, 1L, 2L)

        latch.await()
    }
    
}

1. First, the scheduler for the "Flux.interval" function is set to be the Virtual time based Scheduler.

2. The stream of data is expected to be emitted every 5 seconds after a 10 second delay.

3. VirtualTimeScheduler provides an "advanceTimeBy" method to advance the virtual time by a Duration, so the time is first advanced by the delay time of 10 seconds, at which point the first element (0) is expected to be emitted.

4. Then it is subsequently advanced by 5 seconds twice to get 1 and 2 respectively.

This is deterministic and the test completes quickly. This version of the test is ugly though - it uses a list to collect and assert the results on, and a CountDownLatch to control when the test terminates. A far cleaner approach for testing Reactor Core types is to use the excellent StepVerifier class, and a test which makes use of this class looks like this:

import org.junit.Test
import reactor.core.publisher.Flux
import reactor.test.StepVerifier
import reactor.test.scheduler.VirtualTimeScheduler
import java.time.Duration

class VirtualTimeTest {

    @Test
    fun testWithStepVerifier() {

        VirtualTimeScheduler.getOrSet()
        val flux = Flux
                .interval(Duration.ofSeconds(10), Duration.ofSeconds(5))
                .take(3)

        StepVerifier.withVirtualTime({ flux })
                .expectSubscription()
                .thenAwait(Duration.ofSeconds(10))
                .expectNext(0)
                .thenAwait(Duration.ofSeconds(5))
                .expectNext(1)
                .thenAwait(Duration.ofSeconds(5))
                .expectNext(2)
                .verifyComplete()
    }
 }

This new test with StepVerifier reads well, with each step advancing time and asserting on what is expected at that point.



Friday, September 1, 2017

Spring Webflux - Kotlin DSL - a walkthrough of the implementation

In a previous blog post I described how Spring Webflux, the reactive programming support in the Spring Web Framework, uses a Kotlin based DSL to enable users to describe routes in a very intuitive way. Here I want to explore a little of the underlying implementation.


A sample DSL describing a set of endpoints looks like this:

package sample.routes

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.http.MediaType.APPLICATION_JSON
import org.springframework.web.reactive.function.server.router
import sample.handler.MessageHandler

@Configuration
class AppRoutes(private val messageHandler: MessageHandler) {

    @Bean
    fun apis() = router {
        (accept(APPLICATION_JSON) and "/messages").nest {
            GET("/", messageHandler::getMessages)
            POST("/", messageHandler::addMessage)
            GET("/{id}", messageHandler::getMessage)
            PUT("/{id}", messageHandler::updateMessage)
            DELETE("/{id}", messageHandler::deleteMessage)
        }
    }

}


To analyze the sample, let me start with a smaller working example:

import org.junit.Test
import org.springframework.test.web.reactive.server.WebTestClient
import org.springframework.web.reactive.function.server.ServerResponse.ok
import org.springframework.web.reactive.function.server.router

class AppRoutesTest {

    @Test
    fun testSimpleGet() {
        val routerFunction = router {
            GET("/isokay", { _ -> ok().build() })
        }

        val client = WebTestClient.bindToRouterFunction(routerFunction).build()

        client.get()
                .uri("/isokay")
                .exchange()
                .expectStatus().isOk
    }
}

The heart of the route definition is the "router" function:

import org.springframework.web.reactive.function.server.router
...
val routerFunction = router {
    GET("/isokay", { _ -> ok().build() })
}

which is defined the following way:

fun router(routes: RouterFunctionDsl.() -> Unit) = RouterFunctionDsl().apply(routes).router()

The parameter "routes" is a special type of lambda expression called a lambda expression with a receiver. This means that in the context of the router function, this lambda expression can only be invoked on instances of "RouterFunctionDsl", which is what is done in the body of the function using the apply method; it also means that inside the body of the lambda expression "this" refers to an instance of "RouterFunctionDsl". Knowing this opens up access to the methods of "RouterFunctionDsl", one of which is GET, used in the example.
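
To make the lambda-with-receiver idea concrete outside of Spring, here is a minimal standalone sketch of the same pattern - the Greeter type and greeter function are purely illustrative:

class Greeter {
    fun greet(name: String) = println("Hello, $name")
}

// "init" is a lambda with Greeter as its receiver - inside the block, "this" is a Greeter
fun greeter(init: Greeter.() -> Unit): Greeter = Greeter().apply(init)

fun main() {
    greeter {
        greet("Kotlin DSL") // resolved against the implicit Greeter receiver
    }
}

Coming back to the DSL, GET is defined as follows: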

fun GET(pattern: String, f: (ServerRequest) -> Mono<ServerResponse>) {
  ...
}

There are other ways to express the same endpoint:

GET("/isokay2")({ _ -> ok().build() })

implemented in Kotlin very cleverly as:

fun GET(pattern: String): RequestPredicate = RequestPredicates.GET(pattern)

operator fun RequestPredicate.invoke(f: (ServerRequest) -> Mono<ServerResponse>) {
 ...
}

Here GET with the pattern returns a "RequestPredicate", for which an extension function called invoke has been defined (in the context of the DSL); invoke is a specially named operator function in Kotlin.

Or a third way:

"/isokay" { _ -> ok().build() }

which is implemented by adding an extension function on the String type, defined the following way:

operator fun String.invoke(f: (ServerRequest) -> Mono<ServerResponse>) {
  ...
}
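
Again, here is a tiny standalone sketch of the same trick (the names are mine), showing how an invoke operator extension makes a plain String callable with a trailing lambda:

// Makes any String "callable": s { ... } desugars to s.invoke { ... }
operator fun String.invoke(block: (String) -> String): String = block(this)

fun main() {
    val result = "/isokay" { path -> "handling $path" }
    println(result) // prints: handling /isokay
}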


I feel that Spring Webflux makes excellent use of Kotlin DSLs in making some of these route definitions easy to read while remaining concise.

This should provide enough of a primer to explore the source code of the routing DSL in Spring Webflux.

My samples are available in a github repository here - https://github.com/bijukunjummen/webflux-route-with-kotlin