Sunday, January 15, 2017

Gradle Plugins DSL and Spring-Boot Plugin

The Gradle plugins DSL is a new Gradle feature which provides a very succinct way of adding a plugin to a Gradle-based project. A good way to show the utility of this new mechanism is to look at how it simplifies a sample Spring Boot based Gradle build file.

If I were to generate a sample Gradle-based Spring Boot project from the excellent http://start.spring.io site, the snippet of the build file which adds in the Spring Boot Gradle plugin looks like this:

buildscript {
 ext {
  springBootVersion = '1.4.3.RELEASE'
 }
 repositories {
  mavenCentral()
 }
 dependencies {
  classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
 }
}

apply plugin: 'org.springframework.boot'

The new plugins DSL simplifies this boilerplate drastically. An equivalent declaration using it is the following:

plugins {
  id "org.springframework.boot" version "1.4.3.RELEASE"
}

This IMHO reads far better, though it does require some level of mental parsing. The best way to understand this new syntax may be to know that it works in concert with the Gradle plugin portal, a centralized repository of plugins, to resolve plugin-related dependencies. The page for the Spring Boot plugin is here - https://plugins.gradle.org/plugin/org.springframework.boot.
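As an aside, core Gradle plugins do not need a version in this syntax since they ship with Gradle itself; a plugins block combining a core plugin with a portal-hosted one would look something like this (a quick sketch):

plugins {
  id "java"
  id "org.springframework.boot" version "1.4.3.RELEASE"
}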

Wednesday, January 11, 2017

Deploying akka-http app to Cloud Foundry - Part 2

In a preceding post I had gone over the steps to deploy a simple akka-http app to Cloud Foundry. The gist of it was that as long as there is a way to create a runnable fat (uber) jar, the deployment is very straightforward - Cloud Foundry's Java buildpack can take the bits and wire up everything needed to get it up and running in the Cloud Foundry environment.

Here I wanted to go over a slightly more involved scenario - one where the app has an external database dependency, say to a MySQL database.

In a local environment the details of the database would have been resolved using a configuration typically specified like this:

sampledb = {
  url = "jdbc:mysql://localhost:3306/mydb?useSSL=false"
  user = "myuser"
  password = "mypass"
}

If the MySQL database were to reside outside of the Cloud Foundry environment, this approach of specifying the database configuration would continue to work nicely. However, if the service resides in a Cloud Foundry marketplace, then the details of the service are created dynamically at bind time with the application.

Just to make this a little more concrete, in my local PCF Dev, I have a marketplace with the "p-mysql" service available.



And if I were to create a "service instance" out of this and bind the instance to an app, the details of the service become available to the bound application.


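For reference, the equivalent cf CLI commands would be along these lines (the plan and instance name follow this example; the app name here is illustrative):

$ cf create-service p-mysql 512mb mydb
$ cf bind-service sample-akka-http mydb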
Essentially what happens at this point is that the application gets an environment variable called VCAP_SERVICES, and this has to be parsed to get the db credentials. VCAP_SERVICES in the current scenario looks something like this:

{
  "p-mysql": [
   {
    "credentials": {
     "hostname": "mysql-broker.local.pcfdev.io",
     "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/myinstance?user=user\u0026password=pwd",
     "name": "myinstance",
     "password": "pwd",
     "port": 3306,
     "uri": "mysql://user:pwd@mysql-broker.local.pcfdev.io:3306/myinstance?reconnect=true",
     "username": "user"
    },
    "label": "p-mysql",
    "name": "mydb",
    "plan": "512mb",
    "provider": null,
    "syslog_drain_url": null,
    "tags": [
     "mysql"
    ]
   }
  ]
 }

This can be parsed very easily using Typesafe Config; a sample (admittedly hacky) implementation looks like this:

import com.typesafe.config.{Config, ConfigFactory}

import scala.collection.JavaConverters._

def getConfigFor(serviceType: String, name: String): Config = {
  // VCAP_SERVICES holds the bound service details as JSON, which Typesafe Config can parse directly
  val vcapServices = sys.env("VCAP_SERVICES")
  val rootConfig = ConfigFactory.parseString(vcapServices)
  val configs = rootConfig.getConfigList(serviceType).asScala
    .filter(_.getString("name") == name)
    .map(instance => instance.getConfig("credentials"))

  if (configs.nonEmpty) configs.head
  else ConfigFactory.empty()
}

and called in the following way:
val dbConfig = cfServicesHelper.getConfigFor("p-mysql", "mydb")

This would dynamically resolve the credentials for MySQL and allow the application to connect to the database.

An easier way to follow all this may be to look at the sample code available in my GitHub repo here - https://github.com/bijukunjummen/sample-akka-http-rest.

Tuesday, January 3, 2017

Deploying akka-http app to Cloud Foundry - Part 1

It is easy to deploy an akka-http application to Cloud Foundry. I experimented with a few variations recently and will cover ways to deploy an akka-http based REST app in two parts - first a simple app with no external resource dependencies, then a slightly more complex CRUD app that maintains state in a MySQL database.


Prerequisites


A quick way to get a running Cloud Foundry instance is using PCF Dev, a small footprint distribution of Cloud Foundry that can be started up on a developer laptop.

The sample app that I am using is a stock demo app available via the Lightbend Activator; if you have the activator binaries available locally, you can create a quick project using the following command:


Generating the sample App and running it locally

activator new sample-akka-http akka-http-microservice


The application can be brought up by running sbt and using the "re-start" task:

$ sbt
> re-start

By default the app comes up on port 9000 and can be tested with a sample curl call - more here:

$ curl http://localhost:9000/ip/8.8.8.8
{
  "city": "Mountain View",
  "query": "8.8.8.8",
  "country": "United States",
  "lon": -122.0881,
  "lat": 37.3845
}


Deploying to Cloud Foundry

There is one change that needs to be made to the application to get it to work in Cloud Foundry - adjusting the port that the application listens on. When the app is deployed to Cloud Foundry, an environment variable called "PORT" holds the port that the application is expected to listen on. The change looks like this for the sample app:

val port = if (sys.env.contains("PORT")) sys.env("PORT").toInt else config.getInt("http.port")
Http().bindAndHandle(routes, config.getString("http.interface"), port)

Here I look for the PORT environment variable and use that port if available.

There is already the sbt "assembly" plugin available, which creates a fat jar with the appropriate manifest entries to be able to start up the main class of the application:

> assembly
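If a project does not already include it, the plugin is typically added via a line in project/plugins.sbt along these lines (the version here is indicative of this time frame):

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")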

Go to "target/scala-2.11" folder and run the fat jar:
$ java -jar sample-akka-http-assembly-1.0.jar

And the application should come up cleanly.

Having a fat jar greatly simplifies the deployment to Cloud Foundry - in the Cloud Foundry world, a buildpack takes the application binaries and layers in the runtime (JVM, a container like Tomcat, application certs, monitoring agents, etc.). Given this fat jar, all that is needed to deploy to Cloud Foundry is a command which looks like this:

$ cf push -p sample-akka-http-assembly-1.0.jar sample-akka-http  

Assuming that you are targeting the local PCF Dev environment, the application should get cleanly deployed using the appropriate buildpack (the Java buildpack in this instance) and be available to handle requests in a few minutes.


This I can test using a curl command similar to the one before:
$ curl http://sample-akka-http.local.pcfdev.io/ip/8.8.8.8
{
  "city": "Mountain View",
  "query": "8.8.8.8",
  "country": "United States",
  "lon": -122.0881,
  "lat": 37.3845
}

That is all there is to it - if some customizations need to be made to the application, say a larger JVM heap size, this can easily be done via additional command-line flags or an application manifest (see the sketch below). The process to deploy with external resource dependencies is a little more complex and I will cover it in a follow-up post.
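For example, a minimal manifest.yml along these lines (the values here are purely illustrative) would push the app with a larger memory allocation, and a plain "cf push" from the folder holding the manifest would pick it up:

---
applications:
- name: sample-akka-http
  memory: 1G
  path: target/scala-2.11/sample-akka-http-assembly-1.0.jar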

Tuesday, December 27, 2016

Practical Reactor operations - Retrieve Details of a Cloud Foundry Application

CF-Java-Client is a library which enables programmatic access to the Cloud Foundry Cloud Controller API. It is built on top of Project Reactor, an implementation of the Reactive Streams specification, and it is a fun exercise to use this library to do something practical in a Cloud Foundry environment.

Consider a sample use case - given an application id, I need to find more details of the application, along with the details of the organization and the space that it belongs to.

To start with, the basis of all API operations with cf-java-client is a type unsurprisingly called CloudFoundryClient (org.cloudfoundry.client.CloudFoundryClient); cf-java-client's GitHub page has details on how to get hold of an instance of this type.

Given a CloudFoundryClient instance, the details of an application given its id can be obtained as follows:

Mono<GetApplicationResponse> applicationResponseMono = this.cloudFoundryClient
  .applicationsV2().get(GetApplicationRequest.builder().applicationId(applicationId).build());

Note that the API returns a reactor "Mono" type; this is in general the behavior of all the API calls of cf-java-client:


  • If an API returns one item then typically a Mono type is returned
  • If the API is expected to return more than one item then a Flux type is returned, and
  • If the API is called purely for side effects - say printing some information - then it returns a Mono<Void> type (see the sketch after this list)

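As a quick illustration of these conventions using plain Reactor types (a generic sketch, not actual cf-java-client calls):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// one item - a Mono
Mono<String> one = Mono.just("one result");
// multiple items - a Flux
Flux<String> many = Flux.just("result1", "result2", "result3");
// purely for side effects - a Mono<Void> that completes when the printing is done
Mono<Void> done = many.doOnNext(System.out::println).then();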

The next step is to retrieve the space identifier from the response and make an API call to retrieve the details of the space; it looks like this:

Mono<Tuple2<GetApplicationResponse, GetSpaceResponse>> appAndSpaceMono = applicationResponseMono
  .and(appResponse -> this.cloudFoundryClient.spaces()
    .get(GetSpaceRequest.builder()
      .spaceId(appResponse.getEntity().getSpaceId()).build()));



Here I am using an "and" operator to combine the application response with another Mono that returns the space information; the result is a "Tuple2" type holding both pieces of information - the application detail and the detail of the space that it is in.

Finally to retrieve the Organization that the app is deployed in:

Mono<Tuple3<GetApplicationResponse, GetSpaceResponse, GetOrganizationResponse>> t3 =
  appAndSpaceMono.then(tup2 -> this.cloudFoundryClient.organizations()
      .get(GetOrganizationRequest.builder()
        .organizationId(tup2.getT2().getEntity()
          .getOrganizationId())
        .build())
      .map(orgResp -> Tuples.of(tup2.getT1(), tup2.getT2(),
        orgResp)));

Here a "then" operation is being used to retrieve the organization detail given the id from the previous step and the result added onto the previous tuple to create a Tuple3 type holding the "Application Detail", "Space Detail" and the "Organization Detail". "then" is the equivalent of flatMap operator familiar in the Scala and ReactiveX world.

This essentially covers the way you would typically deal with the "cf-java-client" library, making use of the fact that it is built on the excellent "Reactor" library and its collection of very useful operators to put results together. The final step is transforming the result to a type that may be more relevant to your domain and handling any errors along the way:

Mono<AppDetail> appDetail =  
 t3.map(tup3 -> {
   String appName = tup3.getT1().getEntity().getName();
   String spaceName = tup3.getT2().getEntity().getName();
   String orgName = tup3.getT3().getEntity().getName();
   return new AppDetail(appName, orgName, spaceName);
  }).otherwiseReturn(new AppDetail("", "", ""));


If you are interested in trying out a working sample, I have an example available in my GitHub repo here - https://github.com/bijukunjummen/boot-firehose-to-syslog

And the code shown in the article is available here - https://github.com/bijukunjummen/boot-firehose-to-syslog/blob/master/src/main/java/io.pivotal.cf.nozzle/service/CfAppDetailsService.java


Sunday, December 18, 2016

Spring Boot and Application Context Hierarchy

Spring Boot supports a simple way of specifying a Spring application context hierarchy.

This post simply demonstrates this feature; I am yet to find a good use for it in the projects I have worked on. Spring Cloud uses this feature to create a bootstrap context where properties are loaded up, if required, from an external configuration server and made available to the main application context later on.

To quickly take a step back - a Spring application context manages the lifecycle of all the beans registered with it. Application context hierarchies provide a way to reuse beans: beans defined in the parent context are accessible in the child contexts.
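As a quick illustration of this visibility rule outside of Spring Boot (a sketch, with hypothetical ParentConfig and ChildConfig @Configuration classes):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

AnnotationConfigApplicationContext parent =
        new AnnotationConfigApplicationContext(ParentConfig.class);
AnnotationConfigApplicationContext child =
        new AnnotationConfigApplicationContext();
child.setParent(parent);
child.register(ChildConfig.class);
child.refresh();

child.getBean(ParentBean.class);  // resolves - the lookup falls back to the parent context
parent.getBean(ChildBean.class);  // fails - the parent has no visibility into the child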

Consider a contrived use-case of using multiple application contexts and an application context hierarchy - the goal is to expose two different ports, with a different set of endpoints at each of these ports.


Child1 and Child2 are typical Spring Boot applications, along these lines:

package child1;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.PropertySource;
import root.RootBean;

@SpringBootApplication
@PropertySource("classpath:/child1.properties")
public class ChildContext1 {

    @Bean
    public ChildBean1 childBean(RootBean rootBean, @Value("${root.property}") String someProperty) {
        return new ChildBean1(rootBean, someProperty);
    }
}


Each of the applications resides in its own root package to avoid collisions when scanning for beans. Note that the beans in the child contexts depend on a bean that is expected to come from the root context.
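The root context itself would be a plain configuration defining the shared RootBean, along these lines (a sketch; the actual sample in the repo may differ):

package root;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RootContext {

    @Bean
    public RootBean rootBean() {
        return new RootBean();
    }
}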

The port to listen on is provided as a property; since the two contexts are expected to listen on different ports, I have explicitly specified the property file for each to load, with content along these lines:

server.port=8080
spring.application.name=child1

Given this set-up, Spring Boot provides a fluent interface to load up the root context and the two child contexts:

SpringApplicationBuilder appBuilder =
       new SpringApplicationBuilder()
               .parent(RootContext.class)
               .child(ChildContext1.class)
               .sibling(ChildContext2.class);

ConfigurableApplicationContext applicationContext  = appBuilder.run();

The application context returned by the SpringApplicationBuilder appears to be the final one in the chain, defined via ChildContext2 above.

If the application is now started up, there would be a root context with two different child contexts, each exposing an endpoint via a different port. A visualization via the /beans actuator endpoint shows this hierarchy.


Not everything is clean though; there are errors displayed in the console related to exporting JMX endpoints, however these are informational and don't appear to affect the start-up.

Samples are available in my GitHub repo.

Tuesday, November 29, 2016

Using Kafka with Junit

One of the neat features that the excellent Spring Kafka project provides, apart from an easier-to-use abstraction over the raw Kafka Producer and Consumer, is a way to use Kafka in tests. It does this by providing an embedded version of Kafka that can be set up and torn down very easily.

All that a project needs to include this support is the "spring-kafka-test" module; for a Gradle build it is added the following way:

testCompile "org.springframework.kafka:spring-kafka-test:1.1.2.BUILD-SNAPSHOT"

Note that I am using a snapshot version of the project as this has support for Kafka 0.10+.
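Since this is a snapshot, the Spring snapshot repository would likely need to be declared in the build as well, along these lines:

repositories {
 mavenCentral()
 maven { url "https://repo.spring.io/libs-snapshot" }
}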

With this dependency in place, an embedded Kafka can be spun up in a test using JUnit's @ClassRule:

@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(2, true, 2, "messages");

This would start up a Kafka cluster with 2 brokers and a topic called "messages" with 2 partitions; the class rule makes sure that the Kafka cluster is spun up before the tests are run and then shut down at the end of them.

Here is how a sample with a raw Kafka Producer/Consumer using this embedded Kafka cluster looks; the embedded Kafka can be used for retrieving the properties required by the Kafka Producer/Consumer:

Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);
producer.send(new ProducerRecord<>("messages", 0, 0, "message0")).get();
producer.send(new ProducerRecord<>("messages", 0, 1, "message1")).get();
producer.send(new ProducerRecord<>("messages", 1, 2, "message2")).get();
producer.send(new ProducerRecord<>("messages", 1, 3, "message3")).get();


Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sampleRawConsumer", "false", embeddedKafka);
consumerProps.put("auto.offset.reset", "earliest");

final CountDownLatch latch = new CountDownLatch(4);
ExecutorService executorService = Executors.newSingleThreadExecutor();
executorService.execute(() -> {
    KafkaConsumer<Integer, String> kafkaConsumer = new KafkaConsumer<>(consumerProps);
    kafkaConsumer.subscribe(Collections.singletonList("messages"));
    try {
        while (true) {
            ConsumerRecords<Integer, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<Integer, String> record : records) {
                LOGGER.info("consuming from topic = {}, partition = {}, offset = {}, key = {}, value = {}",
                        record.topic(), record.partition(), record.offset(), record.key(), record.value());
                latch.countDown();
            }
        }
    } finally {
        kafkaConsumer.close();
    }
});

assertThat(latch.await(90, TimeUnit.SECONDS)).isTrue();

A slightly more comprehensive test is available here.

Tuesday, November 22, 2016

Recipe for getting started with Spring Boot and Angular 2

I am primarily a service developer who has to create some passable UIs once in a while. I was adept at basic AngularJS 1 based UIs and could get stuff done using an approach that I have outlined before. With the advent of Angular 2, I unfortunately had to throw my previous approach out of the window, and I now have an approach with Spring Boot/Angular 2 that works equally well.

The approach essentially builds on the fact that a Spring Boot web application looks for static content in a very specific location - the src/main/resources/static folder from the root of the project - so if I can get the final JS content into this folder, then I am golden.

So let us jump into it.

Pre-requisites

There is primarily one pre-requisite - the excellent angular-cli tool, which is a blessing for UI-ignorant developers like me.

The second, optional but useful, pre-requisite is the Spring Boot CLI tool described here.


Generating a SPA Project


Given these two tools, first create a Spring Boot web project either by starting from http://start.spring.io or using the following CLI command:

spring init --dependencies=web spring-boot-angular2-static-sample

At this point a starter project should have been generated in the spring-boot-angular2-static-sample folder. From that folder, generate an Angular 2 project using the angular-cli:

ng init

Change the location where the angular-cli builds the artifacts by editing the angular-cli.json file at the root of the generated project.
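The generated file has a number of settings; the relevant change is pointing the "outDir" entry at the Spring Boot static-content folder (a sketch, with the other generated entries omitted):

{
  "apps": [
    {
      "outDir": "src/main/resources/static"
    }
  ]
}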
Now build the static content:

ng build

This should get the static content into the src/main/resources/static folder.

And start up the Spring Boot app:

mvn spring-boot:run

and the Angular 2 based UI should render cleanly!

Live Reload

One of the advantages of using the angular-cli is the excellent tool-chain that it comes with - one piece of it being the ability to make changes and see them reflected in the UI immediately. This ability is lost with the approach documented here, where the UI is primarily driven by services hosted in the Spring Boot project. Getting back the live-reload feature for Angular 2 development is however a cinch.

First, proxy the backend: create a proxy.conf.json file with an entry which looks like this:

{
  "/api": {
    "target": "http://localhost:8080",
    "secure": false
  }
}

and start up the Angular-cli server using the command:

ng serve --proxy-config proxy.conf.json

and start up the server part independently using:

mvn spring-boot:run

That is it - now the UI development can be carried out independently of the server-side APIs! For an even greater punch, just use the excellent devtools packaged with Spring Boot to get a live-reload (more a restart) feature on the server side also.
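Devtools is typically pulled in with a dependency along these lines in the pom.xml (marked optional so that it is not applied transitively to other modules using the project):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>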

Conclusion

This is the recipe I use for any basic UI that I may have to create. This approach is probably not ideal for large projects but should be a perfect fit for small internal projects. I have a sample starter with a backend call hooked up, available in my GitHub repo here.