Tuesday, September 16, 2014

Scala and Java 8 type inference in higher order functions sample

One of the concepts mentioned in the Functional Programming in Scala book is type inference in higher order functions in Scala - how it fails in certain situations and a workaround for it. So consider a sample higher order function, purely for demonstration:

def filter[A](list: List[A], p: A => Boolean): List[A] = {
  list.filter(p)
}

Ideally, passing in a list of, say, integers, you would expect the predicate function to not require an explicit type:

val l = List(1, 5, 9, 20, 30) 

filter(l, i => i < 10)

Type inference does not work in this specific instance, however; the fix is to specify the type explicitly:

filter(l, (i:Int) => i < 10)

Or, a better fix is to use currying - then the type inference works!

def filter[A](list: List[A])(p: A => Boolean): List[A] = {
  list.filter(p)
}

filter(l)(i => i < 10) 
//OR
filter(l)(_ < 10) 
I was curious whether Java 8 type inference has this issue, and tried a similar sample with a Java 8 lambda expression. The following is an equivalent filter function -
// assumes a static import of java.util.stream.Collectors.toList
public <A> List<A> filter(List<A> list, Predicate<A> condition) {
 return list.stream().filter(condition).collect(toList());
}
and type inference for the predicate works cleanly -
List<Integer> ints = Arrays.asList(1, 5, 9, 20, 30);
List<Integer> lessThan10 = filter(ints, i -> i < 10);
Another blog entry on a related topic by the author of the "Functional Programming in Scala" book is available here - http://pchiusano.blogspot.com/2011/05/making-most-of-scalas-extremely-limited.html

Sunday, September 14, 2014

Customizing HttpMessageConverters with Spring Boot and Spring MVC

Exposing a REST based endpoint for a Spring Boot application, or for that matter a straight Spring MVC application, is straightforward. The following is a controller exposing an endpoint to create an entity based on the content POST'ed to it:

@RestController
@RequestMapping("/rest/hotels")
public class RestHotelController {
        ....
 @RequestMapping(method=RequestMethod.POST)
 public Hotel create(@RequestBody @Valid Hotel hotel) {
  return this.hotelRepository.save(hotel);
 }
}

Internally, Spring MVC uses a component called an HttpMessageConverter to convert the HTTP request to an object representation and back.

A set of default converters is automatically registered, supporting a whole range of resource representation formats - JSON and XML, for instance.
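
If you are curious about exactly which converters have been registered, one way to peek at them is to ask the RequestMappingHandlerAdapter for its converters at startup - a minimal sketch, with the ConverterLister name being my own invention:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;

@Component
public class ConverterLister implements CommandLineRunner {

 @Autowired
 private RequestMappingHandlerAdapter handlerAdapter;

 @Override
 public void run(String... args) {
  // Print the type of each HttpMessageConverter that Spring MVC registered
  for (HttpMessageConverter<?> converter : handlerAdapter.getMessageConverters()) {
   System.out.println(converter.getClass().getName());
  }
 }
}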

Now, if there is a need to customize the message converters in some way, Spring Boot makes it simple. As an example, consider if the POST method in the sample above needs to be a little more flexible and should ignore properties which are not present in the Hotel entity. Typically this can be done by configuring the Jackson ObjectMapper, and all that needs to be done with Spring Boot is to create a new HttpMessageConverter bean - that would end up overriding all the default message converters, this way:

@Bean
public MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
 MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
 ObjectMapper objectMapper = new ObjectMapper();
 objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
 jsonConverter.setObjectMapper(objectMapper);
 return jsonConverter;
}

This works well for a Spring-Boot application. However, for straight Spring MVC applications which do not make use of Spring-Boot, configuring a custom converter is a little more complicated - the defaults are not registered automatically, and an end user has to be explicit about registering them. The following is the relevant code for Spring 4 based applications:

@Configuration
public class WebConfig extends WebMvcConfigurationSupport {

 @Bean
 public MappingJackson2HttpMessageConverter customJackson2HttpMessageConverter() {
  MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
  ObjectMapper objectMapper = new ObjectMapper();
  objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
  jsonConverter.setObjectMapper(objectMapper);
  return jsonConverter;
 }
 
 @Override
 public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
  converters.add(customJackson2HttpMessageConverter());
  super.addDefaultHttpMessageConverters(converters);
 }
}

Here, WebMvcConfigurationSupport provides a way to more finely tune the MVC tier configuration of a Spring based application. In the configureMessageConverters method, the custom converter is registered, and then an explicit call is made to ensure that the defaults are registered as well - a little more work than for a Spring-Boot based application.
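
As an aside, Spring 4 also offers a middle ground that avoids re-registering the defaults by hand - the extendMessageConverters callback runs after the default converters are in place, so a custom converter can simply be slotted in ahead of them. A minimal sketch of this alternative, assuming the same Jackson customization as above:

import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

 @Override
 public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
  MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
  ObjectMapper objectMapper = new ObjectMapper();
  objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
  jsonConverter.setObjectMapper(objectMapper);
  // Add at the head of the list so it is picked ahead of the default Jackson converter
  converters.add(0, jsonConverter);
 }
}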

Thursday, August 28, 2014

Spring MVC endpoint documentation with Spring Boot

A long time ago I had posted about a way to document all the uri mappings exposed by a typical Spring MVC based application. The steps to do this, however, are very verbose and require a fairly deep knowledge of some of the underlying Spring MVC components.

Spring Boot makes this kind of documentation way simpler. All you need to do for a Spring-boot based application is to activate the Spring-boot actuator. Adding in the actuator brings a lot more production ready features to a Spring-boot application; my focus here, however, is specifically on the endpoint mappings.

So, first add in the actuator as a dependency to a Spring-boot application:

<dependency>
 <groupId>org.springframework.boot</groupId>
 <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

If the Spring-Boot app is started up now, a REST endpoint at the http://machinename:8080/mappings url should be available which lists out all the uri's exposed by the application. A snippet of this information looks like the following in a sample application I have:

{
  "/**/favicon.ico" : {
    "bean" : "faviconHandlerMapping"
  },
  "/hotels/partialsEdit" : {
    "bean" : "viewControllerHandlerMapping"
  },
  "/hotels/partialsCreate" : {
    "bean" : "viewControllerHandlerMapping"
  },
  "/hotels/partialsList" : {
    "bean" : "viewControllerHandlerMapping"
  },
  "/**" : {
    "bean" : "resourceHandlerMapping"
  },
  "/webjars/**" : {
    "bean" : "resourceHandlerMapping"
  },
  "{[/hotels],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public java.lang.String mvctest.web.HotelController.list(org.springframework.ui.Model)"
  },
  "{[/rest/hotels/{id}],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public mvctest.domain.Hotel mvctest.web.RestHotelController.get(long)"
  },
  "{[/rest/hotels],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public java.util.List<mvctest.domain.Hotel> mvctest.web.RestHotelController.list()"
  },
  "{[/rest/hotels/{id}],methods=[DELETE],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public org.springframework.http.ResponseEntity<java.lang.Boolean> mvctest.web.RestHotelController.delete(long)"
  },
  "{[/rest/hotels],methods=[POST],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public mvctest.domain.Hotel mvctest.web.RestHotelController.create(mvctest.domain.Hotel)"
  },
  "{[/rest/hotels/{id}],methods=[PUT],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public mvctest.domain.Hotel mvctest.web.RestHotelController.update(long,mvctest.domain.Hotel)"
  },
  "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public java.lang.String mvctest.web.RootController.onRootAccess()"
  },
  "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)"
  },
  
  ....

Note that by default the json is not formatted; to get formatted json, just ensure that you have the following entry in your application.properties file:

http.mappers.json-pretty-print=true

This listing is much more comprehensive than the listing that I originally had.

The same information can of course be presented in a better way by rendering it to html. I have opted to use angularjs to present this information; the following is the angularjs service factory to retrieve the mappings, and the controller which makes use of this factory to populate a mappings model:

app.factory("mappingsFactory", function($http) {
    var factory = {};
    factory.getMappings = function() {
        return $http.get(URLS.mappingsUrl);
    }
    return factory;
});

app.controller("MappingsCtrl", function($scope, $state, mappingsFactory) {
    function init() {
        mappingsFactory.getMappings().success(function(data) {
           $scope.mappings = data;
        });
    }

    init();
});

The returned mappings model is essentially a map of maps - the key being the uri path exposed by the Spring-Boot application, and the values being the name of the bean handling the endpoint and, if available, the details of the controller method handling the call. This can be rendered using a template of the following form:

<table class="table table-bordered table-striped">
    <thead>
    <tr>
        <th width="50%">Path</th>
        <th width="10%">Bean</th>
        <th width="40%">Method</th>
    </tr>
    </thead>
    <tbody>
    <tr ng-repeat="(k, v) in mappings">
        <td>{{k}}</td>
        <td>{{v.bean}}</td>
        <td>{{v.method}}</td>
    </tr>
    </tbody>
</table>

The final rendered view of the endpoint mappings is displayed in the following way:

Here is a sample github project with the rendering implemented: https://github.com/bijukunjummen/spring-boot-mvc-test

Thursday, August 21, 2014

GemFire XD cluster using Docker

I started learning how to build and use Docker containers a few days back, and one of my learning samples has been to build a series of docker containers holding a Pivotal GemFire XD cluster.

First the result and then I will go into some details on how the containers were built -

This is the GemfireXD topology that I wanted to build:


The topology consists of 2 GemFire XD servers, each running in its own process, and a GemFire XD locator to provide connectivity to clients using this cluster and to load balance between the 2 (or potentially more) server processes.

The following fig definition shows how my cluster is configured:

fig.yml:
locator:
  image: bijukunjummen/gfxd-locator
  ports:
    - "10334"
    - "1527:1527"
    - "7075:7075"

server1:
  image: bijukunjummen/gfxd-server
  ports:
    - "1528:1528"
  links:
    - locator
  environment: 
   - CLIENT_PORT=1528

server2:
  image: bijukunjummen/gfxd-server
  ports:
    - "1529:1529"
  links:
    - locator
  environment: 
   - CLIENT_PORT=1529   

This simple fig definition boots up and starts the 3 container cluster, linking the GemFire XD servers to the locator. Information about this cluster can be viewed through a tool called Pulse that GemFire XD comes packaged with:

This cluster definition is eminently repeatable - I was able to publish the 2 images, "gfxd-locator" and "gfxd-server", to Docker Hub, and using the fig.yml the entire cluster can be brought up by anybody with a local installation of Docker and Fig.

So how were the Docker images created?

I required two different Docker image types - a GemFire XD locator and a GemFire XD server. There is a lot in common between these images: they both use the same GemFire XD installation and differ only in how each process is started up. So I have a base image, defined in the Dockerfile at this github location, which builds on top of the CentOS image and deploys GemFire XD to the image. The GemFire XD server and locator images then derive from the base image, with ENTRYPOINTs specifying how each of the processes should be started up.
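
To make the layering concrete, the following is a rough sketch of what the server image's Dockerfile could look like - the base image name and the startup script here are illustrative assumptions, not the actual project files:

# Hypothetical Dockerfile for the GemFire XD server image
# Derive from the shared base image that holds the GemFire XD installation
FROM bijukunjummen/gfxd-base

# Start up a GemFire XD server process when the container boots
# (the script name is an assumption for illustration)
ENTRYPOINT ["/opt/gfxd/start-server.sh"]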

The entire project is available at this github location - https://github.com/bijukunjummen/docker-gfxd-cluster, and the README instructions there should provide enough information on how to build and run the cluster.

To conclude, this has been an excellent learning exercise on how Docker works and how easy Fig makes it to orchestrate multiple containers into a cohesive cluster and to share a repeatable configuration.

I would like to thank my friends Alvin Henrick and Jeff Cherng for their help with a good part of the Docker and GemFire XD configurations!

Friday, August 1, 2014

Deploying a Spring boot application to Cloud Foundry with Spring-Cloud

I have a small Spring boot based application that uses a Postgres database as a datastore. I wanted to document the steps involved in deploying this sample application to Cloud Foundry.

Some of the steps are described in the Spring Boot reference guide; however, the guides do not sufficiently explain how to integrate with the datastore provided in a cloud based environment.

Spring-cloud provides the glue for Spring based applications deployed on a cloud to discover and connect to bound services, so the first step is to pull the Spring-cloud libraries into the project with the following pom entries:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-spring-service-connector</artifactId>
	<version>1.0.0.RELEASE</version>
</dependency>

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-cloudfoundry-connector</artifactId>
	<version>1.0.0.RELEASE</version>
</dependency>

Once these dependencies are pulled in, connecting to a bound service is easy - just define a configuration along these lines:
@Configuration
public class PostgresCloudConfig extends AbstractCloudConfig {

	@Bean
	public DataSource dataSource() {
		return connectionFactory().dataSource();
	}

}

Spring-Cloud detects that the application is deployed on a specific cloud (currently Cloud Foundry or Heroku) by looking for certain characteristics of the deployed platform, discovers the bound services, recognizes that there is a bound service from which a Postgres based datasource can be created, and returns that datasource as a Spring bean.

This application can now be deployed cleanly to a Cloud Foundry based cloud. The sample application can be tried out in a version of Cloud Foundry deployed with bosh-lite; this is what the steps look like on my machine once Cloud Foundry is up and running with bosh-lite:

The following command creates a user provided service in Cloud Foundry:
cf create-user-provided-service psgservice -p '{"uri":"postgres://postgres:p0stgr3s@bkunjummen-mbp.local:5432/hotelsdb"}'

Now, push the app, but don't start it up yet - we can do that once the service above is bound to the app:
cf push spring-boot-mvc-test -p target/spring-boot-mvc-test-1.0.0-SNAPSHOT.war --no-start

Bind the service to the app and restart the app:
cf bind-service spring-boot-mvc-test psgservice
cf restart spring-boot-mvc-test

That is essentially it - Spring Cloud should take over at that point, cleanly parse the credentials from the bound service (which within Cloud Foundry translates to an environment variable called VCAP_SERVICES), and create the datasource from it.
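
As a side note, if more than one service were bound to the application, the connector can be pointed at a specific service explicitly by its name instead of relying on auto-detection - a small sketch reusing the "psgservice" name from above:

@Configuration
public class PostgresCloudConfig extends AbstractCloudConfig {

	@Bean
	public DataSource dataSource() {
		// Look up the bound Cloud Foundry service explicitly by its name
		return connectionFactory().dataSource("psgservice");
	}
}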


There is however an issue with this approach - once the datasource bean is created using the spring-cloud approach, the application does not work in a local environment anymore.

The potential fix for this is to use Spring profiles - assume that there is a "cloud" Spring profile, active only in the cloud environment, in which the Spring-cloud based datasource gets returned:

@Profile("cloud")
@Configuration
public class PostgresCloudConfig extends AbstractCloudConfig {

	@Bean
	public DataSource dataSource() {
		return connectionFactory().dataSource();
	}
}

and let Spring-boot auto-configuration create the datasource in the default local environment - this way the configuration works both locally as well as in the cloud. Where does this "cloud" profile come from? It can be created using an ApplicationContextInitializer, and looks this way:

public class SampleWebApplicationInitializer implements ApplicationContextInitializer<AnnotationConfigEmbeddedWebApplicationContext> {

	private static final Log logger = LogFactory.getLog(SampleWebApplicationInitializer.class);

	@Override
	public void initialize(AnnotationConfigEmbeddedWebApplicationContext applicationContext) {
		Cloud cloud = getCloud();
		ConfigurableEnvironment appEnvironment = applicationContext.getEnvironment();

		if (cloud != null) {
			appEnvironment.addActiveProfile("cloud");
			logger.info("Cloud profile active");
		}
	}

	private Cloud getCloud() {
		try {
			CloudFactory cloudFactory = new CloudFactory();
			return cloudFactory.getCloud();
		} catch (CloudException ce) {
			return null;
		}
	}
}

This initializer makes use of Spring-cloud's scanning capabilities to activate the "cloud" profile.
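
One detail worth spelling out is how this initializer gets registered with the application in the first place. With Spring Boot, one way is to declare it while building the application - a minimal sketch, where the SampleWebApplication class name is an assumption:

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class SampleWebApplication {

	public static void main(String[] args) {
		// Register the initializer so the "cloud" profile check runs at startup
		new SpringApplicationBuilder(SampleWebApplication.class)
				.initializers(new SampleWebApplicationInitializer())
				.run(args);
	}
}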


One last thing I wanted to try was to make my local environment behave like a cloud, at least in the eyes of Spring-Cloud. This can be done by adding in the environment variables using which Spring-Cloud determines the type of cloud where the application is deployed; the following is my local startup script for the app to pretend as if it is deployed in Cloud Foundry:

read -r -d '' VCAP_APPLICATION <<'ENDOFVAR'
{"application_version":"1","application_name":"spring-boot-mvc-test","application_uris":[""],"version":"1.0","name":"spring-boot-mvc-test","instance_id":"abcd","instance_index":0,"host":"0.0.0.0","port":61008}
ENDOFVAR

export VCAP_APPLICATION=$VCAP_APPLICATION

read -r -d '' VCAP_SERVICES <<'ENDOFVAR'
{"postgres":[{"name":"psgservice","label":"postgresql","tags":["postgresql"],"plan":"Standard","credentials":{"uri":"postgres://postgres:p0stgr3s@bkunjummen-mbp.local:5432/hotelsdb"}}]}
ENDOFVAR

export VCAP_SERVICES=$VCAP_SERVICES

mvn spring-boot:run

This entire sample is available at this github location: https://github.com/bijukunjummen/spring-boot-mvc-test

Conclusion


Spring Boot along with the Spring-Cloud project now provides an excellent toolset to create Spring-powered cloud ready applications, and hopefully these notes are useful in integrating Spring Boot with Spring-Cloud and in using them for seamless local and cloud deployments.

Saturday, July 19, 2014

Tailing a file - Spring Websocket sample

This is a sample that I have wanted to try for some time - a Websocket application to tail the contents of a file.


The following is the final view of the web-application:



There are a few parts to this application:

Generating a File to tail:


I chose to use a set of 100 random quotes as the source of the file content - every few seconds the application generates a quote and writes it to a temporary file. Spring Integration is used for wiring up this flow of writing the contents to the file:

<int:channel id="toFileChannel"/>

<int:inbound-channel-adapter ref="randomQuoteGenerator" method="generateQuote" channel="toFileChannel">
	<int:poller fixed-delay="2000"/>
</int:inbound-channel-adapter>

<int:chain input-channel="toFileChannel">
	<int:header-enricher>
		<int:header name="file_name" value="quotes.txt"/>
	</int:header-enricher>
	<int-file:outbound-channel-adapter directory="#{systemProperties['java.io.tmpdir']}" mode="APPEND" />
</int:chain>

Just a quick note - Spring Integration flows can now also be written using a Java based DSL, and this flow written in Java is available here

Tailing the file and sending the content to a broker


The actual tailing of the file can be accomplished by an OS specific tail command or by using a library like Apache Commons IO. Again, in my case I decided to use Spring Integration, which provides an inbound channel adapter to tail a file purely using configuration; this flow looks like the following:
<int:channel id="toTopicChannel"/>

<int-file:tail-inbound-channel-adapter id="fileInboundChannelAdapter"
				channel="toTopicChannel"
				file="#{systemProperties['java.io.tmpdir']}/quotes.txt"
				delay="2000"
				file-delay="10000"/>

<int:outbound-channel-adapter ref="fileContentRecordingService" method="sendLinesToTopic" channel="toTopicChannel"/>
and its working Java equivalent

There is a reference to a "fileContentRecordingService" above - this is the component which will direct the lines of the file to a destination that the Websocket client subscribes to.

Websocket server configuration

Spring Websocket support makes it super simple to write a Websocket based application; in this instance the entire working configuration is the following:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketDefaultConfig extends AbstractWebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry config) {
		//config.enableStompBrokerRelay("/topic/", "/queue/");
		config.enableSimpleBroker("/topic/", "/queue/");
		config.setApplicationDestinationPrefixes("/app");
	}

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/tailfilesep").withSockJS();
	}
}

This may seem a little over the top, but what these few lines of configuration do is very powerful, and the configuration can be better understood by going through the reference here. In brief, it sets up a websocket endpoint at the '/tailfilesep' uri, enhances this endpoint with SockJS support, and uses Stomp as a sub-protocol; destinations prefixed with `/topic` and `/queue` could be handled by a real broker like RabbitMQ or ActiveMQ, but in this specific case by a simple in-memory broker.

Going back to the "fileContentRecordingService" once more - this component essentially takes each line of the file and sends it to this in-memory broker; SimpMessagingTemplate facilitates this wiring:

public class FileContentRecordingService {
	@Autowired
	private SimpMessagingTemplate simpMessagingTemplate;

	public void sendLinesToTopic(String line) {
		this.simpMessagingTemplate.convertAndSend("/topic/tailfiles", line);
	}
}


Websocket UI configuration

The UI is angularjs based; the client controller is set up this way and internally uses the javascript libraries for sockjs and stomp support:

var tailFilesApp = angular.module("tailFilesApp",[]);

tailFilesApp.controller("TailFilesCtrl", function ($scope) {
    function init() {
        $scope.buffer = new CircularBuffer(20);
    }

    $scope.initSockets = function() {
        $scope.socket={};
        $scope.socket.client = new SockJS("/tailfilesep");
        $scope.socket.stomp = Stomp.over($scope.socket.client);
        $scope.socket.stomp.connect({}, function() {
            $scope.socket.stomp.subscribe("/topic/tailfiles", $scope.notify);
        });
        $scope.socket.client.onclose = $scope.reconnect;
    };

    $scope.notify = function(message) {
        $scope.$apply(function() {
            $scope.buffer.add(angular.fromJson(message.body));
        });
    };

    $scope.reconnect = function() {
        setTimeout($scope.initSockets, 10000);
    };

    init();
    $scope.initSockets();
});

The meat of this code is the "notify" function, which is the callback acting on the messages from the server - in this instance the new lines coming into the file, which are shown in a textarea.


This wraps up the entire application to tail a file. A complete working sample without any external dependencies is available at this github location; instructions to start it up are also available at that location.

Conclusion

Spring Websockets provides a concise way to create Websocket based applications; this sample provides a good demonstration of this support. I had presented on this topic recently at my local JUG (IndyJUG) and a deck with the presentation is available here

Friday, July 4, 2014

Scala Tail Recursion confusion

I was looking at a video of Martin Odersky's keynote at Scala Days 2014, and there was a sample of tail recursion code that confused me:

@tailrec
private def sameLength[T, U](xs: List[T], ys: List[U]): Boolean = {
  if (xs.isEmpty) ys.isEmpty
  else ys.nonEmpty && sameLength(xs.tail, ys.tail)
}

At a quick glance, this did not appear to be tail recursive to me, as there is the && operation that needs to be evaluated after the recursive call.

However, thinking a little more about it, && is a short-circuit operator - the recursive call gets made only if ys.nonEmpty evaluates to true, and in that case its result is returned directly, thus maintaining the definition of a tail call.
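
A rough way to see why is to hand-write the equivalent desugared logic - the && becomes a guard followed by a call in tail position. The following Java rendering is purely illustrative (Java itself does not optimize tail calls):

import java.util.List;

public class SameLengthSketch {

    static boolean sameLength(List<?> xs, List<?> ys) {
        if (xs.isEmpty()) return ys.isEmpty();
        // 'ys.nonEmpty && sameLength(...)' short-circuits: if ys is empty the
        // recursive call never happens and the result is simply false
        if (ys.isEmpty()) return false;
        // otherwise the recursive call is the very last action - a tail call
        return sameLength(xs.subList(1, xs.size()), ys.subList(1, ys.size()));
    }
}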

The decompiled class clarifies this a little more - surprisingly, the && operator does not appear anywhere in the decompiled code!:

public <T, U> boolean org$bk$sample$SameLengthTest$$sameLength(List<T> xs, List<U> ys)
  {
    for (; ys.nonEmpty(); xs = (List)xs.tail()) ys = (List)ys.tail();
    return 
      xs.isEmpty() ? ys.isEmpty() : 
      false;
  }

If the operator were changed to one that does not have short-circuit behavior, the method would of course no longer be tail recursive at that point (and @tailrec would flag it with a compile error) - say a hypothetical method with the XOR operator:

private def notWorking[T, U](xs: List[T], ys: List[U]): Boolean = {
  if (xs.isEmpty) ys.isEmpty
  else ys.nonEmpty ^ notWorking(xs.tail, ys.tail)
}

Something fairly basic that tripped me up today!