Monday, June 29, 2015

Learning Spring-Cloud - Writing a microservice

Continuing my Spring-Cloud learning journey, I had earlier covered how to write the infrastructure components of a typical Spring-Cloud and Netflix OSS based microservices environment - in that instance two critical components: Eureka, to register and discover services, and Spring Cloud Configuration, to maintain a centralized repository of configuration for a service. Here I will show how I developed two dummy microservices: a simple "pong" service, and a "ping" service that uses the "pong" service.


Sample-Pong microservice


The endpoint handling the "ping" requests is a typical Spring MVC based endpoint:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.hateoas.Resource;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// Message and MessageAcknowledgement are the sample's own domain classes
@RestController
public class PongController {

    @Value("${reply.message}")
    private String message;

    @RequestMapping(value = "/message", method = RequestMethod.POST)
    public Resource<MessageAcknowledgement> pongMessage(@RequestBody Message input) {
        return new Resource<>(
                new MessageAcknowledgement(input.getId(), input.getPayload(), message));
    }

}

It gets a message and responds with an acknowledgement. Here the service uses the Configuration server to source the "reply.message" property. So how does the "pong" service find the Configuration server? There are potentially two ways - directly, by specifying the location of the Configuration server, or by discovering the Configuration server via Eureka. I am used to an approach where Eureka is considered the source of truth, so in that spirit I am using Eureka to find the Configuration server. Spring Cloud makes this entire flow very simple; all it requires is a "bootstrap.yml" property file with entries along these lines:

---
spring:
  application:
    name: sample-pong
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: SAMPLE-CONFIG

eureka:
  instance:
    nonSecurePort: ${server.port:8082}
  client:
    serviceUrl:
      defaultZone: http://${eureka.host:localhost}:${eureka.port:8761}/eureka/

The location of Eureka is specified through the "eureka.client.serviceUrl" property, and "spring.cloud.config.discovery.enabled" is set to "true" so that the Configuration server is discovered via the specified Eureka server.
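
For comparison, the other option mentioned above - pointing the service directly at the Configuration server instead of discovering it via Eureka - would just mean dropping the discovery block and specifying the config server's location; a minimal sketch (the uri here is illustrative) would be:

spring:
  application:
    name: sample-pong
  cloud:
    config:
      uri: http://localhost:8888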

Just a note: this means that Eureka and the Configuration server have to be completely up before trying to bring up the actual services; they are prerequisites, and the underlying assumption is that the infrastructure components are available at application boot time.

The Configuration server holds the properties for the "sample-pong" service; this can be validated by using the Config server's endpoint - http://localhost:8888/sample-pong/default (8888 is the port I had specified for the server), which should respond with content along these lines:

"name": "sample-pong",
  "profiles": [
    "default"
  ],
  "label": "master",
  "propertySources": [
    {
      "name": "classpath:/config/sample-pong.yml",
      "source": {
        "reply.message": "Pong"
      }
    }
  ]
}

As can be seen, the "reply.message" property from this central configuration server will be used by the pong service as the acknowledgement message.

Now to set up this endpoint as a service, all that is required is a Spring-boot based entry point along these lines:

@SpringBootApplication
@EnableDiscoveryClient
public class PongApplication {
    public static void main(String[] args) {
        SpringApplication.run(PongApplication.class, args);
    }
}

and that completes the code for the "pong" service.
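
A quick way to exercise the endpoint once the service is up is a POST along these lines (the port comes from the server.port default of 8082 shown earlier, and the payload is just a sample):

curl -X POST -H "Content-Type: application/json" \
     -d '{"id":"1", "payload":"Ping"}' \
     http://localhost:8082/message

which should come back with a JSON acknowledgement carrying the configured "Pong" reply message.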


Sample-Ping microservice


So now onto a consumer of the "pong" microservice, very imaginatively named the "ping" microservice. Spring-Cloud and Netflix OSS offer a lot of options to invoke endpoints on Eureka-registered services; to summarize, the options I had were:

1. Use the raw Eureka DiscoveryClient to find the instances hosting a service and make calls using Spring's RestTemplate (a sketch of this approach follows the list).

2. Use Ribbon, a client-side load balancing solution which can use Eureka to find service instances.

3. Use Feign, which provides a declarative way to invoke a service call. It internally uses Ribbon.
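
To give a flavor of the first option, a raw DiscoveryClient based call might look along these lines - purely an illustrative sketch, not part of the sample:

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.client.RestTemplate;

public class RawPongClient {

    private final DiscoveryClient discoveryClient;
    private final RestTemplate restTemplate = new RestTemplate();

    public RawPongClient(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public MessageAcknowledgement sendMessage(Message message) {
        // look up an instance of the "sample-pong" service registered with Eureka
        ServiceInstance instance = discoveryClient.getInstances("sample-pong").get(0);
        // and POST the message to its "/message" endpoint
        return restTemplate.postForObject(instance.getUri() + "/message", message, MessageAcknowledgement.class);
    }
}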

I went with Feign. All that is required is an interface which shows the contract to invoke the service:

package org.bk.consumer.feign;

import org.bk.consumer.domain.Message;
import org.bk.consumer.domain.MessageAcknowledgement;
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@FeignClient("samplepong")
public interface PongClient {

    @RequestMapping(method = RequestMethod.POST, value = "/message",
            produces = MediaType.APPLICATION_JSON_VALUE, consumes = MediaType.APPLICATION_JSON_VALUE)
    @ResponseBody
    MessageAcknowledgement sendMessage(@RequestBody Message message);
}

The @FeignClient("samplepong") annotation internally points to a Ribbon "named" client called "samplepong". This means that there has to be an entry in the property files for this named client; in my case I have these entries in my application.yml file:

samplepong:
  ribbon:
    DeploymentContextBasedVipAddresses: sample-pong
    NIWSServerListClassName: com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList
    ReadTimeout: 5000
    MaxAutoRetries: 2

The most important entry here is "samplepong.ribbon.DeploymentContextBasedVipAddresses", which points to the "pong" service's Eureka registration address, through which the service instances will be discovered by Ribbon.

The rest of the application is a routine Spring Boot application. I have placed this service call behind Hystrix, which guards against service call failures and essentially wraps around the FeignClient:

package org.bk.consumer.service;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.bk.consumer.domain.Message;
import org.bk.consumer.domain.MessageAcknowledgement;
import org.bk.consumer.feign.PongClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

@Service("hystrixPongClient")
public class HystrixWrappedPongClient implements PongClient {

    @Autowired
    @Qualifier("pongClient")
    private PongClient feignPongClient;

    @Override
    @HystrixCommand(fallbackMethod = "fallBackCall")
    public MessageAcknowledgement sendMessage(Message message) {
        return this.feignPongClient.sendMessage(message);
    }

    public MessageAcknowledgement fallBackCall(Message message) {
        MessageAcknowledgement fallback = new MessageAcknowledgement(message.getId(), message.getPayload(), "FAILED SERVICE CALL! - FALLING BACK");
        return fallback;
    }
}
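
For completeness, the entry point of the "ping" application is a regular Spring Boot class; a minimal sketch (the class in the actual sample may look slightly different) would enable service discovery, Feign clients and Hystrix:

package org.bk.consumer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.feign.EnableFeignClients;

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableCircuitBreaker
public class PingApplication {

    public static void main(String[] args) {
        SpringApplication.run(PingApplication.class, args);
    }
}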


"Boot"ing up


I have dockerized my entire set-up, so the simplest way to start up the set of applications is to first build the docker images for all of the artifacts this way:

mvn clean package docker:build -DskipTests

and bring all of them up using the following command, the assumption being that both docker and docker-compose are available locally:

docker-compose up
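
The docker-compose.yml in the repository is what wires the containers together; a heavily abbreviated, illustrative sketch (image and service names here are hypothetical, the ports follow the ones used in this post) looks something like this:

eureka:
  image: sample-eureka
  ports:
    - "8761:8761"
sampleconfig:
  image: sample-config
  ports:
    - "8888:8888"
samplepong:
  image: sample-pong
sampleping:
  image: sample-ping
  ports:
    - "8080:8080"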

Assuming everything comes up cleanly, Eureka should show all the registered services at the http://dockerhost:8761 url -


The UI of the ping application should be available at http://dockerhost:8080 url -



Additionally a Hystrix dashboard should be available to monitor the requests to the "pong" app at this url http://dockerhost:8989/hystrix/monitor?stream=http%3A%2F%2Fsampleping%3A8080%2Fhystrix.stream:



References


1. The code is available at my github location - https://github.com/bijukunjummen/spring-cloud-ping-pong-sample

2. Most of the code is heavily borrowed from the spring-cloud-samples repository - https://github.com/spring-cloud-samples

Tuesday, June 23, 2015

Rx-java subscribeOn and observeOn

If you have been confused by Rx-java Observable subscribeOn and observeOn, one of the blog articles that helped me understand these operations is this one by Graham Lea. I wanted to recreate a very small part of the article here, so consider a service which emits values every 200 milliseconds:



package obs.threads;

import obs.Util;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import rx.Observable;

public class GeneralService {
    private static final Logger logger = LoggerFactory.getLogger(GeneralService.class);
    public Observable<String> getData() {
        return Observable.<String>create(s -> {
            logger.info("Start: Executing a Service");
            for (int i = 1; i <= 3; i++) {
                Util.delay(200);
                logger.info("Emitting {}", "root " + i);
                s.onNext("root " + i);
            }
            logger.info("End: Executing a Service");
            s.onCompleted();
        });
    }
}
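
The Util.delay helper used above is not shown in the post; it is presumably just a sleep with the checked exception wrapped, along these lines:

package obs;

public class Util {

    // sleep for the given number of milliseconds, rethrowing the checked exception as unchecked
    public static void delay(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}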

Now, if I were to subscribe to this service, this way:

@Test
public void testThreadedObservable1() throws Exception {
    Observable<String> ob1 = aService.getData();

    CountDownLatch latch = new CountDownLatch(1);

    ob1.subscribe(s -> {
        Util.delay(500);
        logger.info("Got {}", s);
    }, e -> logger.error(e.getMessage(), e), () -> latch.countDown());

    latch.await();
}

All of the emissions and subscriptions will act on the main thread and something along the following lines will be printed:

20:53:29.380 [main] INFO  o.t.GeneralService - Start: Executing a Service
20:53:29.587 [main] INFO  o.t.GeneralService - Emitting root 1
20:53:30.093 [main] INFO  o.t.ThreadedObsTest - Got root 1
20:53:30.298 [main] INFO  o.t.GeneralService - Emitting root 2
20:53:30.800 [main] INFO  o.t.ThreadedObsTest - Got root 2
20:53:31.002 [main] INFO  o.t.GeneralService - Emitting root 3
20:53:31.507 [main] INFO  o.t.ThreadedObsTest - Got root 3
20:53:31.507 [main] INFO  o.t.GeneralService - End: Executing a Service

By default the emissions are not asynchronous in nature. So now, what is the behavior if subscribeOn is used:

public class ThreadedObsTest {
    private GeneralService aService = new GeneralService();

    private static final Logger logger = LoggerFactory.getLogger(ThreadedObsTest.class);
    private ExecutorService executor1 = Executors.newFixedThreadPool(5, new ThreadFactoryBuilder().setNameFormat("SubscribeOn-%d").build());

 @Test
 public void testSubscribeOn() throws Exception {
        Observable<String> ob1 = aService.getData();

        CountDownLatch latch = new CountDownLatch(1);

        ob1.subscribeOn(Schedulers.from(executor1)).subscribe(s -> {
            Util.delay(500);
            logger.info("Got {}", s);
        }, e -> logger.error(e.getMessage(), e), () -> latch.countDown());

        latch.await();
    }
}

Here I am using Guava's ThreadFactoryBuilder to give each thread in the threadpool a unique name pattern. If I were to execute this code, the output would be along these lines:

20:56:47.117 [SubscribeOn-0] INFO  o.t.GeneralService - Start: Executing a Service
20:56:47.322 [SubscribeOn-0] INFO  o.t.GeneralService - Emitting root 1
20:56:47.828 [SubscribeOn-0] INFO  o.t.ThreadedObsTest - Got root 1
20:56:48.032 [SubscribeOn-0] INFO  o.t.GeneralService - Emitting root 2
20:56:48.535 [SubscribeOn-0] INFO  o.t.ThreadedObsTest - Got root 2
20:56:48.740 [SubscribeOn-0] INFO  o.t.GeneralService - Emitting root 3
20:56:49.245 [SubscribeOn-0] INFO  o.t.ThreadedObsTest - Got root 3
20:56:49.245 [SubscribeOn-0] INFO  o.t.GeneralService - End: Executing a Service

Now, the execution has moved away from the main thread and the emissions and the subscriptions are being processed in the threads borrowed from the threadpool.

And what happens if observeOn is used:
public class ThreadedObsTest {
    private GeneralService aService = new GeneralService();

    private static final Logger logger = LoggerFactory.getLogger(ThreadedObsTest.class);
    private ExecutorService executor2 = Executors.newFixedThreadPool(5, new ThreadFactoryBuilder().setNameFormat("ObserveOn-%d").build());

 @Test
 public void testObserveOn() throws Exception {
        Observable<String> ob1 = aService.getData();

        CountDownLatch latch = new CountDownLatch(1);

        ob1.observeOn(Schedulers.from(executor2)).subscribe(s -> {
            Util.delay(500);
            logger.info("Got {}", s);
        }, e -> logger.error(e.getMessage(), e), () -> latch.countDown());

        latch.await();
    }
}

the output is along these lines:

21:03:08.655 [main] INFO  o.t.GeneralService - Start: Executing a Service
21:03:08.860 [main] INFO  o.t.GeneralService - Emitting root 1
21:03:09.067 [main] INFO  o.t.GeneralService - Emitting root 2
21:03:09.268 [main] INFO  o.t.GeneralService - Emitting root 3
21:03:09.269 [main] INFO  o.t.GeneralService - End: Executing a Service
21:03:09.366 [ObserveOn-1] INFO  o.t.ThreadedObsTest - Got root 1
21:03:09.872 [ObserveOn-1] INFO  o.t.ThreadedObsTest - Got root 2
21:03:10.376 [ObserveOn-1] INFO  o.t.ThreadedObsTest - Got root 3

The emissions are back on the main thread, but the subscriptions are now being processed on a threadpool thread.

That is the difference: when subscribeOn is used the emissions are performed on the specified Scheduler, and when observeOn is used the subscriptions are processed on the specified Scheduler!
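
For reference, using both together simply moves the emissions to one Scheduler and the subscriptions to another; a minimal sketch based on the tests above (assuming both executors are available) would be:

ob1.subscribeOn(Schedulers.from(executor1))   // emissions on the "SubscribeOn-*" threads
        .observeOn(Schedulers.from(executor2))    // subscriptions on the "ObserveOn-*" threads
        .subscribe(s -> {
            Util.delay(500);
            logger.info("Got {}", s);
        }, e -> logger.error(e.getMessage(), e), () -> latch.countDown());

latch.await();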

And the output when both are specified is equally predictable. Now, in all of these cases I had created a Scheduler from a threadpool with 5 threads, but only 1 of the threads has really been used, both for emitting values and for processing subscriptions; this is actually the normal behavior of an Observable. If you want to make more efficient use of the threadpool, one approach is to create multiple Observables. Say, for example, I have a service which returns pages of data this way:

public Observable<Integer> getPages(int totalPages) {
    return Observable.create(new Observable.OnSubscribe<Integer>() {
        @Override
        public void call(Subscriber<? super Integer> subscriber) {
            logger.info("Getting pages");
            for (int i = 1; i <= totalPages; i++) {
                subscriber.onNext(i);
            }
            subscriber.onCompleted();
        }
    });
}

and another service which acts on each page of the data:

public Observable<String> actOnAPage(int pageNum) {
    return Observable.<String>create(s -> {
        Util.delay(200);
        logger.info("Acting on page {}",  pageNum);
        s.onNext("Page " + pageNum);
        s.onCompleted();
    });
}

a way to use a Threadpool to process each page of data would be to chain it this way:

getPages(5).flatMap(page -> aService.actOnAPage(page).subscribeOn(Schedulers.from(executor1)))
        .subscribe(s -> {
            logger.info("Completed Processing page: {}", s);
        });

See how the subscribeOn is on each Observable acting on a page. With this change, the output would look like this:

21:15:45.572 [main] INFO  o.t.ThreadedObsTest - Getting pages
21:15:45.787 [SubscribeOn-1] INFO  o.t.GeneralService - Acting on page 2
21:15:45.787 [SubscribeOn-0] INFO  o.t.GeneralService - Acting on page 1
21:15:45.787 [SubscribeOn-4] INFO  o.t.GeneralService - Acting on page 5
21:15:45.787 [SubscribeOn-3] INFO  o.t.GeneralService - Acting on page 4
21:15:45.787 [SubscribeOn-2] INFO  o.t.GeneralService - Acting on page 3
21:15:45.789 [SubscribeOn-1] INFO  o.t.ThreadedObsTest - Completed Processing page: Page 2
21:15:45.790 [SubscribeOn-1] INFO  o.t.ThreadedObsTest - Completed Processing page: Page 1
21:15:45.790 [SubscribeOn-1] INFO  o.t.ThreadedObsTest - Completed Processing page: Page 3
21:15:45.790 [SubscribeOn-1] INFO  o.t.ThreadedObsTest - Completed Processing page: Page 4
21:15:45.791 [SubscribeOn-1] INFO  o.t.ThreadedObsTest - Completed Processing page: Page 5

Now the threads in the threadpool are being used uniformly.

Saturday, June 13, 2015

Learning Spring-Cloud - Infrastructure and Configuration

I got a chance to play with Spring-Cloud to create a sample set of cloud ready microservices and I am very impressed by how Spring-Cloud enables different infrastructure components and services to work together nicely.

I am used to creating microservices based on the Netflix OSS stack, where Eureka is typically considered the hub through which the microservices register themselves and discover each other. In the spirit of this model, I wanted to try out a series of services which look like this:




There are 2 microservices here:


  • A sample-pong service which responds to "ping" messages
  • A sample-ping service which uses the "pong" micro-service


And there are two infrastructure components:

  • Sample-config which provides a centralized configuration for the 2 microservices
  • Eureka which is the central hub providing a way for the services to register themselves and discover other services

So to start with, here I will introduce how I went about using spring-cloud to develop the two infrastructure components and follow it up with how the microservices can be developed to use these components.
The entire project is available at my github location.


Eureka

Spring-cloud makes it very simple to bring up an instance of Eureka; all that is required is a class along the following lines:

package org.bk.eureka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }
}

Multiple instances of Eureka can be started up and configured to work together in a resilient way; here, though, I just want a demo standalone Eureka instance, and this can be done using a configuration which looks like this, essentially starting up Eureka on port 8761 in standalone mode by not trying to look for peers:

---
# application.yml
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false


Configuration Server

Spring-Cloud provides a centralized configuration server that microservices can use for loading up their properties. Typically microservices may want to go one of two ways:


  1. Use Eureka as a hub and find the configuration services
  2. Use Configuration services and find Eureka

I personally prefer the Eureka-first approach; in this sample the Configuration server registers itself with Eureka, and when microservices come up they first check with Eureka, find the Configuration service and use it to load up their properties.

The configuration server is simple to write using Spring-cloud too; the following is all the code that is required:

package org.bk.configserver;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableConfigServer
@EnableEurekaClient
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}

and the configuration that registers this service with Eureka:

---
# bootstrap.yml
spring:
  application:
    name: sample-config
  profiles:
    active: native

eureka:
  instance:
    nonSecurePort: ${server.port:8888}
  client:
    serviceUrl:
      defaultZone: http://${eureka.host:localhost}:${eureka.port:8761}/eureka/


---
# application.yml
spring:
  cloud:
    config:
      server:
        native:
          searchLocations: classpath:/config

server:
  port: 8888

The configuration server starts up at port 8888 and serves configuration from the classpath. In a real application, the configuration can be set to load from a central git repository, which provides a clean way to version properties and the ability to manage them centrally. In this specific case, since the server provides properties for two microservices, there are two sets of files in the classpath, each supplying the appropriate properties to the calling application:

---
#sample-pong.yml
reply:
  message: Pong

---
# sample-ping.yml
send:
  message: Ping
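
As an aside, switching the Configuration server from the classpath based "native" setup to a central git repository is just a matter of configuration; a hedged sketch (the repository url here is hypothetical) would replace the native section with something like:

---
# application.yml - git backed configuration (illustrative)
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/your-org/config-repo

server:
  port: 8888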


Starting up Eureka and Configuration Server

Since both these applications are Spring-boot based, they can each be started up by running the following command:

mvn spring-boot:run

Once Eureka and the Configuration server come up cleanly, Eureka provides a nice interface with details of the services registered with it; in this case the Configuration server shows up with a name of "SAMPLE-CONFIG":


The config server provides properties to the calling applications through endpoints with the pattern:
/{application}/{profile}[/{label}]

So to retrieve the properties for the "sample-pong" application, the following url is used internally by the application:

http://localhost:8888/sample-pong/default

and for the "sample-ping" application the properties can be derived from http://localhost:8888/sample-ping/default


This concludes the details around bringing up the infrastructure components of a cloud-ready system. I will follow it up with how the microservices can be developed to make use of these infrastructure components. The code behind these samples is available at my github repository.

Saturday, May 30, 2015

Rx-netty and Karyon2 based cloud ready microservice

Netflix Karyon provides a clean framework for creating cloud-ready microservices. If your organization uses the Netflix OSS stack - Eureka for service registration and discovery, Archaius for property management - then very likely you use Karyon to create your microservices.

Karyon has been undergoing quite a lot of changes recently, and my objective here is to document a good sample using the newer version. The old Karyon (call it Karyon1) was based on the JAX-RS 1.0 specs with Jersey as the implementation; the newer version (Karyon2) still supports Jersey but also encourages the use of RxNetty, a customized version of Netty with support for Rx-java.

With that said, let me jump into a sample. My objective with this sample is to create a "pong" microservice which takes a "POST"ed "message" and returns an "Acknowledgement".

The following is a sample request:

{
"id": "id",
"payload":"Ping"
}

And an expected response:

{"id":"id","received":"Ping","payload":"Pong"}


The first step is to create a RequestHandler which, as the name suggests, is an RX-Netty component dealing with routing the incoming request:

package org.bk.samplepong.app;

import com.fasterxml.jackson.databind.ObjectMapper;
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.reactivex.netty.protocol.http.server.HttpServerRequest;
import io.reactivex.netty.protocol.http.server.HttpServerResponse;
import io.reactivex.netty.protocol.http.server.RequestHandler;
import netflix.karyon.transport.http.health.HealthCheckEndpoint;
import org.bk.samplepong.domain.Message;
import org.bk.samplepong.domain.MessageAcknowledgement;
import rx.Observable;

import java.io.IOException;
import java.nio.charset.Charset;


public class RxNettyHandler implements RequestHandler<ByteBuf, ByteBuf> {

    private final String healthCheckUri;
    private final HealthCheckEndpoint healthCheckEndpoint;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public RxNettyHandler(String healthCheckUri, HealthCheckEndpoint healthCheckEndpoint) {
        this.healthCheckUri = healthCheckUri;
        this.healthCheckEndpoint = healthCheckEndpoint;
    }

    @Override
    public Observable<Void> handle(HttpServerRequest<ByteBuf> request, HttpServerResponse<ByteBuf> response) {
        if (request.getUri().startsWith(healthCheckUri)) {
            return healthCheckEndpoint.handle(request, response);
        } else if (request.getUri().startsWith("/message") && request.getHttpMethod().equals(HttpMethod.POST)) {
            return request.getContent().map(byteBuf -> byteBuf.toString(Charset.forName("UTF-8")))
                    .map(s -> {
                        try {
                            Message m = objectMapper.readValue(s, Message.class);
                            return m;
                        } catch (IOException e) {
                            throw new RuntimeException(e);
                        }
                    })
                    .map(m -> new MessageAcknowledgement(m.getId(), m.getPayload(), "Pong"))
                    .flatMap(ack -> {
                                try {
                                    return response.writeStringAndFlush(objectMapper.writeValueAsString(ack));
                                } catch (Exception e) {
                                    response.setStatus(HttpResponseStatus.BAD_REQUEST);
                                    return response.close();
                                }
                            }
                    );
        } else {
            response.setStatus(HttpResponseStatus.NOT_FOUND);
            return response.close();
        }
    }
}

This flow is completely asynchronous and internally managed by the Rx-java libraries; Java 8 lambda expressions also help in keeping the code concise. The one issue that you would see here is that the routing logic (which uri maps to which controller) is mixed up with the actual controller logic, and I believe this is being addressed.

Given this RequestHandler, a server can be started up in a standalone java program using raw RX-Netty this way - this is essentially it, an endpoint will be brought up at port 8080 to handle the requests:

public final class RxNettyExample {

    public static void main(String... args) throws Exception {
        // the handler needs the healthcheck uri and the endpoint that serves it
        RxNettyHandler handler = new RxNettyHandler("/healthcheck",
                new HealthCheckEndpoint(new HealthCheck()));

        HttpServer<ByteBuf, ByteBuf> server = RxNetty.createHttpServer(8080, handler);

        server.startAndWait();
    }
}


This is, however, the native Rx-netty way; for a cloud-ready microservice a few more things have to happen - the service should register with Eureka, respond to the healthchecks from Eureka, and be able to load up properties using Archaius.

So with Karyon2, the startup in a main program looks a little different:

package org.bk.samplepong.app;

import netflix.adminresources.resources.KaryonWebAdminModule;
import netflix.karyon.Karyon;
import netflix.karyon.KaryonBootstrapModule;
import netflix.karyon.ShutdownModule;
import netflix.karyon.archaius.ArchaiusBootstrapModule;
import netflix.karyon.eureka.KaryonEurekaModule;
import netflix.karyon.servo.KaryonServoModule;
import netflix.karyon.transport.http.health.HealthCheckEndpoint;
import org.bk.samplepong.resource.HealthCheck;

public class SamplePongApp {

    public static void main(String[] args) {
        HealthCheck healthCheckHandler = new HealthCheck();
        Karyon.forRequestHandler(8888,
                new RxNettyHandler("/healthcheck",
                        new HealthCheckEndpoint(healthCheckHandler)),
                new KaryonBootstrapModule(healthCheckHandler),
                new ArchaiusBootstrapModule("sample-pong"),
                KaryonEurekaModule.asBootstrapModule(),
                Karyon.toBootstrapModule(KaryonWebAdminModule.class),
                ShutdownModule.asBootstrapModule(),
                KaryonServoModule.asBootstrapModule()
        ).startAndWaitTillShutdown();
    }
}

Now it is essentially cloud ready: on startup this version of the program registers cleanly with Eureka and exposes a healthcheck endpoint. It additionally exposes a neat set of admin endpoints at port 8077.
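
The HealthCheck handler passed into KaryonBootstrapModule above is not listed in this post; a minimal sketch, assuming Karyon's HealthCheckHandler contract of returning an HTTP status code, would be along these lines:

package org.bk.samplepong.resource;

import netflix.karyon.health.HealthCheckHandler;

public class HealthCheck implements HealthCheckHandler {

    @Override
    public int getStatus() {
        // always report healthy; a real check would verify the service's dependencies
        return 200;
    }
}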


Conclusion

I hope this provides a good intro on using Karyon2 to develop Netflix OSS based microservices. The entire sample is available at my github repo here: https://github.com/bijukunjummen/sample-ping-pong-netflixoss/tree/master/sample-pong. As a follow-up I will show how the same service can be developed using spring-cloud, which is the Spring way to create microservices.

Thursday, May 21, 2015

Akka samples with scala and Spring

I was looking around recently for Akka samples with Spring and found a starter project which appeared to fit the bill well. The project however utilizes Spring-Scala, an excellent project which is unfortunately no longer maintained, so I wanted to update the sample to use the core Spring java libraries instead. Here is an attempt at a fork of this starter project with core Spring instead of Spring-Scala. The code is available here.

The project utilizes Akka extensions to hook in Spring based dependency injection into Akka.

Here is what the extension looks like:

package sample

import akka.actor.{ActorSystem, Props, Extension}
import org.springframework.context.ApplicationContext
/**
 * The Extension implementation.
 */
class SpringExtension extends Extension {
  var applicationContext: ApplicationContext = _

  /**
   * Used to initialize the Spring application context for the extension.
   * @param applicationContext
   */
  def initialize(applicationContext: ApplicationContext) = {
    this.applicationContext = applicationContext
    this
  }

  /**
   * Create a Props for the specified actorBeanName using the
   * SpringActorProducer class.
   *
   * @param actorBeanName  The name of the actor bean to create Props for
   * @return a Props that will create the named actor bean using Spring
   */
  def props(actorBeanName: String): Props =
    Props(classOf[SpringActorProducer], applicationContext, actorBeanName)

}

object SpringExtension {
  def apply(system : ActorSystem )(implicit ctx: ApplicationContext) :  SpringExtension =  SpringExt(system).initialize(ctx)
}

So the extension wraps around a Spring application context. The extension provides a props method which returns an Akka Props configuration object that uses the application context and the name with which the actor bean is registered with Spring to return an instance of the actor. The following is the SpringActorProducer:

package sample

import akka.actor.{Actor, IndirectActorProducer}
import org.springframework.context.ApplicationContext


class SpringActorProducer(ctx: ApplicationContext, actorBeanName: String) extends IndirectActorProducer {

  override def produce: Actor = ctx.getBean(actorBeanName, classOf[Actor])

  override def actorClass: Class[_ <: Actor] =
    ctx.getType(actorBeanName).asInstanceOf[Class[Actor]]

}

Given this base code, how does Spring find the actors? I have used scanning annotations to annotate the actors this way:

package sample.actor

import akka.actor.Actor
import sample.service.CountingService
import sample.SpringExtension._
import org.springframework.stereotype.Component
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.Scope
import akka.actor.ActorRef
import sample.SpringExtension
import org.springframework.context.ApplicationContext

@Component("countingCoordinatingActor")
@Scope("prototype")
class CountingCoordinating @Autowired() (implicit ctx: ApplicationContext) extends Actor {

  import sample.messages._

  var counter: Option[ActorRef] = None

  
  def receive = {
    case COUNT => countingActor() ! COUNT
    case g:GET => countingActor() ! g
  }
  
  private def countingActor(): ActorRef = {
     if (counter.isEmpty) {
        val countingActorProp = SpringExtension(context.system).props("countingActor")
        counter = Some(context.actorOf(countingActorProp, "counter"))
     }  
     
     counter.get
  }
  
}


@Component("countingActor")
@Scope("prototype")
class CountingActor @Autowired()(countingService: CountingService) extends Actor {

  import sample.messages._

  private var count = 0

  def receive = {
    case COUNT => count = countingService.increment(count)
    case GET(requester: ActorRef) => requester ! RESULT(count)
  }
  
}

The CountingService is a simple service that gets injected in by Spring. The following is the main Spring Application configuration where all the wiring takes place:

import akka.actor.ActorSystem
import org.springframework.context.ApplicationContext
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Bean
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.ComponentScan

@Configuration
@ComponentScan(Array("sample.service", "sample.actor"))
class AppConfiguration {

  @Autowired
  implicit var ctx: ApplicationContext = _;
  
  /**
   * Actor system singleton for this application.
   */
  @Bean
  def actorSystem() = {
    val system = ActorSystem("AkkaScalaSpring")
    // initialize the application context in the Akka Spring Extension
    SpringExt(system)
    system    
  }
}

To make use of this entire set-up in a sample program:

import akka.actor.{ActorRef, ActorSystem}
import sample.SpringExtension._
import scala.concurrent.duration._
import scala.concurrent._
import scala.util._
import sample.messages._
import org.springframework.context.annotation.AnnotationConfigApplicationContext
import akka.actor.Inbox


object Main extends App {
  // create a spring context
  implicit val ctx = new AnnotationConfigApplicationContext(classOf[AppConfiguration])

  import Config._

  // get hold of the actor system
  val system = ctx.getBean(classOf[ActorSystem])

  val inbox = Inbox.create(system)
  
  val prop = SpringExtension(system).props("countingCoordinatingActor")

  // use the Spring Extension to create props for a named actor bean
  val countingCoordinator = system.actorOf(prop, "counter")

  // tell it to count three times
  inbox.send(countingCoordinator, COUNT)
  inbox.send(countingCoordinator, COUNT)
  inbox.send(countingCoordinator, COUNT)
  
  inbox.send(countingCoordinator, GET(inbox.getRef()))
  
  val RESULT(count) = inbox.receive(5.seconds)

  println(s"Got $count")
  system.shutdown
  system.awaitTermination
}


Monday, May 11, 2015

Docker on Mac OSX with docker-machine and VMWare fusion


If you use Cisco AnyConnect VPN on your Mac OSX machine, you may have found that docker with boot2docker does not work at times. The basic issue is that the Cisco AnyConnect VPN rewrites the routing rules which map the boot2docker VirtualBox network interfaces.

The fix that has worked better for me in the last few days has been to not use boot2docker, but instead to use docker-machine to create a VMWare fusion based docker VM.

So to try out this approach, first ensure that you have VMWare Fusion installed, then install docker and docker-machine. The docker and docker-machine command lines can be installed using homebrew:

brew install docker
brew install docker-machine

Once VMWare Fusion, docker and docker-machine are in place, use docker-machine to create a docker host (named "fusion" here) backed by VMWare Fusion this way:

docker-machine create -d vmwarefusion fusion

Once the Host is properly created, verify that it is in a running state using docker-machine:

docker-machine ls

That is essentially it; to use this shiny new docker host, ensure that the appropriate environment variables are set in the shell:

eval "$(docker-machine env fusion)"

and your docker commands should work with or without VPN connectivity:

docker ps -a

Tuesday, May 5, 2015

Netflix Archaius for property management - Basics

Netflix Archaius provides a neat set of features to load dynamic properties into an application.

This blog post is just a documentation of the extent of Archaius that I have understood; there is much more to it than I have documented here, but this should provide a good start.

Default Behavior

Consider a simple properties file:
stringprop=propvalue
listprop=value1, value2, value3
mapprop=key1=value1, key2=value2
longprop=100

If these entries are placed in a config.properties file in the classpath, then the following tests demonstrate how each of these properties can be resolved by Archaius in code:

@Test
public void testBasicStringProps() {
    DynamicStringProperty sampleProp = DynamicPropertyFactory.getInstance().getStringProperty("stringprop", "");
    assertThat(sampleProp.get(), equalTo("propvalue"));
}

@Test
public void testBasicListProps() {
    DynamicStringListProperty listProperty = new DynamicStringListProperty("listprop", Collections.emptyList());
    assertThat(listProperty.get(), contains("value1", "value2", "value3"));
}

@Test
public void testBasicMapProps() {
    DynamicStringMapProperty mapProperty = new DynamicStringMapProperty("mapprop", Collections.emptyMap());
    assertThat(mapProperty.getMap(), allOf(hasEntry("key1", "value1"), hasEntry("key2", "value2")));
}

@Test
public void testBasicLongProperty() {
    DynamicLongProperty longProp = DynamicPropertyFactory.getInstance().getLongProperty("longprop", 1000);
    assertThat(longProp.get(), equalTo(100L));
}

Loading Properties from a non-default file in classpath

So now, how do we handle a case where the content is to be loaded from a file with a different name, say newconfig.properties, but still available in the classpath? The following is one way to do that:

@Before
public void setUp() throws Exception{
    ConfigurationManager.loadCascadedPropertiesFromResources("newconfig");
}

With this change the previous test will just work.

Another option is to provide a system property to indicate the name of the properties file to load from the classpath:

System.setProperty("archaius.configurationSource.defaultFileName", "newconfig.properties");

Overriding for environments


Now, how do we override the properties for different application environments? Archaius provides a neat feature where a base property file can be loaded up but then overridden based on the context. More details are here. To demonstrate this, consider two files, one containing the defaults and one containing overrides for a "test" environment:

sample.properties
sampleprop=propvalue
@next=sample-${@environment}.properties

sample-test.properties
sampleprop=propvalue-test

See the notation at the end of the default file, @next=sample-${@environment}.properties; it is a way to indicate to Archaius that more properties need to be loaded up based on the resolved @environment parameter. This parameter can be injected in a couple of ways, and the following test demonstrates this:

@Before
public void setUp() throws Exception{
    ConfigurationManager.getConfigInstance().setProperty("@environment", "test");
    ConfigurationManager.loadCascadedPropertiesFromResources("sample");
}

@Test
public void testBasicStringPropsInTestEnvironment() throws Exception {
    DynamicStringProperty sampleProp = DynamicPropertyFactory.getInstance().getStringProperty("sampleprop", "");
    assertThat(sampleProp.get(), equalTo("propvalue-test"));
}

The base property file itself now has to be loaded in through a call to ConfigurationManager.loadCascadedPropertiesFromResources.

Conclusion

These are essentially the basics of Netflix Archaius; there is much more to it, of course, which can be gleaned from the wiki on the Archaius github site. If you are interested in exploring the samples shown here a little more, they are available in this github project.