Tuesday, February 14, 2017

Bootstrapping an OAuth2 Authorization server using UAA

A quick way to get a robust OAuth2 server running on your local machine is to use the excellent Cloud Foundry UAA project. UAA is used as the underlying OAuth2 authorization server in Cloud Foundry deployments and can scale massively, but is still small enough that it can be booted up on modest hardware.

I will cover using the UAA in two posts. In this post, I will go over how to get a local UAA server running and populate it with some of the actors involved in an OAuth2 authorization_code flow - clients and users. In a follow-up post I will show how to use this authorization server with a sample client application and in securing a resource.

Starting up the UAA

The repository for the UAA project is at https://github.com/cloudfoundry/uaa


Downloading the project is simple - just clone this repo:
git clone https://github.com/cloudfoundry/uaa

If you have a local JDK available, start up UAA using:
./gradlew run

This version of UAA uses an in-memory database, so the test data generated will be lost on restart of the application.


Populate some data

An awesome way to interact with UAA is through its companion CLI application, uaac (available as the cf-uaac gem). Assuming that you have the uaac CLI installed and UAA started up on its default port of 8080, let us start by pointing uaac to the UAA application:

uaac target http://localhost:8080/uaa

and log into it using one of the canned client credentials (admin/adminsecret):

uaac token client get admin -s adminsecret

Now that a client has logged in, the token can be explored using:
uaac context

This would display the details of the token issued by UAA, along these lines:

[3]*[http://localhost:8080/uaa]

  [2]*[admin]
      client_id: admin
      access_token: eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJkOTliMjg1MC1iZDQ1LTRlOTctODIyZS03NGE2MmUwN2Y0YzUiLCJzdWIiOiJhZG1pbiIsImF1dGhvcml0aWVzIjpbImNsaWVudHMucmVhZCIsImNsaWVudHMuc2VjcmV0IiwiY2xpZW50cy53cml0ZSIsInVhYS5hZG1pbiIsImNsaWVudHMuYWRtaW4iLCJzY2ltLndyaXRlIiwic2NpbS5yZWFkIl0sInNjb3BlIjpbImNsaWVudHMucmVhZCIsImNsaWVudHMuc2VjcmV0IiwiY2xpZW50cy53cml0ZSIsInVhYS5hZG1pbiIsImNsaWVudHMuYWRtaW4iLCJzY2ltLndyaXRlIiwic2NpbS5yZWFkIl0sImNsaWVudF9pZCI6ImFkbWluIiwiY2lkIjoiYWRtaW4iLCJhenAiOiJhZG1pbiIsImdyYW50X3R5cGUiOiJjbGllbnRfY3JlZGVudGlhbHMiLCJyZXZfc2lnIjoiZTc4YjAyMTMiLCJpYXQiOjE0ODcwMzk3NzYsImV4cCI6MTQ4NzA4Mjk3NiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3VhYS9vYXV0aC90b2tlbiIsInppZCI6InVhYSIsImF1ZCI6WyJhZG1pbiIsImNsaWVudHMiLCJ1YWEiLCJzY2ltIl19.B-RmeIvYttxJOMr_CX1Jsinsr6G_e8dVU-Fv-3Qq1ow
      token_type: bearer
      expires_in: 43199
      scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
      jti: d99b2850-bd45-4e97-822e-74a62e07f4c5

To see a more readable and decoded form of the token, just run:
uaac token decode 
which should display a decoded form of the token:
jti: d99b2850-bd45-4e97-822e-74a62e07f4c5
  sub: admin
  authorities: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
  scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
  client_id: admin
  cid: admin
  azp: admin
  grant_type: client_credentials
  rev_sig: e78b0213
  iat: 1487039776
  exp: 1487082976
  iss: http://localhost:8080/uaa/oauth/token
  zid: uaa
  aud: admin clients uaa scim


Now, to create a brand new client (called client1), which I will be using in a follow-up post:

uaac client add client1  \
  --name client1 --scope resource.read,resource.write \
  --autoapprove true  \
  -s client1 \
  --authorized_grant_types authorization_code,refresh_token,client_credentials \
  --authorities uaa.resource

This client is going to request the resource.read and resource.write scopes from users and will participate in authorization_code grant-type OAuth2 flows.


Next, create a resource owner - a user of the system:

uaac user add user1 -p user1 --emails user1@user1.com

and assign this user the resource.read scope:

uaac group add resource.read
uaac member add resource.read user1


Exercise a test flow

Now that we have a client and a resource owner, let us exercise a quick authorization_code flow. uaac provides a handy command-line option that sets up the necessary redirect hooks to capture the authorization code and exchange it for an access token.

uaac token authcode get -c client1 -s client1 --no-cf

Invoking the above command should open up a browser window and prompt for user credentials:



Logging in with the user1/user1 user that was created previously should result in a message on the command line that the token has been successfully fetched; this can be explored once more using the following command:

uaac context

with the output showing the details of the logged-in user:
jti: c8ddfdfc-9317-4f16-b3a9-808efa76684b
  nonce: 43c8d9f7d6264fb347ede40c1b7b44ae
  sub: 7fdd9a7e-5b92-42e7-ae75-839e21b932e1
  scope: resource.read
  client_id: client1
  cid: client1
  azp: client1
  grant_type: authorization_code
  user_id: 7fdd9a7e-5b92-42e7-ae75-839e21b932e1
  origin: uaa
  user_name: user1
  email: user1@user1.com
  auth_time: 1487040497
  rev_sig: c107f5c0
  iat: 1487040497
  exp: 1487083697
  iss: http://localhost:8080/uaa/oauth/token
  zid: uaa
  aud: resource client1

This concludes the whirlwind tour of setting up a local UAA and adding a couple of the roles involved in an OAuth2 flow - a client and a user. I have not covered the OAuth2 flows themselves; the Digital Ocean intro to OAuth2 is a very good primer on the flows.

I will follow this post with one on how this infrastructure can be used for securing a sample resource, and demonstrate a flow using Spring Security and Spring Boot.

Saturday, January 28, 2017

Spring Data support for Cassandra 3

One of the items that caught my eye from the announcement of the new Spring Data release train named Ingalls was that Spring Data Cassandra finally supports Cassandra 3+. So I revisited one of my old samples and tried it with a newer version of Cassandra.


Installing Cassandra


The first step is to install a local version of Cassandra, and I continue to find the ccm tool outstanding for bringing up and tearing down a small cluster. Here is the command that I run to bring up a 3-node Apache Cassandra 3.9 based cluster:

ccm create test -v 3.9 -n 3 -s --vnodes


Create Schemas



Connect to a node in the cluster:
ccm node1 cqlsh

and create a keyspace to hold the sample data:

CREATE KEYSPACE IF NOT EXISTS sample WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};

Next, I need to create the tables to hold the data. A general Cassandra recommendation is to model the tables based on query patterns - given this let me first define a table to hold the basic "hotel" information:

CREATE TABLE IF NOT EXISTS  sample.hotels (
    id UUID,
    name varchar,
    address varchar,
    state varchar,
    zip varchar,
    primary key((id), name)
);


Assuming I have to support two query patterns - a retrieval of hotels based on, say, the first letter, and a retrieval of hotels by state - I have a "hotels_by_letter" denormalized table to support retrieval by "first letter":

CREATE TABLE IF NOT EXISTS  sample.hotels_by_letter (
    first_letter varchar,
    hotel_name varchar,
    hotel_id UUID,
    address varchar,
    state varchar,
    zip varchar,
    primary key((first_letter), hotel_name, hotel_id)
);


And just for variety, a "hotels_by_state" materialized view to support retrieval by the state that the hotels are in:

CREATE MATERIALIZED VIEW sample.hotels_by_state AS
    SELECT id, name, address, state, zip FROM sample.hotels
        WHERE state IS NOT NULL AND id IS NOT NULL AND name IS NOT NULL
    PRIMARY KEY ((state), name, id)
    WITH CLUSTERING ORDER BY (name DESC);


Coding Repositories


On the Java side, the domain type that I am persisting and querying is a simple "Hotel" entity, which looks like this:

@Table("hotels")
public class Hotel implements Serializable {
    @PrimaryKey
    private UUID id;
    private String name;
    private String address;
    private String state;
    private String zip;
    ...
}

Now, to be able to perform basic CRUD operations on this entity, all that is required is a repository interface, as shown in the following code:
import cass.domain.Hotel;
import org.springframework.data.repository.CrudRepository;

import java.util.UUID;

public interface HotelRepository extends CrudRepository<Hotel, UUID>, HotelRepositoryCustom {}

This repository additionally inherits from a HotelRepositoryCustom interface, which provides the custom finders - in this instance, retrieval of hotels by state (shown further below).
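
The custom interface itself is not shown above; a minimal sketch of what it could look like, based purely on the findByState implementation shown further down (the package is an assumption), is the following:

import cass.domain.Hotel;

import java.util.List;

public interface HotelRepositoryCustom {
    // custom finder backed by the "hotels_by_state" materialized view,
    // implemented by HotelRepositoryImpl further below
    List<Hotel> findByState(String state);
}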

Now to persist a Hotel entity all I have to do is to call the repository method:

hotelRepository.save(hotel);

The data in the materialized view is automatically synchronized and maintained by Cassandra; however, the data in the "hotels_by_letter" table has to be managed through code, so I have another repository defined to maintain data in this table:

public interface HotelByLetterRepository 
        extends CrudRepository<HotelByLetter, HotelByLetterKey>, HotelByLetterRepositoryCustom {}
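
The HotelByLetter entity and its composite key are not shown above; a rough sketch of how they might be mapped - assuming the Spring Data Cassandra mapping annotations (@PrimaryKeyClass, @PrimaryKeyColumn) and field names mirroring the "hotels_by_letter" table - is along these lines:

@Table("hotels_by_letter")
public class HotelByLetter implements Serializable {
    @PrimaryKey
    private HotelByLetterKey hotelByLetterKey;
    private String address;
    private String state;
    private String zip;
    ...
}

@PrimaryKeyClass
public class HotelByLetterKey implements Serializable {
    // "first_letter" is the partition key, hotel_name and hotel_id are clustering columns,
    // matching the primary key((first_letter), hotel_name, hotel_id) definition of the table
    @PrimaryKeyColumn(name = "first_letter", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private String firstLetter;

    @PrimaryKeyColumn(name = "hotel_name", ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    private String hotelName;

    @PrimaryKeyColumn(name = "hotel_id", ordinal = 2, type = PrimaryKeyType.CLUSTERED)
    private UUID hotelId;
    ...
}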


The custom interface and its implementation are there to facilitate searching this table for queries based on the first letter of the hotel name, and are implemented through the custom repository implementation feature of Spring Data Cassandra.

import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.stereotype.Repository;

import java.util.List;

@Repository
public class HotelRepositoryImpl implements HotelRepositoryCustom {

    private final CassandraTemplate cassandraTemplate;

    @Autowired
    public HotelRepositoryImpl(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    @Override
    public List<Hotel> findByState(String state) {
        Select select = QueryBuilder.select().from("hotels_by_state");
        select.where(QueryBuilder.eq("state", state));
        return this.cassandraTemplate.select(select, Hotel.class);
    }
}

@Repository
public class HotelByLetterRepositoryImpl implements HotelByLetterRepositoryCustom {
    private final CassandraTemplate cassandraTemplate;

    public HotelByLetterRepositoryImpl(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    @Override
    public List<HotelByLetter> findByFirstLetter(String letter) {
        Select select = QueryBuilder.select().from("hotels_by_letter");
        select.where(QueryBuilder.eq("first_letter", letter));
        return this.cassandraTemplate.select(select, HotelByLetter.class);
    }

}


Given these repository classes and the custom repositories that provide the query support, the rest of the code just wires everything together, which Spring Boot's Cassandra auto-configuration facilitates.
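
As a quick sketch of that wiring - the class name here is my own, and the connection details are assumed to be the defaults of the local ccm cluster - a Spring Boot application class plus a few "spring.data.cassandra.*" properties is about all that is needed:

// in application.properties:
// spring.data.cassandra.keyspace-name=sample
// spring.data.cassandra.contact-points=localhost
// spring.data.cassandra.port=9042

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SampleCassandraApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleCassandraApplication.class, args);
    }
}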

That is essentially all there is to it - Spring Data Cassandra makes it ridiculously simple to interact with Cassandra 3+.

A complete working project is, I believe, a far better way to get familiar with this excellent library, and I have such a sample available in my github repo here - https://github.com/bijukunjummen/sample-boot-with-cassandra





Sunday, January 15, 2017

Gradle Plugins DSL and Spring-Boot Plugin

The Gradle plugins DSL is a new Gradle feature which provides a very succinct way of adding a plugin to a Gradle based project. A good way to show the utility of this new mechanism is how it simplifies a sample Spring Boot based Gradle build file.

If I were to generate a sample Gradle based Spring Boot project from the excellent http://start.spring.io site, a snippet of the Gradle file which adds in the Spring Boot Gradle plugin looks like this:

buildscript {
 ext {
  springBootVersion = '1.4.3.RELEASE'
 }
 repositories {
  mavenCentral()
 }
 dependencies {
  classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
 }
}

apply plugin: 'org.springframework.boot'

The new Gradle Plugins DSL simplifies this boilerplate drastically. An equivalent declaration using the new Plugins DSL is the following:

plugins {
  id "org.springframework.boot" version "1.4.3.RELEASE"
}

This IMHO reads far better, though it does require some level of mental parsing. The best way to understand this new syntax may be to know that it works in concert with the Gradle Plugin Portal, a centralized repository of plugins, to resolve plugin related dependencies. The page for the Spring Boot plugin is here - https://plugins.gradle.org/plugin/org.springframework.boot.

Wednesday, January 11, 2017

Deploying akka-http app to Cloud Foundry - Part 2

In a preceding post I had gone over the steps to deploy a simple akka-http app to Cloud Foundry. The gist of it was that as long as there is a way to create a runnable fat (uber) jar, the deployment is very straightforward - Cloud Foundry's Java buildpack can take the bits and wire up everything needed to get it up and running in the Cloud Foundry environment.

Here I wanted to go over a slightly more involved scenario - one where the app has an external database dependency, say to a MySQL database.

In a local environment the details of the database would have been resolved using a configuration typically specified like this:

sampledb = {
  url = "jdbc:mysql://localhost:3306/mydb?useSSL=false"
  user = "myuser"
  password = "mypass"
}

If the MySQL database were outside of the Cloud Foundry environment, this approach of specifying the database configuration would continue to work nicely. However, if the service resides in a Cloud Foundry marketplace, then the details of the service are created dynamically at bind time with the application.

Just to make this a little more concrete, in my local PCF Dev, I have a marketplace with a "p-mysql" service available.



And if I were to create a "service instance" out of this:


and bind this instance to an app:


Essentially what happens at this point is that the application has an environment variable called VCAP_SERVICES available to it, and this has to be parsed to get the database credentials. VCAP_SERVICES in the current scenario looks something like this:

{
  "p-mysql": [
   {
    "credentials": {
     "hostname": "mysql-broker.local.pcfdev.io",
     "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/myinstance?user=user\u0026password=pwd",
     "name": "myinstance",
     "password": "pwd",
     "port": 3306,
     "uri": "mysql://user:pwd@mysql-broker.local.pcfdev.io:3306/myinstance?reconnect=true",
     "username": "user"
    },
    "label": "p-mysql",
    "name": "mydb",
    "plan": "512mb",
    "provider": null,
    "syslog_drain_url": null,
    "tags": [
     "mysql"
    ]
   }
  ]
 }

This can be parsed very easily using Typesafe Config; a sample (admittedly hacky) piece of code looks like this:

  def getConfigFor(serviceType: String, name: String): Config = {
    val vcapServices = env("VCAP_SERVICES")
    val rootConfig = ConfigFactory.parseString(vcapServices)
    val configs = rootConfig.getConfigList(serviceType).asScala
      .filter(_.getString("name") == name)
      .map(instance => instance.getConfig("credentials"))

    if (configs.length > 0) configs.head
    else ConfigFactory.empty()
  }

and called the following way:
val dbConfig = cfServicesHelper.getConfigFor("p-mysql", "mydb")

This would dynamically resolve the credentials for MySQL and allow the application to connect to the database.

An easier way to follow all this may be to look at a sample code available in my github repo here - https://github.com/bijukunjummen/sample-akka-http-rest.

Tuesday, January 3, 2017

Deploying akka-http app to Cloud Foundry - Part 1

It is easy to deploy an akka-http application to Cloud Foundry. I experimented with a few variations recently and will cover ways to deploy an akka-http based REST app in two parts - first a simple app with no external resource dependencies, and second a little more complex CRUD app that maintains state in a MySQL database.


Pre Requisites


A quick way to get a running Cloud Foundry instance is using PCF Dev, a small footprint distribution of Cloud Foundry that can be started up on a developer laptop.

The sample app that I am using is a stock demo app available via the Lightbend Activator. If you have the activator binaries available locally, you can create a quick project using the following command:


Generating the sample App and running it locally

activator new sample-akka-http akka-http-microservice


The application can be brought up by running sbt and using the "re-start" task:

$ sbt
> re-start

By default the app comes up on port 9000 and can be tested with a sample curl call:

$ curl http://localhost:9000/ip/8.8.8.8
{
  "city": "Mountain View",
  "query": "8.8.8.8",
  "country": "United States",
  "lon": -122.0881,
  "lat": 37.3845
}


Deploying to Cloud Foundry

There is one change that needs to be made to the application to get it to work in Cloud Foundry - adjusting the port that the application listens on. When the app is deployed to Cloud Foundry, an environment variable called "PORT" specifies the port that the application is expected to listen on. The change for the sample app is the following:

val port = if (sys.env.contains("PORT")) sys.env("PORT").toInt else config.getInt("http.port")
Http().bindAndHandle(routes, config.getString("http.interface"), port)

Here I look for the PORT environment variable and use that port if available.

The "assembly" sbt plugin is already available in the project; it creates a fat jar with the appropriate manifest entries needed to start up the main class of the application:

> assembly

Go to "target/scala-2.11" folder and run the fat jar:
$ java -jar sample-akka-http-assembly-1.0.jar

And the application should come up cleanly.

Having a fat jar greatly simplifies the deployment to Cloud Foundry - in the Cloud Foundry world, a buildpack takes the application binaries and layers in the runtime (JVM, a container like Tomcat, application certs, monitoring agents etc.). Given this fat jar, all that is needed to deploy to Cloud Foundry is a command which looks like this:

$ cf push -p sample-akka-http-assembly-1.0.jar sample-akka-http  

Assuming that you are targeting the local PCF Dev environment, the application should get cleanly deployed using the appropriate buildpack (the Java buildpack in this instance) and be available to handle requests in a few minutes:


Which I can test using a curl command similar to the one I used before:
$ curl http://sample-akka-http.local.pcfdev.io/ip/8.8.8.8
{
  "city": "Mountain View",
  "query": "8.8.8.8",
  "country": "United States",
  "lon": -122.0881,
  "lat": 37.3845
}

That is all there is to it - if some customizations need to be made to the application, say a larger JVM heap size, this can be easily done via other command line flags or using an application manifest. The process to deploy with external resource dependencies is a little more complex and I will cover this in a follow-up post.

Tuesday, December 27, 2016

Practical Reactor operations - Retrieve Details of a Cloud Foundry Application

CF-Java-Client is a library which enables programmatic access to the Cloud Foundry Cloud Controller API. It is built on top of Project Reactor, an implementation of the Reactive Streams specification, and it is a fun exercise to use this library to do something practical in a Cloud Foundry environment.

Consider a sample use case - given an application id, I need to find more details of the application, along with the details of the organization and the space that it belongs to.

To start with, the basis of all API operations with cf-java-client is a type unsurprisingly called CloudFoundryClient (org.cloudfoundry.client.CloudFoundryClient); cf-java-client's github page has details on how to get hold of an instance of this type.

Given a CloudFoundryClient instance, the details of an application given its id can be obtained as follows:

Mono<GetApplicationResponse> applicationResponseMono = this.cloudFoundryClient
  .applicationsV2().get(GetApplicationRequest.builder().applicationId(applicationId).build());

Note that the API returns a reactor "Mono" type; this is in general the behavior of all the API calls of cf-java-client.


  • If an API returns one item then typically a Mono type is returned
  • If the API is expected to return more than one item then a Flux type is returned, and
  • If the API is called purely for side effects - say printing some information then it returns a Mono<Void> type


The next step is to retrieve the space identifier from the response and make an API call to retrieve the details of the space, which looks like this:

Mono<Tuple2<GetApplicationResponse, GetSpaceResponse>> appAndSpaceMono = applicationResponseMono
  .and(appResponse -> this.cloudFoundryClient.spaces()
    .get(GetSpaceRequest.builder()
      .spaceId(appResponse.getEntity().getSpaceId()).build()));



Here I am using the "and" operator to combine the application response with another Mono that returns the space information. The result is a "Tuple2" type holding both pieces of information - the application detail and the detail of the space that it is in.

Finally to retrieve the Organization that the app is deployed in:

Mono<Tuple3<GetApplicationResponse, GetSpaceResponse, GetOrganizationResponse>> t3 =
  appAndSpaceMono.then(tup2 -> this.cloudFoundryClient.organizations()
      .get(GetOrganizationRequest.builder()
        .organizationId(tup2.getT2().getEntity()
          .getOrganizationId())
        .build())
      .map(orgResp -> Tuples.of(tup2.getT1(), tup2.getT2(),
        orgResp)));

Here a "then" operation is being used to retrieve the organization detail given the id from the previous step and the result added onto the previous tuple to create a Tuple3 type holding the "Application Detail", "Space Detail" and the "Organization Detail". "then" is the equivalent of flatMap operator familiar in the Scala and ReactiveX world.

This essentially covers the way you would typically deal with the "cf-java-client" library, using the fact that it is built on the excellent "Reactor" library and its collection of very useful operators to pull results together. The final step is to transform the result to a type that may be more relevant to your domain and to handle any errors along the way:

Mono<AppDetail> appDetail =  
 t3.map(tup3 -> {
   String appName = tup3.getT1().getEntity().getName();
   String spaceName = tup3.getT2().getEntity().getName();
   String orgName = tup3.getT3().getEntity().getName();
   return new AppDetail(appName, orgName, spaceName);
  }).otherwiseReturn(new AppDetail("", "", ""));
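
Nothing in this chain actually executes until the resulting Mono is subscribed to. As a small illustration (not part of the sample project), a test or a simple command-line caller could just block for the result:

// block() subscribes to the Mono and waits for the single result -
// fine for a test or a quick command-line check, not for a fully reactive caller
AppDetail detail = appDetail.block();
System.out.println(detail);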


If you are interested in trying out a working sample, I have an example available in my github repo here - https://github.com/bijukunjummen/boot-firehose-to-syslog

And the code shown in the article is available here - https://github.com/bijukunjummen/boot-firehose-to-syslog/blob/master/src/main/java/io.pivotal.cf.nozzle/service/CfAppDetailsService.java


Sunday, December 18, 2016

Spring Boot and Application Context Hierarchy

Spring Boot supports a simple way of specifying a Spring application context hierarchy.

This post simply demonstrates this feature; I am yet to find a good use for it in the projects I have worked on. Spring Cloud uses this feature for creating a bootstrap context where properties are loaded up, if required, from an external configuration server, which is then made available to the main application context later on.

To quickly take a step back - a Spring application context manages the lifecycle of all the beans registered with it. Application context hierarchies provide a way to reuse beans: beans defined in the parent context are accessible in the child contexts.

Consider a contrived use-case for using multiple application contexts and an application context hierarchy - the goal is to listen on two different ports, with a different set of endpoints exposed on each of these ports.


Child1 and Child2 are typical Spring Boot Applications, along these lines:

package child1;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.PropertySource;
import root.RootBean;

@SpringBootApplication
@PropertySource("classpath:/child1.properties")
public class ChildContext1 {

    @Bean
    public ChildBean1 childBean(RootBean rootBean, @Value("${root.property}") String someProperty) {
        return new ChildBean1(rootBean, someProperty);
    }
}


Each of the applications resides in its own root package to avoid collisions when scanning for beans. Note that the beans in the child contexts depend on a bean that is expected to come from the root context.
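
For completeness, a minimal sketch of what the root context could look like - the RootContext and RootBean names here are simply inferred from the references in the child context above:

package root;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RootContext {

    // shared bean that the child contexts expect to find in the parent context
    @Bean
    public RootBean rootBean() {
        return new RootBean();
    }
}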

The port to listen on is provided as a property; since the two contexts are expected to listen on different ports, I have explicitly specified the property file to load for each, with content along these lines:

server.port=8080
spring.application.name=child1

Given this set-up, Spring Boot provides a fluent interface to load up the root context and the two child contexts:

SpringApplicationBuilder appBuilder =
       new SpringApplicationBuilder()
               .parent(RootContext.class)
               .child(ChildContext1.class)
               .sibling(ChildContext2.class);

ConfigurableApplicationContext applicationContext  = appBuilder.run();

The application context returned by the SpringApplicationBuilder appears to be the final one in the chain, defined via ChildContext2 above.

If the application is now started up, there would be a root context with two different child contexts, each exposing an endpoint via a different port. A visualization via the /beans actuator endpoint shows this:


Not everything is clean though: there are errors displayed in the console related to exporting JMX endpoints; however, these are informational and don't appear to affect the start-up.

Samples are available in my github repo.