Friday, May 19, 2017

Cloud Foundry Custom User Provided Services (CUPS) and tagging

Custom User Provided Services, or CUPS for short, are a way to deliver credentials for external services to an application hosted on Cloud Foundry.

Consider a set of credentials represented as JSON of the following form:

{
 "hostname": "mysql-broker.local.pcfdev.io",
 "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser\u0026password=somepass",
 "name": "somedb",
 "password": "somepass",
 "port": 3306,
 "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
 "username": "someuser"
}

I could create a user provided service out of these values using the cf CLI. The following is highly bash specific, so on a different shell your mileage is likely to vary:

CUPS_PARAM=$(cat <<-'EOF'
{
 "hostname": "mysql-broker.local.pcfdev.io",
 "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser\u0026password=somepass",
 "name": "somedb",
 "password": "somepass",
 "port": 3306,
 "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
 "username": "someuser"
}
EOF
)

cf create-user-provided-service mycups -p ''"$CUPS_PARAM"''

This custom user provided service can be bound to an app:

cf bind-service myapp mycups

and the application can retrieve the credentials at runtime via an environment variable called VCAP_SERVICES.
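
For instance, in a Java application the credentials could be pulled out of this environment variable along these lines - a minimal sketch, assuming Jackson is available on the classpath (the class name here is purely illustrative):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VcapServicesReader {
    public static void main(String[] args) throws Exception {
        // VCAP_SERVICES maps a service label to an array of bound service instances
        JsonNode vcapServices = new ObjectMapper()
                .readTree(System.getenv("VCAP_SERVICES"));

        // user provided services show up under the "user-provided" label
        JsonNode credentials = vcapServices
                .path("user-provided")
                .path(0)
                .path("credentials");

        System.out.println(credentials.path("jdbcUrl").asText());
    }
}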


Issue

There is one small issue with custom user provided services compared to regular services created via service brokers on Cloud Foundry - there is no simple way to tag a custom user provided service. Tags are sometimes useful in conveying a little more information about the service bound to an app, and are used extensively by Spring Cloud Connectors to connect to services.


Solution

I have written a custom service broker called the CUPS tagging broker. Using it, a service can be created with all the parameters normally passed when creating a CUPS; additionally, since it is a regular broker-backed service, it can be tagged.

Assuming that the "CUPS tagging broker" has been installed using the instructions here, an equivalent user provided service can be created the following way, with two tags attached to it:

cf create-service cups-tagging-service default my-cups-tagged -c ''"$CUPS_PARAM"'' -t "tag1, tag2"

If I were to bind this service to an app, the VCAP_SERVICES environment variable of the app would be along these lines:

{"cups-tagging-service":[{
  "credentials": {
    "hostname": "mysql-broker.local.pcfdev.io",
    "jdbcUrl": "jdbc:mysql://mysql-broker.local.pcfdev.io:3306/somedb?user=someuser&password=somepass",
    "name": "somedb",
    "password": "somepass",
    "port": 3306,
    "uri": "mysql://someuser:somepass@mysql-broker.local.pcfdev.io:3306/somedb?reconnect=true",
    "username": "someuser"
  },
  "syslog_drain_url": null,
  "volume_mounts": [

  ],
  "label": "cups-tagging-service",
  "provider": null,
  "plan": "default",
  "name": "my-cups-tagged",
  "tags": [
    "cups-tag",
    "tag1",
    "tag2"
  ]
}]}

See how the two additional tags show up, alongside the "cups-tag" that the broker itself adds.
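
This is precisely what makes the tags useful in practice - a library like Spring Cloud Connectors keys off tags to figure out the type of a bound service. As a hedged sketch, assuming the org.springframework.cloud connectors dependency is on the classpath and the service carries a "mysql" tag so that connectors treat it as a MySQL service:

import org.springframework.cloud.Cloud;
import org.springframework.cloud.CloudFactory;
import org.springframework.cloud.service.common.MysqlServiceInfo;

public class ConnectorSample {
    public static void main(String[] args) {
        Cloud cloud = new CloudFactory().getCloud();

        // connectors inspect the tags in VCAP_SERVICES to decide which
        // ServiceInfo implementation to create for a bound service
        MysqlServiceInfo serviceInfo =
                (MysqlServiceInfo) cloud.getServiceInfo("my-cups-tagged");

        System.out.println(serviceInfo.getJdbcUrl());
    }
}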

That is all there is to the CUPS tagging Service Broker!





Thursday, May 4, 2017

Integrating Gatling into a Gradle build - Understanding SourceSets and Configuration

I recently worked on a project where we had to integrate the excellent load testing tool Gatling into a Gradle based build. There are Gradle plugins available which make this easy (two of them being this and this); however, for most needs a simple execution of the command line tool itself suffices. So this post will go into some details of how Gatling can be hooked up into a Gradle build, and in the process cover some good Gradle concepts.


SourceSets and Configuration


To execute the Gatling CLI I need two things: a location for the source code and related content of the Gatling simulations, and a way to pull in the Gatling libraries. This is where two concepts of Gradle - SourceSets and Configurations - come into play.

Let us start with the first one - SourceSets.

SourceSets


SourceSets are simply a logical grouping of related files and are best demonstrated with an example. If I were to apply the "java" plugin to a Gradle build:

apply plugin: 'java'


the sourceSets property would now show up with two values, "main" and "test", and if I wanted to find the details of these sourceSets, a Gradle task can be used to print them:

task sourceSetDetails {
    doLast {
        sourceSets {
            main {
                println java.properties
                println resources.properties
            }
        
            test {
                println java.properties
                println resources.properties
            }
        }
    }
}

Coming back to Gatling, I can essentially create a new sourceSet to hold the simulations:

sourceSets {
    simulations
}

This would now expect the Gatling simulations to reside in the "src/simulations/java" folder and the resources related to them in "src/simulations/resources", which is okay, but ideally I would want to keep them totally separate from the project sources - with load simulations in a "simulations/load" folder and resources in a "simulations/resources" folder. This can be done by first applying the "scala" plugin, which brings Scala compilation support into the project, and then modifying the "simulations" source set along these lines:

apply plugin: 'scala'

sourceSets {
    simulations {
        scala {
            srcDirs = ['simulations/load']
        }
        resources {
            srcDirs = ['simulations/resources']
        }
    }
}

With this set of changes I can now put my simulations in the right place, but the Gatling and Scala dependencies have not been pulled in yet - this is where the "configuration" feature of Gradle comes in.

Configuration


A Gradle Configuration is a way of grouping related dependencies together. If I were to print the existing set of configurations using a task:

task showConfigurations  {
    doLast {
        configurations.all { conf -> println(conf) }
    }
}

these show up:

configuration ':archives'
configuration ':compile'
configuration ':compileClasspath'
configuration ':compileOnly'
configuration ':default'
configuration ':runtime'
configuration ':simulationsCompile'
configuration ':simulationsCompileClasspath'
configuration ':simulationsCompileOnly'
configuration ':simulationsRuntime'
configuration ':testCompile'
configuration ':testCompileClasspath'
configuration ':testCompileOnly'
configuration ':testRuntime'
configuration ':zinc'

"compile" and "testCompile" should be familiar one's, that is where a normal source dependency and a test dependency is typically declared like this:

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.21'
    testCompile 'junit:junit:4.12'   
}

It also looks like configurations for the "simulations" sourceSet are now available - "simulationsCompile", "simulationsRuntime" etc. - so I could declare the dependencies required for my Gatling simulations using these. However, my intention is to declare a custom configuration just to go over the concept a little more, so let us explicitly declare one:

configurations {
    gatling
}

and use this configuration to declare the Gatling dependencies:

dependencies {
    gatling 'org.scala-lang:scala-library:2.11.8'
    gatling 'io.gatling.highcharts:gatling-charts-highcharts:2.2.5'
}

Almost there - now how do we tell the sources in the simulations source set to use dependencies from the gatling configuration? By tweaking the sourceSet a little:

sourceSets {
    simulations {
        scala {
            srcDirs = ['simulations/load']
        }
        resources {
            srcDirs = ['simulations/resources']
        }

        compileClasspath += configurations.gatling
    }
}


Running a Gatling Scenario

With the source sets and the configuration defined, all we need to do is write a task to run a Gatling simulation, which can be along these lines:

task gatlingRun(type: JavaExec) {
    description = 'Run gatling tests'
    new File("${buildDir}/reports/gatling").mkdirs()

    classpath = sourceSets.simulations.runtimeClasspath + configurations.gatling

    main = "io.gatling.app.Gatling"
    args = ['-s', 'simulations.SimpleSimulation',
            '-sf', 'simulations/resources',
            '-df', 'simulations/resources',
            '-rf', "${buildDir}/reports/gatling"
    ]
}

See how the compiled sources of the simulations and the dependencies from the gatling configuration are being set as the classpath of the "JavaExec" task.


A good way to review this would be to look at a complete working sample that I have here in my github repo - https://github.com/bijukunjummen/cf-show-env

Friday, April 21, 2017

Spring Web-Flux - Functional Style with Cassandra Backend

In a previous post I walked through the basics of Spring Web-Flux, which denotes the reactive support in the web layer of the Spring framework.

I demonstrated an end-to-end sample using Spring Data Cassandra and the traditional annotation support in the Spring web layer, along these lines:

...
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
...

@RestController
@RequestMapping("/hotels")
public class HotelController {

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        ...
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        ...
    }

}

This looks like the traditional Spring Web annotations except for the return types - instead of returning the domain types directly, these endpoints return the Publisher type via its reactor-core implementations, Mono and Flux, and Spring Web handles streaming the content back.


In this post I will cover a different way of exposing the endpoints - using a functional style instead of the annotation style. Let me acknowledge that I have found Baeldung's article and Rossen Stoyanchev's post invaluable in understanding the functional style of exposing web endpoints.


Mapping the annotations to routes

Let me start with a few annotation based endpoints, one to retrieve an entity and one to save an entity:

@GetMapping(path = "/{id}")
public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
    return this.hotelService.findOne(uuid);
}

@PostMapping
public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
    return this.hotelService.save(hotel)
            .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
}


In a functional style of exposing the endpoints, each endpoint translates to a RouterFunction, and these can be composed to create all the endpoints of the app, along these lines:

package cass.web;

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                ));
    }
}


There are helper functions (nest, route, GET, accept etc.) which make composing the RouterFunction(s) together a breeze. Once a matching RouterFunction is found, the request is handled by a HandlerFunction, which in the above sample is abstracted by the HotelHandler; for the save and get functionality it looks like this:

import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.UUID;

@Service
public class HotelHandler {

    ...
    
    public Mono<ServerResponse> get(ServerRequest request) {
        UUID uuid = UUID.fromString(request.pathVariable("id"));
        Mono<ServerResponse> notFound = ServerResponse.notFound().build();
        return this.hotelService.findOne(uuid)
                .flatMap(hotel -> ServerResponse.ok().body(Mono.just(hotel), Hotel.class))
                .switchIfEmpty(notFound);
    }

    public Mono<ServerResponse> save(ServerRequest serverRequest) {
        Mono<Hotel> hotelToBeCreated = serverRequest.bodyToMono(Hotel.class);
        return hotelToBeCreated.flatMap(hotel ->
                ServerResponse.status(HttpStatus.CREATED).body(hotelService.save(hotel), Hotel.class)
        );
    }

    ...
}    


This is what a complete RouterFunction for all the APIs supported by the original annotation based project looks like:

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                                .andRoute(PUT("/"), hotelHandler::update)
                                .andRoute(DELETE("/{id}"), hotelHandler::delete)
                                .andRoute(GET("/startingwith/{letter}"), hotelHandler::findHotelsWithLetter)
                                .andRoute(GET("/fromstate/{state}"), hotelHandler::findHotelsInState)
                ));
    }
}
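
One detail not shown above is how this RouterFunction gets registered with the application - a minimal sketch, assuming a Spring Boot 2 style WebFlux application (where RouterFunction beans are picked up automatically) and that this configuration sits in the same package as ApplicationRoutes and HotelHandler:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;

@Configuration
public class RoutesConfiguration {

    // Spring WebFlux discovers RouterFunction beans and maps the routes they define
    @Bean
    public RouterFunction<?> applicationRoutes(HotelHandler hotelHandler) {
        return ApplicationRoutes.routes(hotelHandler);
    }
}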

Testing functional Routes

It is easy to test these routes as well - Spring WebFlux provides a WebTestClient to test the routes, while allowing the implementations behind them to be mocked.

For example, to test the get-by-id endpoint, I would bind the WebTestClient to the RouterFunction defined before and use the assertions that it provides to test the behavior:

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.UUID;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;


public class GetRouteTests {

    private WebTestClient client;
    private HotelService hotelService;

    private UUID sampleUUID = UUID.fromString("fd28ec06-6de5-4f68-9353-59793a5bdec2");

    @Before
    public void setUp() {
        this.hotelService = mock(HotelService.class);
        when(hotelService.findOne(sampleUUID)).thenReturn(Mono.just(new Hotel(sampleUUID, "test")));
        HotelHandler hotelHandler = new HotelHandler(hotelService);
        
        this.client = WebTestClient.bindToRouterFunction(ApplicationRoutes.routes(hotelHandler)).build();
    }

    @Test
    public void testHotelGet() throws Exception {
        this.client.get().uri("/hotels/" + sampleUUID)
                .exchange()
                .expectStatus().isOk()
                .expectBody(Hotel.class)
                .isEqualTo(new Hotel(sampleUUID, "test"));
    }
}

Conclusion

The functional way of defining the routes is definitely a very different approach from the annotation based one - I like that it is a far more explicit way of defining an endpoint and how calls to the endpoint are handled; the annotations have always felt a little more magical.

I have complete working code in my github repo which may be easier to follow than the code in this post.

Saturday, April 1, 2017

Hystrix Command - Java 8 helpers

Let me start by acknowledging that what I am posting here is far from original - it is inspired by the post here by Demian Neidetcher, which was further adapted by two of my former colleagues - Alexey Dmitrovsky (T-Mobile) and Pavel Orda (Altoros).


Motivation

The motivation is fairly simple - consider two remote calls, the results of which are aggregated in some way:

String  r1 = remoteCall1();
Integer r2 = remoteCall2();

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

Ideally you would want the remote calls to be protected by the excellent Hystrix library - what if I could do it along these lines:

String  r1 = execute("remote1", "remote1", () -> remoteCall1());
Integer r2 = execute("remote2", "remote2", () -> remoteCall2());

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

This way I have avoided all the boilerplate of defining an explicit HystrixCommand around each of my remote calls, and instead wrapped the remote calls in Java 8 lambda expressions which resolve to the Supplier functional interface.
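
For contrast, this is roughly the per-call boilerplate that gets avoided - a sketch of what an explicit command for the first remote call would look like:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class RemoteCall1Command extends HystrixCommand<String> {

    public RemoteCall1Command() {
        super(HystrixCommandGroupKey.Factory.asKey("remote1"));
    }

    @Override
    protected String run() throws Exception {
        // stand-in for the remoteCall1() invocation from the earlier snippet
        return "result";
    }
}

// used as: String r1 = new RemoteCall1Command().execute();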

Even better, a variation of this allows me to aggregate the results in a reactive way, by returning an Rx-Java Observable instead:

Observable<String>  r1Obs = executeObservable("remote1", "remote1", () -> remoteCall1());
Observable<Integer> r2Obs = executeObservable("remote2", "remote2", () -> remoteCall2());

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("result1");

What about fallbacks? These are supported by taking in another lambda expression which transforms an exception into a reasonable fallback value (and logs the exception in the process):


Observable<String> r1Obs = executeObservable("remote1", "remote1",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return "fallback";
        });
Observable<Integer> r2Obs = executeObservable("remote2", "remote2",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return 0;
        });

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("fallback0");


Implementation


The implementation is fairly simple, and in its entirety looks like the following:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandKey;
import rx.Observable;

import java.util.function.Function;
import java.util.function.Supplier;

public class GenericHystrixCommand<T> extends HystrixCommand<T> {

    private Supplier<T> toRun;

    private Function<Throwable, T> fallback;


    public static <T> T execute(String groupKey, String commandkey, Supplier<T> toRun) {
        return execute(groupKey, commandkey, toRun, null);
    }

    public static <T> T execute(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback).execute();
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun) {
        return executeObservable(groupKey, commandkey, toRun, null);
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback)
                .toObservable();
    }

    public GenericHystrixCommand(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
                .andCommandKey(HystrixCommandKey.Factory.asKey(commandkey)));
        this.toRun = toRun;
        this.fallback = fallback;
    }

    @Override
    protected T run() throws Exception {
        return this.toRun.get();
    }

    @Override
    protected T getFallback() {
        return (this.fallback != null)
                ? this.fallback.apply(getExecutionException())
                : super.getFallback();
    }
}


All it does is take in the code that needs to be wrapped as a Java 8 Supplier, and the fallback as a Java 8 Function.
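
The call sites shown earlier assume static imports of these helper methods, along these lines (the package name here is hypothetical):

import static util.GenericHystrixCommand.execute;
import static util.GenericHystrixCommand.executeObservable;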


If you are interested in playing with this pattern, I have a slightly more fleshed-out sample here in my github repo.

Sunday, March 19, 2017

Spring Web-Flux - First steps

The term Spring Web-Flux denotes the reactive programming support in the web layer of the Spring Framework. It supports creating reactive server based web applications and also provides client libraries to make remote REST calls.

In this post, I will demonstrate a sample web application which makes use of Spring Web-Flux. As detailed here, the Web-Flux support in Spring 5+ supports two different programming styles - the traditional annotation based style and the new functional style. In this post I will be sticking to the traditional annotation style, and will follow up in another blog post (now available here) detailing a similar application but with endpoints defined in a functional style. My focus is going to be purely the programming model.

Data and Services Layer


I have a fairly simple REST interface supporting CRUD operations of a Hotel resource with a structure along these lines:

public class Hotel {

    private UUID id;

    private String name;

    private String address;

    private String state;

    private String zip;
    
    ....

}

I am using Cassandra as the store for this entity, and the reactive support in Spring Data Cassandra allows the data layer to be reactive, supporting an API that looks like this - there are two repositories here, one facilitating the storage of the Hotel entity above, and another maintaining duplicated data which makes searching for a Hotel entity by its first letter a little more efficient:

public interface HotelRepository  {
    Mono<Hotel> save(Hotel hotel);
    Mono<Hotel> update(Hotel hotel);
    Mono<Hotel> findOne(UUID hotelId);
    Mono<Boolean> delete(UUID hotelId);
    Flux<Hotel> findByState(String state);
}

public interface HotelByLetterRepository {
    Flux<HotelByLetter> findByFirstLetter(String letter);
    Mono<HotelByLetter> save(HotelByLetter hotelByLetter);
    Mono<Boolean> delete(HotelByLetterKey hotelByLetterKey);
}


The operations which return one instance of an entity now return a Mono type and operations which return more than one element return a Flux type.


Given this, let me touch on one quick use of the reactive types - when a Hotel is updated I have to delete the duplicated data maintained via the HotelByLetter repository and recreate it again. This can be accomplished along the following lines, using the excellent operators provided by the Flux and Mono types:

public Mono<Hotel> update(Hotel hotel) {
    return this.hotelRepository.findOne(hotel.getId())
            .flatMap(existingHotel ->
                    this.hotelByLetterRepository.delete(new HotelByLetter(existingHotel).getHotelByLetterKey())
                            .then(this.hotelByLetterRepository.save(new HotelByLetter(hotel)))
                            .then(this.hotelRepository.update(hotel))).next();
}


Web Layer

Now to the focus of the article - support for the annotation based reactive programming model in the web layer!

The @Controller and @RestController annotations have been the workhorses of Spring MVC's REST endpoint support for years now; traditionally they have enabled taking in and returning Java POJOs. In the reactive model, these controllers have been tweaked to take in and return the reactive types - Mono and Flux in my examples - but additionally also the Rx-Java 1/2 and Reactive Streams types.

Given this, my controller in almost its entirety looks like this:

@RestController
@RequestMapping("/hotels")
public class HotelController {

    ....

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        return this.hotelService.findOne(uuid);
    }

    @PostMapping
    public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
        return this.hotelService.save(hotel)
                .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
    }

    @PutMapping
    public Mono<ResponseEntity<Hotel>> update(@RequestBody Hotel hotel) {
        return this.hotelService.update(hotel)
                .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED))
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @DeleteMapping(path = "/{id}")
    public Mono<ResponseEntity<String>> delete(
            @PathVariable("id") UUID uuid) {
        return this.hotelService.delete(uuid).map((Boolean status) ->
                new ResponseEntity<>("Deleted", HttpStatus.ACCEPTED));
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        return this.hotelService.findHotelsStartingWith(letter);
    }

    @GetMapping(path = "/fromstate/{state}")
    public Flux<Hotel> findHotelsInState(
            @PathVariable("state") String state) {
        return this.hotelService.findHotelsInState(state);
    }
}

The traditional @RequestMapping, @GetMapping and @PostMapping annotations are unchanged; what is different is the return types - for instances where at most one result is expected I am now returning a Mono type, and where a list would have been returned before, a Flux type is returned.

With the use of the reactive support in Spring Data Cassandra, the entire path from the web layer to the services and back is reactive and, specifically for the focus of this article, eminently readable and intuitive.


It may be easier to simply try out the code behind this post which I have available in my github repo here.

Tuesday, February 28, 2017

Using UAA OAuth2 authorization server - client and resource

In a previous post I had gone over how to bring up an OAuth2 authorization server using the Cloud Foundry UAA project and populate it with some of the actors involved in an OAuth2 Authorization Code flow.


I have found that this article at the Digital Ocean site does a great job of describing the OAuth2 Authorization Code flow, so instead of rehashing what is involved I will jump directly into implementing the flow using Spring Boot/Spring Security.

The following diagram, inspired by the one here, shows a high level flow in an Authorization Code grant type:




I will have two applications - a resource server exposing some resources of a user, and a client application that wants to access those resources on behalf of a user. The Authorization server itself can be brought up as described in the previous blog post.

The rest of the post can be more easily followed along with the code available in my github repo here.

Authorization Server

The Cloud Foundry UAA server can be easily brought up using the steps described in my previous blog post. Once it is up, the following uaac commands can be used to populate the different credentials required to run the sample.

These scripts will create a client credential for the client app, and add a user called "user1" with the scopes "resource.read" and "resource.write".

# Login as a canned client
uaac token client get admin -s adminsecret

# Add a client credential with client_id of client1 and client_secret of client1
uaac client add client1 \
   --name client1 \
   --scope resource.read,resource.write \
   -s client1 \
   --authorized_grant_types authorization_code,refresh_token,client_credentials \
   --authorities uaa.resource


# Another client credential resource1/resource1
uaac client add resource1 \
  --name resource1 \
  -s resource1 \
  --authorized_grant_types client_credentials \
  --authorities uaa.resource


# Add a user called user1/user1
uaac user add user1 -p user1 --emails user1@user1.com


# Add two scopes resource.read, resource.write
uaac group add resource.read
uaac group add resource.write

# Assign user1 both resource.read, resource.write scopes..
uaac member add resource.read user1
uaac member add resource.write user1


Resource Server

The resource server exposes a few endpoints, expressed using Spring MVC and secured using Spring Security, the following way:

@RestController
public class GreetingsController {
    @PreAuthorize("#oauth2.hasScope('resource.read')")
    @RequestMapping(method = RequestMethod.GET, value = "/secured/read")
    @ResponseBody
    public String read(Authentication authentication) {
        return String.format("Read Called: Hello %s", authentication.getCredentials());
    }

    @PreAuthorize("#oauth2.hasScope('resource.write')")
    @RequestMapping(method = RequestMethod.GET, value = "/secured/write")
    @ResponseBody
    public String write(Authentication authentication) {
        return String.format("Write Called: Hello %s", authentication.getCredentials());
    }
}

There are two endpoint URIs being exposed - "/secured/read", authorized for scope "resource.read", and "/secured/write", authorized for scope "resource.write".

The configuration which secures these endpoints and marks the application as a resource server is the following:

@Configuration
@EnableResourceServer
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true, prePostEnabled = true)
public class ResourceServerConfiguration extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
        resources.resourceId("resource");
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http
                .antMatcher("/secured/**")
                .authorizeRequests()
                .anyRequest().authenticated();
    }
}

This configuration, along with properties describing how the token is to be validated, is all that is required to get the resource server running.
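
The exact properties depend on the Spring Boot and Spring Security OAuth versions in play; as a sketch, a Spring Boot 1.x style application.yml pointing the resource server at the local UAA's token verification key could look along these lines (treat this as an assumption rather than the sample's exact configuration):

security:
  oauth2:
    resource:
      jwt:
        key-uri: http://localhost:8080/uaa/token_key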


Client

The client configuration for OAuth2 using Spring Security OAuth2 is also fairly simple - the @EnableOAuth2Sso annotation pulls in all the required configuration to wire up the Spring Security filters for OAuth2 flows:

@EnableOAuth2Sso
@Configuration
public class OAuth2SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(WebSecurity web) throws Exception {
        super.configure(web);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable();

        //@formatter:off
        http.authorizeRequests()
                .antMatchers("/secured/**")
                    .authenticated()
                .antMatchers("/")
                    .permitAll()
                .anyRequest()
                    .authenticated();

        //@formatter:on

    }

}

To call a downstream system, the client has to pass on the OAuth token as a header in the downstream calls. This is done by hooking in a specialized RestTemplate called OAuth2RestTemplate, which can grab the access token from the context and pass it downstream; once it is hooked up, a secure downstream call looks like this:

public class DownstreamServiceHandler {

    private final OAuth2RestTemplate oAuth2RestTemplate;
    private final String resourceUrl;


    public DownstreamServiceHandler(OAuth2RestTemplate oAuth2RestTemplate, String resourceUrl) {
        this.oAuth2RestTemplate = oAuth2RestTemplate;
        this.resourceUrl = resourceUrl;
    }


    public String callRead() {
        return callDownstream(String.format("%s/secured/read", resourceUrl));
    }

    public String callWrite() {
        return callDownstream(String.format("%s/secured/write", resourceUrl));
    }

    public String callInvalidScope() {
        return callDownstream(String.format("%s/secured/invalid", resourceUrl));
    }

    private String callDownstream(String uri) {
        try {
            ResponseEntity<String> responseEntity = this.oAuth2RestTemplate.getForEntity(uri, String.class);
            return responseEntity.getBody();
        } catch(HttpStatusCodeException statusCodeException) {
            return statusCodeException.getResponseBodyAsString();
        }
    }
}
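
The OAuth2RestTemplate itself can be wired up as a bean - a hedged sketch, assuming the resource details and client context beans that the OAuth2 SSO support makes available:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@Configuration
public class RestTemplateConfiguration {

    // the client context holds the user's access token; OAuth2RestTemplate
    // relays it as an Authorization header on each downstream call
    @Bean
    public OAuth2RestTemplate oAuth2RestTemplate(OAuth2ProtectedResourceDetails resourceDetails,
                                                 OAuth2ClientContext clientContext) {
        return new OAuth2RestTemplate(resourceDetails, clientContext);
    }
}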


Demonstration

The client and the resource server can be brought up using the instructions here. Once all the systems are up, accessing the client will present the user with a page which looks like this:


Accessing the secure page, will result in a login page being presented by the authorization server:



The client is requesting the "resource.read" and "resource.write" scopes from the user, and the user is prompted to authorize these scopes:


Assuming that the user has authorized "resource.read" but not "resource.write", the token will be presented to the user:

At this point, if a downstream resource requiring the "resource.read" scope is requested, it should be retrieved successfully:


And if a downstream resource is requested with a scope that the user has not authorized - "resource.write" in this instance - the call is rejected:



Reference

  • Most of the code is based on the Cloud Foundry UAA application samples available here - https://github.com/pivotal-cf/identity-sample-apps
  • The code in the post is here: https://github.com/bijukunjummen/oauth-uaa-sample

Tuesday, February 14, 2017

Bootstrapping an OAuth2 Authorization server using UAA

A quick way to get a robust OAuth2 server running on your local machine is to use the excellent Cloud Foundry UAA project. UAA is used as the underlying OAuth2 authorization server in Cloud Foundry deployments and can scale massively, but is still small enough to be booted up on modest hardware.

I will cover using the UAA in two posts. In this post, I will go over how to get a local UAA server running and populate it with some of the actors involved in an OAuth2 authorization_code flow - clients and users. In a follow-up post I will show how to use this authorization server with a sample client application and for securing a resource.

Starting up the UAA

The repository for the UAA project is at https://github.com/cloudfoundry/uaa


Downloading the project is simple - just clone this repo:
git clone https://github.com/cloudfoundry/uaa

If you have a local JDK available, start it up using:
./gradlew run

This version of UAA uses an in-memory database, so the test data generated will be lost on restart of the application.


Populate some data

An awesome way to interact with UAA is via its companion CLI application called uaac, available here. Assuming that you have the uaac CLI downloaded and UAA started up at its default port of 8080, let us start by pointing uaac to the UAA application:

uaac target http://localhost:8080/uaa

and log into it using one of the canned client credentials (admin/adminsecret):

uaac token client get admin -s adminsecret

Now that a client has logged in, the token can be explored using:
uaac context

This would display the details of the token issued by UAA, along these lines:

[3]*[http://localhost:8080/uaa]

  [2]*[admin]
      client_id: admin
      access_token: eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJkOTliMjg1MC1iZDQ1LTRlOTctODIyZS03NGE2MmUwN2Y0YzUiLCJzdWIiOiJhZG1pbiIsImF1dGhvcml0aWVzIjpbImNsaWVudHMucmVhZCIsImNsaWVudHMuc2VjcmV0IiwiY2xpZW50cy53cml0ZSIsInVhYS5hZG1pbiIsImNsaWVudHMuYWRtaW4iLCJzY2ltLndyaXRlIiwic2NpbS5yZWFkIl0sInNjb3BlIjpbImNsaWVudHMucmVhZCIsImNsaWVudHMuc2VjcmV0IiwiY2xpZW50cy53cml0ZSIsInVhYS5hZG1pbiIsImNsaWVudHMuYWRtaW4iLCJzY2ltLndyaXRlIiwic2NpbS5yZWFkIl0sImNsaWVudF9pZCI6ImFkbWluIiwiY2lkIjoiYWRtaW4iLCJhenAiOiJhZG1pbiIsImdyYW50X3R5cGUiOiJjbGllbnRfY3JlZGVudGlhbHMiLCJyZXZfc2lnIjoiZTc4YjAyMTMiLCJpYXQiOjE0ODcwMzk3NzYsImV4cCI6MTQ4NzA4Mjk3NiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3VhYS9vYXV0aC90b2tlbiIsInppZCI6InVhYSIsImF1ZCI6WyJhZG1pbiIsImNsaWVudHMiLCJ1YWEiLCJzY2ltIl19.B-RmeIvYttxJOMr_CX1Jsinsr6G_e8dVU-Fv-3Qq1ow
      token_type: bearer
      expires_in: 43199
      scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
      jti: d99b2850-bd45-4e97-822e-74a62e07f4c5

To see a more readable, decoded form of the token, just run:
uaac token decode
which should display the decoded form of the token:
jti: d99b2850-bd45-4e97-822e-74a62e07f4c5
  sub: admin
  authorities: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
  scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
  client_id: admin
  cid: admin
  azp: admin
  grant_type: client_credentials
  rev_sig: e78b0213
  iat: 1487039776
  exp: 1487082976
  iss: http://localhost:8080/uaa/oauth/token
  zid: uaa
  aud: admin clients uaa scim


Now, to create a brand new client (called client1), which I will be using in a follow-on post:

uaac client add client1  \
  --name client1 --scope resource.read,resource.write \
  --autoapprove true  \
  -s client1 \
  --authorized_grant_types authorization_code,refresh_token,client_credentials \
  --authorities uaa.resource

This client is going to request the resource.read and resource.write scopes from users, and will participate in authorization_code grant-type OAuth2 flows.


Next, create a resource owner - a user of the system:

uaac user add user1 -p user1 --emails user1@user1.com

and assign this user the resource.read scope:

uaac group add resource.read
uaac member add resource.read user1


Exercise a test flow

Now that we have a client and a resource owner, let us exercise a quick authorization_code flow. uaac provides a handy command line option that sets up the necessary redirect hooks to capture the auth code and transform it into an access token.

uaac token authcode get -c client1 -s client1 --no-cf

Invoking the above command should open up a browser window and prompt for user credentials:



Logging in with the user1/user1 user that was created previously should result in a message on the command line that the token has been successfully fetched; this can be explored once more using the following command:

uaac context

with the output showing the details of the logged-in user:
jti: c8ddfdfc-9317-4f16-b3a9-808efa76684b
  nonce: 43c8d9f7d6264fb347ede40c1b7b44ae
  sub: 7fdd9a7e-5b92-42e7-ae75-839e21b932e1
  scope: resource.read
  client_id: client1
  cid: client1
  azp: client1
  grant_type: authorization_code
  user_id: 7fdd9a7e-5b92-42e7-ae75-839e21b932e1
  origin: uaa
  user_name: user1
  email: user1@user1.com
  auth_time: 1487040497
  rev_sig: c107f5c0
  iat: 1487040497
  exp: 1487083697
  iss: http://localhost:8080/uaa/oauth/token
  zid: uaa
  aud: resource client1

This concludes the whirlwind tour of setting up a local UAA and adding a couple of the roles involved in an OAuth2 flow - a client and a user. I have not covered the OAuth2 flows themselves; the Digital Ocean intro to OAuth2 is a very good primer on the flows.

I will follow this up with a post on how this infrastructure can be used to secure a sample resource, and demonstrate a flow using Spring Security and Spring Boot.