Friday, April 21, 2017

Spring Web-Flux - Functional Style with Cassandra Backend

In a previous post I walked through the basics of Spring Web-Flux, the reactive support in the web layer of the Spring framework.

There I demonstrated an end-to-end sample using Spring Data Cassandra and the traditional annotation support in the Spring web layer, along these lines:

...
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
...

@RestController
@RequestMapping("/hotels")
public class HotelController {

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        ...
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        ...
    }

}

This looks like the traditional Spring Web annotation style except for the return types: instead of returning the domain types directly, these endpoints return the Publisher type via the Mono and Flux implementations in reactor-core, and Spring Web handles streaming the content back.


In this post I will cover a different way of exposing the endpoints - using a functional style instead of the annotation style. Let me acknowledge that I have found Baeldung's article and Rossen Stoyanchev's post invaluable in understanding the functional style of exposing web endpoints.


Mapping the annotations to routes

Let me start with a few annotation based endpoints, one to retrieve an entity and one to save an entity:

@GetMapping(path = "/{id}")
public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
    return this.hotelService.findOne(uuid);
}

@PostMapping
public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
    return this.hotelService.save(hotel)
            .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
}


In a functional style of exposing the endpoints, each endpoint translates to a RouterFunction, and these can be composed to create all the endpoints of the app, along these lines:

package cass.web;

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                ));
    }
}


There are helper functions (nest, route, GET, accept, etc.) which make composing the RouterFunction(s) together a breeze. Once an appropriate RouterFunction is found, the request is handled by a HandlerFunction, which in the above sample is abstracted by the HotelHandler; for the save and get functionality it looks like this:

import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.UUID;

@Service
public class HotelHandler {

    ...
    
    public Mono<ServerResponse> get(ServerRequest request) {
        UUID uuid = UUID.fromString(request.pathVariable("id"));
        Mono<ServerResponse> notFound = ServerResponse.notFound().build();
        return this.hotelService.findOne(uuid)
                .flatMap(hotel -> ServerResponse.ok().body(Mono.just(hotel), Hotel.class))
                .switchIfEmpty(notFound);
    }

    public Mono<ServerResponse> save(ServerRequest serverRequest) {
        Mono<Hotel> hotelToBeCreated = serverRequest.bodyToMono(Hotel.class);
        return hotelToBeCreated.flatMap(hotel ->
                ServerResponse.status(HttpStatus.CREATED).body(hotelService.save(hotel), Hotel.class)
        );
    }

    ...
}    


This is how a complete RouterFunction for all the APIs supported by the original annotation-based project looks:

import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.RouterFunction;

import static org.springframework.web.reactive.function.server.RequestPredicates.*;
import static org.springframework.web.reactive.function.server.RouterFunctions.*;

public interface ApplicationRoutes {
    static RouterFunction<?> routes(HotelHandler hotelHandler) {
        return nest(path("/hotels"),
                nest(accept(MediaType.APPLICATION_JSON),
                        route(GET("/{id}"), hotelHandler::get)
                                .andRoute(POST("/"), hotelHandler::save)
                                .andRoute(PUT("/"), hotelHandler::update)
                                .andRoute(DELETE("/{id}"), hotelHandler::delete)
                                .andRoute(GET("/startingwith/{letter}"), hotelHandler::findHotelsWithLetter)
                                .andRoute(GET("/fromstate/{state}"), hotelHandler::findHotelsInState)
                ));
    }
}
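To actually serve these routes from a Spring Boot application, the composed RouterFunction can be exposed as a bean, which WebFlux's RouterFunctionMapping picks up automatically. A minimal sketch, assuming Spring Boot's WebFlux auto-configuration (the Application class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.reactive.function.server.RouterFunction;

@SpringBootApplication
public class Application {

    // expose the composed routes as a bean so that WebFlux can map requests to them
    @Bean
    public RouterFunction<?> hotelRoutes(HotelHandler hotelHandler) {
        return ApplicationRoutes.routes(hotelHandler);
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}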

Testing functional Routes

It is easy to test these routes too - Spring WebFlux provides a WebTestClient to exercise the routes, while providing the ability to mock the implementations behind them.

For example, to test the get-by-id endpoint, I would bind the WebTestClient to the RouterFunction defined before and use the assertions that it provides to test the behavior.

import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.UUID;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;


public class GetRouteTests {

    private WebTestClient client;
    private HotelService hotelService;

    private UUID sampleUUID = UUID.fromString("fd28ec06-6de5-4f68-9353-59793a5bdec2");

    @Before
    public void setUp() {
        this.hotelService = mock(HotelService.class);
        when(hotelService.findOne(sampleUUID)).thenReturn(Mono.just(new Hotel(sampleUUID, "test")));
        HotelHandler hotelHandler = new HotelHandler(hotelService);
        
        this.client = WebTestClient.bindToRouterFunction(ApplicationRoutes.routes(hotelHandler)).build();
    }

    @Test
    public void testHotelGet() throws Exception {
        this.client.get().uri("/hotels/" + sampleUUID)
                .exchange()
                .expectStatus().isOk()
                .expectBody(Hotel.class)
                .isEqualTo(new Hotel(sampleUUID, "test"));
    }
}
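The not-found path through the handler (the switchIfEmpty seen earlier) can be verified the same way; a sketch of an additional test in the same class, with the mock returning an empty Mono for an id that does not exist (missingUUID is illustrative):

    @Test
    public void testHotelGetNotFound() throws Exception {
        UUID missingUUID = UUID.randomUUID();
        // an empty Mono simulates a missing entity
        when(hotelService.findOne(missingUUID)).thenReturn(Mono.empty());

        this.client.get().uri("/hotels/" + missingUUID)
                .exchange()
                .expectStatus().isNotFound();
    }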

Conclusion

The functional way of defining the routes is definitely a very different approach from the annotation-based one - I like that it is a far more explicit way of defining an endpoint and how the calls for the endpoint are handled; the annotations always felt a little more magical.

A complete working sample is in my github repo, which may be easier to follow than the snippets in this post.

Saturday, April 1, 2017

Hystrix Command - Java 8 helpers

Let me start by acknowledging that what I am posting here is far from original; it is inspired by the post here by Demian Neidetcher, which was further adapted by two of my former colleagues - Alexey Dmitrovsky (T-Mobile) and Pavel Orda (Altoros).


Motivation

The motivation is fairly simple: consider two remote calls whose results are aggregated in some way:

String  r1 = remoteCall1();
Integer r2 = remoteCall2();

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

Ideally you would want the remote calls to be protected by the excellent Hystrix library. What if I could do it along these lines instead:

String  r1 = execute("remote1", "remote1", () -> remoteCall1());
Integer r2 = execute("remote2", "remote2", () -> remoteCall2());

String aggregated = r1 + r2;
assertThat(aggregated).isEqualTo("result1");

This way I have avoided all the boilerplate around defining an explicit HystrixCommand for each of my remote calls, and instead wrapped the remote calls in a Java 8 lambda expression which resolves to a Supplier functional interface.
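For contrast, the traditional approach is a dedicated HystrixCommand subclass per remote call; a minimal sketch of that boilerplate for the first call (the class name and the placeholder remote call are illustrative):

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class RemoteCall1Command extends HystrixCommand<String> {

    public RemoteCall1Command() {
        super(HystrixCommandGroupKey.Factory.asKey("remote1"));
    }

    @Override
    protected String run() throws Exception {
        // the remote call being protected would go here
        return remoteCall1();
    }

    private String remoteCall1() {
        // placeholder for the actual remote work
        return "result";
    }
}

Each such command is invoked with new RemoteCall1Command().execute() - one class per remote call adds up quickly, which is exactly the boilerplate being avoided.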

Even better, a variation of this allows me to aggregate the results in a reactive way by returning an Rx-java Observable instead:

Observable<String>  r1Obs = executeObservable("remote1", "remote1", () -> remoteCall1());
Observable<Integer> r2Obs = executeObservable("remote2", "remote2", () -> remoteCall2());

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("result1");

What about fallbacks? I can support them by taking in another lambda expression which transforms an exception into a reasonable fallback (and logs the exception in the process):


Observable<String> r1Obs = executeObservable("remote1", "remote1",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return "fallback";
        });
Observable<Integer> r2Obs = executeObservable("remote2", "remote2",
        () -> {
            throw new RuntimeException("!!");
        },
        (t) -> {
            logger.error(t.getMessage(), t);
            return 0;
        });

String aggregated = Observable.zip(r1Obs, r2Obs, (r1, r2) -> (r1 + r2)).toBlocking().single();

assertThat(aggregated).isEqualTo("fallback0");


Implementation


The implementation is fairly simple and in its entirety is the following:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandKey;
import rx.Observable;

import java.util.function.Function;
import java.util.function.Supplier;

public class GenericHystrixCommand<T> extends HystrixCommand<T> {

    private Supplier<T> toRun;

    private Function<Throwable, T> fallback;


    public static <T> T execute(String groupKey, String commandkey, Supplier<T> toRun) {
        return execute(groupKey, commandkey, toRun, null);
    }

    public static <T> T execute(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback).execute();
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun) {
        return executeObservable(groupKey, commandkey, toRun, null);
    }

    public static <T> Observable<T> executeObservable(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        return new GenericHystrixCommand<>(groupKey, commandkey, toRun, fallback)
                .toObservable();
    }

    public GenericHystrixCommand(String groupKey, String commandkey, 
               Supplier<T> toRun, Function<Throwable, T> fallback) {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
                .andCommandKey(HystrixCommandKey.Factory.asKey(commandkey)));
        this.toRun = toRun;
        this.fallback = fallback;
    }

    @Override
    protected T run() throws Exception {
        return this.toRun.get();
    }

    @Override
    protected T getFallback() {
        return (this.fallback != null)
                ? this.fallback.apply(getExecutionException())
                : super.getFallback();
    }
}


All it does is take in the code that needs to be wrapped as a Java 8 Supplier, and the fallback as a Java 8 Function.


If you are interested in playing with this pattern, I have a little more fleshed out sample here in my github repo.

Sunday, March 19, 2017

Spring Web-Flux - First steps

The term Spring Web-Flux denotes the reactive programming support in the web layer of the Spring Framework. It provides support both for creating reactive server-based web applications and for making remote REST calls through its client libraries.

In this post, I will demonstrate a sample web application which makes use of Spring Web-Flux. As detailed here, the Web-Flux support in Spring 5+ supports two different programming styles - the traditional annotation-based style and the new functional style. In this post I will stick to the traditional annotation style, and will follow up in another blog post (now available here) detailing a similar application with endpoints defined in a functional style. My focus is going to be purely the programming model.

Data and Services Layer


I have a fairly simple REST interface supporting CRUD operations of a Hotel resource with a structure along these lines:

public class Hotel {

    private UUID id;

    private String name;

    private String address;

    private String state;

    private String zip;
    
    ....

}

I am using Cassandra as the store for this entity, and the reactive support in Spring Data Cassandra allows the data layer to be reactive, supporting an API that looks like this. I have two repositories here: one facilitating the storage of the Hotel entity above, and another maintaining duplicated data which makes searching for a Hotel entity by its first letter a little more efficient:

public interface HotelRepository  {
    Mono<Hotel> save(Hotel hotel);
    Mono<Hotel> update(Hotel hotel);
    Mono<Hotel> findOne(UUID hotelId);
    Mono<Boolean> delete(UUID hotelId);
    Flux<Hotel> findByState(String state);
}

public interface HotelByLetterRepository {
    Flux<HotelByLetter> findByFirstLetter(String letter);
    Mono<HotelByLetter> save(HotelByLetter hotelByLetter);
    Mono<Boolean> delete(HotelByLetterKey hotelByLetterKey);
}


The operations which return one instance of an entity now return a Mono type and operations which return more than one element return a Flux type.


Given this, let me touch on one quick use of the reactive types: when a Hotel is updated, I have to delete the duplicated data maintained via the HotelByLetter repository and recreate it. This can be accomplished along the following lines, using the excellent operators provided by the Flux and Mono types (the trailing next() converts the Flux returned by Mono's flatMap in this version of Reactor back into a Mono):

public Mono<Hotel> update(Hotel hotel) {
    return this.hotelRepository.findOne(hotel.getId())
            .flatMap(existingHotel ->
                    this.hotelByLetterRepository.delete(new HotelByLetter(existingHotel).getHotelByLetterKey())
                            .then(this.hotelByLetterRepository.save(new HotelByLetter(hotel)))
                            .then(this.hotelRepository.update(hotel))).next();
}


Web Layer

Now to the focus of the article - the annotation-based reactive programming model in the web layer!

The @Controller and @RestController annotations have been the workhorses of Spring MVC's REST endpoint support for years now; traditionally they have enabled taking in and returning Java POJOs. In the reactive model these controllers have been tweaked to take in and return the reactive types - Mono and Flux in my examples - but additionally also the Rx-Java 1/2 and Reactive Streams types.

Given this, my controller in almost its entirety looks like this:

@RestController
@RequestMapping("/hotels")
public class HotelController {

    ....

    @GetMapping(path = "/{id}")
    public Mono<Hotel> get(@PathVariable("id") UUID uuid) {
        return this.hotelService.findOne(uuid);
    }

    @PostMapping
    public Mono<ResponseEntity<Hotel>> save(@RequestBody Hotel hotel) {
        return this.hotelService.save(hotel)
                .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED));
    }

    @PutMapping
    public Mono<ResponseEntity<Hotel>> update(@RequestBody Hotel hotel) {
        return this.hotelService.update(hotel)
                .map(savedHotel -> new ResponseEntity<>(savedHotel, HttpStatus.CREATED))
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @DeleteMapping(path = "/{id}")
    public Mono<ResponseEntity<String>> delete(
            @PathVariable("id") UUID uuid) {
        return this.hotelService.delete(uuid).map((Boolean status) ->
                new ResponseEntity<>("Deleted", HttpStatus.ACCEPTED));
    }

    @GetMapping(path = "/startingwith/{letter}")
    public Flux<HotelByLetter> findHotelsWithLetter(
            @PathVariable("letter") String letter) {
        return this.hotelService.findHotelsStartingWith(letter);
    }

    @GetMapping(path = "/fromstate/{state}")
    public Flux<Hotel> findHotelsInState(
            @PathVariable("state") String state) {
        return this.hotelService.findHotelsInState(state);
    }
}

The traditional @RequestMapping, @GetMapping and @PostMapping annotations are unchanged; what is different is the return types - for instances where at most one result is expected I now return a Mono type, and where a list would have been returned before, a Flux type is now returned.

With the use of the reactive support in Spring Data Cassandra, the entire path from the web layer to the services and back is reactive and, specifically for the focus of this article, eminently readable and intuitive.
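For reference, the service interface sitting between the controller and the repositories mirrors these reactive signatures; a sketch reconstructed from the calls made in the controller above:

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.UUID;

public interface HotelService {
    Mono<Hotel> findOne(UUID uuid);
    Mono<Hotel> save(Hotel hotel);
    Mono<Hotel> update(Hotel hotel);
    Mono<Boolean> delete(UUID uuid);
    Flux<HotelByLetter> findHotelsStartingWith(String letter);
    Flux<Hotel> findHotelsInState(String state);
}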


It may be easier to simply try out the code behind this post which I have available in my github repo here.

Tuesday, February 28, 2017

Using UAA OAuth2 authorization server - client and resource

In a previous post I went over how to bring up an OAuth2 authorization server using the Cloud Foundry UAA project and how to populate it with some of the actors involved in an OAuth2 Authorization Code flow.


I have found that this article at the Digital Ocean site does a great job of describing the OAuth2 Authorization Code flow, so instead of rehashing what is involved in this flow I will directly jump into implementing it using Spring Boot/Spring Security.

The following diagram inspired by the one here shows a high level flow in an Authorization Code grant type:




I will have two applications - a resource server exposing some resources of a user, and a client application that wants to access those resources on behalf of a user. The Authorization server itself can be brought up as described in the previous blog post.

The rest of the post can be more easily followed along with the code available in my github repo here

Authorization Server

The Cloud Foundry UAA server can be easily brought up using the steps described in my previous blog post. Once it is up the following uaac commands can be used for populating the different credentials required to run the sample.

These scripts will create a client credential for the client app and add a user called "user1" with the "resource.read" and "resource.write" scopes.

# Login as a canned client
uaac token client get admin -s adminsecret

# Add a client credential with client_id of client1 and client_secret of client1
uaac client add client1 \
   --name client1 \
   --scope resource.read,resource.write \
   -s client1 \
   --authorized_grant_types authorization_code,refresh_token,client_credentials \
   --authorities uaa.resource


# Another client credential resource1/resource1
uaac client add resource1 \
  --name resource1 \
  -s resource1 \
  --authorized_grant_types client_credentials \
  --authorities uaa.resource


# Add a user called user1/user1
uaac user add user1 -p user1 --emails user1@user1.com


# Add two scopes resource.read, resource.write
uaac group add resource.read
uaac group add resource.write

# Assign user1 both resource.read, resource.write scopes..
uaac member add resource.read user1
uaac member add resource.write user1


Resource Server

The resource server exposes a few endpoints, expressed using Spring MVC and secured using Spring Security, the following way:

@RestController
public class GreetingsController {
    @PreAuthorize("#oauth2.hasScope('resource.read')")
    @RequestMapping(method = RequestMethod.GET, value = "/secured/read")
    @ResponseBody
    public String read(Authentication authentication) {
        return String.format("Read Called: Hello %s", authentication.getCredentials());
    }

    @PreAuthorize("#oauth2.hasScope('resource.write')")
    @RequestMapping(method = RequestMethod.GET, value = "/secured/write")
    @ResponseBody
    public String write(Authentication authentication) {
        return String.format("Write Called: Hello %s", authentication.getCredentials());
    }
}

There are two endpoint URIs being exposed - "/secured/read", authorized for the scope "resource.read", and "/secured/write", authorized for the scope "resource.write".

The configuration which secures these endpoints and marks the application as a resource server is the following:

@Configuration
@EnableResourceServer
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true, prePostEnabled = true)
public class ResourceServerConfiguration extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
        resources.resourceId("resource");
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http
                .antMatcher("/secured/**")
                .authorizeRequests()
                .anyRequest().authenticated();
    }
}

This configuration along with properties describing how the token is to be validated is all that is required to get the resource server running.
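As a sketch of those properties - assuming the UAA from the previous post running at its default local address, and the standard Spring Boot 1.x OAuth2 property names - the resource server can point at UAA's token verification key like this:

# assumption: UAA running locally at its default address
security.oauth2.resource.jwt.key-uri=http://localhost:8080/uaa/token_key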


Client

The client configuration for OAuth2 using Spring Security OAuth2 is also fairly simple - the @EnableOAuth2Sso annotation pulls in all the required configuration to wire up the Spring Security filters for OAuth2 flows:

@EnableOAuth2Sso
@Configuration
public class OAuth2SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(WebSecurity web) throws Exception {
        super.configure(web);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable();

        //@formatter:off
        http.authorizeRequests()
                .antMatchers("/secured/**")
                    .authenticated()
                .antMatchers("/")
                    .permitAll()
                .anyRequest()
                    .authenticated();

        //@formatter:on

    }

}
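The annotation relies on client registration properties pointing at the authorization server's endpoints; a sketch of what these could look like for the client1 credential registered with UAA earlier (the values are assumptions matching that setup, property names per Spring Boot 1.x):

security.oauth2.client.client-id=client1
security.oauth2.client.client-secret=client1
security.oauth2.client.access-token-uri=http://localhost:8080/uaa/oauth/token
security.oauth2.client.user-authorization-uri=http://localhost:8080/uaa/oauth/authorize
security.oauth2.client.scope=resource.read,resource.write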

To call a downstream system, the client has to pass on the OAuth token as a header in the downstream calls. This is done by hooking in a specialized RestTemplate, the OAuth2RestTemplate, which can grab the access token from the context and pass it downstream; once it is hooked up, a secure downstream call looks like this:

public class DownstreamServiceHandler {

    private final OAuth2RestTemplate oAuth2RestTemplate;
    private final String resourceUrl;


    public DownstreamServiceHandler(OAuth2RestTemplate oAuth2RestTemplate, String resourceUrl) {
        this.oAuth2RestTemplate = oAuth2RestTemplate;
        this.resourceUrl = resourceUrl;
    }


    public String callRead() {
        return callDownstream(String.format("%s/secured/read", resourceUrl));
    }

    public String callWrite() {
        return callDownstream(String.format("%s/secured/write", resourceUrl));
    }

    public String callInvalidScope() {
        return callDownstream(String.format("%s/secured/invalid", resourceUrl));
    }

    private String callDownstream(String uri) {
        try {
            ResponseEntity<String> responseEntity = this.oAuth2RestTemplate.getForEntity(uri, String.class);
            return responseEntity.getBody();
        } catch(HttpStatusCodeException statusCodeException) {
            return statusCodeException.getResponseBodyAsString();
        }
    }
}
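The OAuth2RestTemplate itself can be created from the beans that @EnableOAuth2Sso makes available; a minimal sketch of such wiring (the configuration class name is illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@Configuration
public class RestTemplateConfig {

    // combines the client registration details with the per-request client context
    @Bean
    public OAuth2RestTemplate oAuth2RestTemplate(OAuth2ProtectedResourceDetails details,
                                                 OAuth2ClientContext clientContext) {
        return new OAuth2RestTemplate(details, clientContext);
    }
}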


Demonstration

The Client and the resource server can be brought up using the instructions here. Once all the systems are up, accessing the client will present the user with a page which looks like this:


Accessing the secure page will result in a login page being presented by the authorization server:



The client is requesting the "resource.read" and "resource.write" scopes from the user, and the user is prompted to authorize these scopes:


Assuming that the user has authorized "resource.read" but not "resource.write", the token will be presented to the user:

At this point, if a downstream resource requiring the "resource.read" scope is requested, it should be retrieved:


And if a downstream resource is requested with a scope that the user has not authorized - "resource.write" in this instance:



Reference

  • Most of the code is based on the Cloud Foundry UAA application samples available here - https://github.com/pivotal-cf/identity-sample-apps
  • The code in the post is here: https://github.com/bijukunjummen/oauth-uaa-sample

Tuesday, February 14, 2017

Bootstrapping an OAuth2 Authorization server using UAA

A quick way to get a robust OAuth2 server running on your local machine is to use the excellent Cloud Foundry UAA project. UAA is used as the underlying OAuth2 authorization server in Cloud Foundry deployments and can scale massively, but is still small enough that it can be booted up on modest hardware.

I will cover using the UAA in two posts. In this post, I will go over how to get a local UAA server running and populate it with some of the actors involved in an OAuth2 authorization_code flow - clients and users - and in a follow-up post I will show how to use this authorization server with a sample client application and for securing a resource.

Starting up the UAA

The repository for the UAA project is at https://github.com/cloudfoundry/uaa


Downloading the project is simple, just clone this repo:
git clone https://github.com/cloudfoundry/uaa

If you have a local JDK available, start it up using:
./gradlew run

This version of UAA uses an in-memory database, so the test data generated will be lost on restart of the application.


Populate some data

An awesome way to interact with UAA is its companion CLI application called uaac, available here. Assuming that you have the uaac CLI downloaded and UAA started up at its default port of 8080, let us start by pointing uaac to the UAA application:

uaac target http://localhost:8080/uaa

and log into it using one of the canned client credentials (admin/adminsecret):

uaac token client get admin -s adminsecret

Now that a client has logged in, the token can be explored using:
uaac context

This would display the details of the token issued by UAA, along these lines:

[3]*[http://localhost:8080/uaa]

  [2]*[admin]
      client_id: admin
      access_token: eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJkOTliMjg1MC1iZDQ1LTRlOTctODIyZS03NGE2MmUwN2Y0YzUiLCJzdWIiOiJhZG1pbiIsImF1dGhvcml0aWVzIjpbImNsaWVudHMucmVhZCIsImNsaWVudHMuc2VjcmV0IiwiY2xpZW50cy53cml0ZSIsInVhYS5hZG1pbiIsImNsaWVudHMuYWRtaW4iLCJzY2ltLndyaXRlIiwic2NpbS5yZWFkIl0sInNjb3BlIjpbImNsaWVudHMucmVhZCIsImNsaWVudHMuc2VjcmV0IiwiY2xpZW50cy53cml0ZSIsInVhYS5hZG1pbiIsImNsaWVudHMuYWRtaW4iLCJzY2ltLndyaXRlIiwic2NpbS5yZWFkIl0sImNsaWVudF9pZCI6ImFkbWluIiwiY2lkIjoiYWRtaW4iLCJhenAiOiJhZG1pbiIsImdyYW50X3R5cGUiOiJjbGllbnRfY3JlZGVudGlhbHMiLCJyZXZfc2lnIjoiZTc4YjAyMTMiLCJpYXQiOjE0ODcwMzk3NzYsImV4cCI6MTQ4NzA4Mjk3NiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3VhYS9vYXV0aC90b2tlbiIsInppZCI6InVhYSIsImF1ZCI6WyJhZG1pbiIsImNsaWVudHMiLCJ1YWEiLCJzY2ltIl19.B-RmeIvYttxJOMr_CX1Jsinsr6G_e8dVU-Fv-3Qq1ow
      token_type: bearer
      expires_in: 43199
      scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
      jti: d99b2850-bd45-4e97-822e-74a62e07f4c5

To see a more readable, decoded form of the token, just run:
uaac token decode
which should display a decoded form of the token:
jti: d99b2850-bd45-4e97-822e-74a62e07f4c5
  sub: admin
  authorities: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
  scope: clients.read clients.secret clients.write uaa.admin clients.admin scim.write scim.read
  client_id: admin
  cid: admin
  azp: admin
  grant_type: client_credentials
  rev_sig: e78b0213
  iat: 1487039776
  exp: 1487082976
  iss: http://localhost:8080/uaa/oauth/token
  zid: uaa
  aud: admin clients uaa scim


Now, to create a brand new client (called client1), which I will be using in a follow-on post:

uaac client add client1  \
  --name client1 --scope resource.read,resource.write \
  --autoapprove true  \
  -s client1 \
  --authorized_grant_types authorization_code,refresh_token,client_credentials \
  --authorities uaa.resource

This client is going to request the resource.read and resource.write scopes from users and will participate in authorization_code grant-type OAuth2 flows.


Creating a resource owner or a user of the system:

uaac user add user1 -p user1 --emails user1@user1.com

and assigning this user a resource.read scope:

uaac group add resource.read
uaac member add resource.read user1


Exercise a test flow

Now that we have a client and a resource owner, let us exercise a quick authorization_code flow. uaac provides a handy command line option that sets up the necessary redirect hooks to capture the auth code and transforms the auth code into an access token.

uaac token authcode get -c client1 -s client1 --no-cf

Invoking the above command should open up a browser window and prompt for user creds:



Logging in with the user1/user1 user that was created previously should result in a message on the command line that the token has been successfully fetched; this can be explored once more using the following command:

uaac context

with the output showing the details of the logged-in user:
jti: c8ddfdfc-9317-4f16-b3a9-808efa76684b
  nonce: 43c8d9f7d6264fb347ede40c1b7b44ae
  sub: 7fdd9a7e-5b92-42e7-ae75-839e21b932e1
  scope: resource.read
  client_id: client1
  cid: client1
  azp: client1
  grant_type: authorization_code
  user_id: 7fdd9a7e-5b92-42e7-ae75-839e21b932e1
  origin: uaa
  user_name: user1
  email: user1@user1.com
  auth_time: 1487040497
  rev_sig: c107f5c0
  iat: 1487040497
  exp: 1487083697
  iss: http://localhost:8080/uaa/oauth/token
  zid: uaa
  aud: resource client1

This concludes the whirlwind tour of setting up a local UAA and adding a couple of the roles involved in an OAuth2 flow - a client and a user. I have not covered the OAuth2 flows themselves; the Digital Ocean intro to OAuth2 is a very good primer on the flows.

I will follow this post with a post on how this infrastructure can be used for securing a sample resource and demonstrate a flow using Spring Security and Spring Boot.

Saturday, January 28, 2017

Spring Data support for Cassandra 3

One of the items that caught my eye in the announcement of the new Spring Data release train named Ingalls was that Spring Data Cassandra finally supports Cassandra 3+. So I revisited one of my old samples and tried it with a newer version of Cassandra.


Installing Cassandra


The first step is to install a local version of Cassandra, and I continue to find the ccm tool outstanding for bringing up and tearing down a small cluster. Here is the command I run to bring up a 3-node Apache Cassandra 3.9 based cluster:

ccm create test -v 3.9 -n 3 -s --vnodes


Create Schemas



Connect to a node in the cluster and create a keyspace to hold the data:
ccm node1 cqlsh

CREATE KEYSPACE IF NOT EXISTS sample WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};

Next, I need to create the tables to hold the data. A general Cassandra recommendation is to model the tables based on query patterns - given this, let me first define a table to hold the basic "hotel" information:

CREATE TABLE IF NOT EXISTS  sample.hotels (
    id UUID,
    name varchar,
    address varchar,
    state varchar,
    zip varchar,
    primary key((id), name)
);


Assuming I have to support two query patterns - retrieval of hotels based on, say, the first letter, and retrieval of hotels by state - I have a "hotels_by_letter" denormalized table to support retrieval by first letter:

CREATE TABLE IF NOT EXISTS  sample.hotels_by_letter (
    first_letter varchar,
    hotel_name varchar,
    hotel_id UUID,
    address varchar,
    state varchar,
    zip varchar,
    primary key((first_letter), hotel_name, hotel_id)
);


And just for variety a "hotels_by_state" materialized view to support retrieval by state that the hotels are in:

CREATE MATERIALIZED VIEW sample.hotels_by_state AS
    SELECT id, name, address, state, zip FROM hotels
        WHERE state IS NOT NULL AND id IS NOT NULL AND name IS NOT NULL
    PRIMARY KEY ((state), name, id)
    WITH CLUSTERING ORDER BY (name DESC);


Coding Repositories


On the Java side, since I am persisting and querying a simple domain type called "Hotel", it looks like this:

@Table("hotels")
public class Hotel implements Serializable {
    @PrimaryKey
    private UUID id;
    private String name;
    private String address;
    private String state;
    private String zip;
    ...
}

Now, to be able to perform basic CRUD operations on this entity, all that is required is a repository interface, as shown in the following code:

import cass.domain.Hotel;
import org.springframework.data.repository.CrudRepository;

import java.util.UUID;

public interface HotelRepository extends CrudRepository<Hotel, UUID>, HotelRepositoryCustom {}

This repository additionally inherits from a HotelRepositoryCustom interface, which provides the custom finders - here, retrieval of hotels by state.
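The custom interface itself is not shown in the snippet above; it is a plain interface declaring the finder, along these lines (a sketch based on the implementation shown further below):

import java.util.List;

public interface HotelRepositoryCustom {
    List<Hotel> findByState(String state);
}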

Now, to persist a Hotel entity, all I have to do is call the repository method:

hotelRepository.save(hotel);

The data in the materialized view is automatically synchronized and maintained by Cassandra; however, the data in the "hotels_by_letter" table has to be managed through code, so I have another repository defined to maintain the data in this table:

public interface HotelByLetterRepository 
        extends CrudRepository<HotelByLetter, HotelByLetterKey>, HotelByLetterRepositoryCustom {}


The custom interface and its implementation facilitate searching this table with queries based on the first letter of the hotel name, and are implemented through the custom repository implementation feature of Spring Data Cassandra:

import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.stereotype.Repository;

import java.util.List;

@Repository
public class HotelRepositoryImpl implements HotelRepositoryCustom {

    private final CassandraTemplate cassandraTemplate;

    @Autowired
    public HotelRepositoryImpl(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    @Override
    public List<Hotel> findByState(String state) {
        Select select = QueryBuilder.select().from("hotels_by_state");
        select.where(QueryBuilder.eq("state", state));
        return this.cassandraTemplate.select(select, Hotel.class);
    }
}

@Repository
public class HotelByLetterRepositoryImpl implements HotelByLetterRepositoryCustom {
    private final CassandraTemplate cassandraTemplate;

    public HotelByLetterRepositoryImpl(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    @Override
    public List<HotelByLetter> findByFirstLetter(String letter) {
        Select select = QueryBuilder.select().from("hotels_by_letter");
        select.where(QueryBuilder.eq("first_letter", letter));
        return this.cassandraTemplate.select(select, HotelByLetter.class);
    }

}


Given these repository classes and the custom repositories that provide the query support, the rest of the code wires everything together, which Spring Boot's Cassandra auto-configuration facilitates.

That is essentially all there is to it - Spring Data Cassandra makes it ridiculously simple to interact with Cassandra 3+.

A complete working project is, I believe, a far better way to get familiar with this excellent library, and I have such a sample available in my github repo here - https://github.com/bijukunjummen/sample-boot-with-cassandra

Sunday, January 15, 2017

Gradle Plugins DSL and Spring-Boot Plugin

The Gradle Plugins DSL is a new Gradle feature which provides a very succinct way of adding a plugin to a Gradle based project. A good way to show the utility of this new mechanism is how it simplifies a sample Spring Boot based Gradle build file.

If I were to generate a sample Gradle based Spring Boot project from the excellent http://start.spring.io site, the snippet of the build file which adds in the Spring Boot Gradle plugin looks like this:

buildscript {
 ext {
  springBootVersion = '1.4.3.RELEASE'
 }
 repositories {
  mavenCentral()
 }
 dependencies {
  classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
 }
}

apply plugin: 'org.springframework.boot'

The new Gradle Plugins DSL simplifies this boilerplate drastically. An equivalent declaration using the new Plugins DSL is the following:

plugins {
  id "org.springframework.boot" version "1.4.3.RELEASE"
}

This IMHO reads far better, though it does require some level of mental parsing. The best way to understand this new syntax may be to know that it works in concert with the Gradle plugins portal, a centralized repository of plugins, to resolve the plugin-related dependencies. The page for the Spring Boot plugin is here - https://plugins.gradle.org/plugin/org.springframework.boot.