Saturday, September 22, 2018

Knative serving - using Ambassador gateway

This is a continuation of my experimentation with Knative serving, this time around building a gateway on top of Knative serving applications. This builds on two of my previous posts - on using Knative to deploy a Spring Boot App and on making a service to service call in Knative.

Why a Gateway on top of a Knative application


To explain this let me touch on my previous blog post. Assuming that Knative serving is already available in a Kubernetes environment, the way to deploy an application is using a manifest which looks like this:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-boot-knative-service
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-boot-knative-app:0.0.3-SNAPSHOT
            env:
            - name: ASAMPLE_ENV
              value: "sample-env-val"


Now to invoke this application, I have to make the call via an ingress created by Knative serving, the URL of which can be obtained the following way in a minikube environment:

export GATEWAY_URL=$(echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))

The request now has to go through the ingress, and the ingress uses a Host HTTP header to route the request to the app. The Host header for the deployed service can be obtained using the following bash script:

export APP_DOMAIN=$(kubectl get services.serving.knative.dev sample-boot-knative-service  -o="jsonpath={.status.domain}")

and then a call via the Knative ingress gateway is made the following way, using curl:

curl -X "POST" "http://${GATEWAY_URL}/messages" \
     -H "Accept: application/json" \
     -H "Content-Type: application/json" \
     -H "Host: ${APP_DOMAIN}" \
     -d $'{
  "id": "1",
  "payload": "one",
  "delay": "300"
}'

or using httpie:

http http://${GATEWAY_URL}/messages Host:"${APP_DOMAIN}" id=1 payload=test delay=1

There are too many steps involved in making a call to the application via the Knative ingress:

  • retrieve the ingress gateway's IP and node port
  • retrieve the domain of the deployed Knative service
  • set that domain as the Host header on each request
  • make the call via the ingress gateway



My objective in this post is to simplify the user's experience in making a call to the app by using a Gateway like Ambassador.


Integrating Ambassador to Knative


There is nothing special about installing Ambassador into a Knative environment; the excellent instructions provided here worked cleanly in my minikube environment.

Now my objective with the gateway is summarized in this picture:


With Ambassador in place, all the user has to do is send a request to the Ambassador gateway, and it takes care of plugging in the Host header before making a request to the Knative ingress.

So how does this work? Fairly easily! Assuming Ambassador is in place, all it needs is a configuration which piggybacks on a Kubernetes Service the following way:

---
apiVersion: v1
kind: Service
metadata:
  name: sample-knative-app-gateway
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind:  Mapping
      name: sample-boot-knative-app
      prefix: /messages
      rewrite: /messages
      service: knative-ingressgateway.istio-system.svc.cluster.local 
      host_rewrite: sample-boot-knative-service.default.example.com
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador

Here I am providing the configuration via a Service annotation, intercepting any calls to the /messages URI, forwarding these requests to the Knative ingress gateway service (knative-ingressgateway.istio-system.svc.cluster.local) and adding a Host header of "sample-boot-knative-service.default.example.com".
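Applying this manifest then creates the gateway service; the file name below is an assumption, use whatever name the manifest was saved under:

kubectl apply -f sample-knative-app-gateway.yaml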


Now the interaction from a user's perspective is far simpler: all I have to do is get the URL for this new service and make the API call, in a minikube environment using the following bash script:

export AMB_URL=$(echo $(minikube ip):$(kubectl get svc sample-knative-app-gateway -n default -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))

http http://${AMB_URL}/messages id=1 payload=test delay=1


It may be easier to try this with real code, which is available in my github repo here.

Tuesday, September 4, 2018

Knative Serving - Service to Service call

In a previous post I had covered using Knative's Serving feature to run a sample Java Application. This post will go into the steps to deploy two applications, with one application calling the other.





Details of the Sample

The entire sample is available at my github repository - https://github.com/bijukunjummen/sleuth-webflux-sample.

The applications are Spring Boot based. The backend application exposes a "/messages" endpoint which, when invoked with a payload which looks like this:

{
    "delay": "0",
    "id": "123",
    "payload": "test",
    "throw_exception": "true"
}

responds after the specified delay. If the payload has the "throw_exception" flag set to true, it returns a 5XX after the specified delay.

The client application exposes a "/passthrough/messages" endpoint, which takes in the exact same payload and simply forwards it to the backend application. The URL of the backend app is passed to the client app as a "LOAD_TARGET_URL" environment property.



Deploying as a Knative Serving service

The knative subfolder of this project holds the manifests for deploying the Knative serving Service for the two applications. The backend application's Knative service manifest looks like this:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-backend-app
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-backend-app:0.0.1-SNAPSHOT
            env:
            - name: VERSION
              value: "0.0.2-SNAPSHOT"
            - name: SERVER_PORT
              value: "8080"

The client app has to point to the backend service, which is specified in its spec:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-client-app
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-client-app:0.0.2-SNAPSHOT
            env:
            - name: VERSION
              value: "0.0.1-SNAPSHOT"
            - name: LOAD_TARGET_URL
              value: http://sample-backend-app.default.svc.cluster.local
            - name: SERVER_PORT
              value: "8080"


The domain "sample-backend-app.default.svc.cluster.local", points to the dns name of the backend service created by the Knative serving service resource


Testing

It was easier for me to simply create a small video of how I tested this:



As in my previous post, the request to the application is via the Knative ingress gateway, the URL to which can be obtained the following way (for a minikube environment):

export GATEWAY_URL=$(echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))

And a sample request is made the following way; note that the routing in the gateway is via the Host header, in this instance "sample-client-app.default.example.com":

export CLIENT_DOMAIN=$(kubectl get services.serving.knative.dev sample-client-app  -o="jsonpath={.status.domain}")

http http://${GATEWAY_URL}/passthrough/messages Host:"${CLIENT_DOMAIN}" id=1 payload=test delay=100 throw_exception=false


Sunday, July 29, 2018

"Knative Serving" for Spring Boot Applications

I got a chance to try Knative's Serving feature to deploy a Spring Boot application and this post is simply documenting a sample and the approach I took.

I don't understand the internals of Knative enough yet to have an opinion on whether this approach is better than the deployment + services + ingress based approach.

One feature that is awesome is the auto-scaling in Knative Serving which, based on the load, increases or decreases the number of pods of the "Deployment" handling the requests.

Details of the Sample


My entire sample is available here and it is mostly developed based on the java sample available with the Knative Serving documentation. I used Knative with a minikube environment to try the sample.


Deploying to Kubernetes/Knative

Assuming that a Kubernetes environment with Istio and Knative has been set-up, the way to run the application is to deploy a Kubernetes manifest this way:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: sample-boot-knative-service
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: bijukunjummen/sample-boot-knative-app:0.0.1-SNAPSHOT

The image "bijukunjummen/sample-boot-knative-app:0.0.1-SNAPSHOT" is publicly available via Dockerhub, so this sample should work out of the box.

Applying this manifest:

kubectl apply -f service.yml

should register a Knative Serving Service resource with Kubernetes. The Knative Serving Service resource manages the lifecycle of other Knative resources (configuration, revision, route), the details of which can be viewed using the following command; if anything goes wrong, the details should show up in its output:

kubectl get services.serving.knative.dev sample-boot-knative-service -o yaml
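The underlying resources can also be listed individually; assuming the standard Knative CRD names, with the configuration and route named after the service, something along these lines should work:

kubectl get configurations.serving.knative.dev sample-boot-knative-service -o yaml
kubectl get revisions.serving.knative.dev
kubectl get routes.serving.knative.dev sample-boot-knative-service -o yaml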

Testing

Assuming that the Knative serving service is deployed cleanly, the first oddity to see is that no pods show up for the application!


A request to the app now goes via a routing layer managed by Knative; the gateway URL and the app's domain can be retrieved for a minikube environment using the following bash script:

export GATEWAY_URL=$(echo $(minikube ip):$(kubectl get svc knative-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[?(@.port==80)].nodePort}'))
export APP_DOMAIN=$(kubectl get services.serving.knative.dev sample-boot-knative-service  -o="jsonpath={.status.domain}")

and making a call to an endpoint of the app using curl:

curl -X "POST" "http://${GATEWAY_URL}/messages" \
     -H "Accept: application/json" \
     -H "Content-Type: application/json" \
     -H "Host: ${APP_DOMAIN}" \
     -d $'{
  "id": "1",
  "payload": "one",
  "delay": "300"
}'
or using httpie:

http http://${GATEWAY_URL}/messages Host:"${APP_DOMAIN}" id=1 payload=test delay=100

should magically, using the auto-scaler component, start spinning up pods to handle the request:


The first request took almost 17 seconds to complete, the time it takes to spin up a pod, but subsequent requests are quick.

Now, to show the real power of the auto-scaler, I ran a small load test with a 50 user load, and pods were scaled up and down as required.



Conclusion

I can see the promise of Knative: once an application is defined using a fairly simple manifest, Knative automatically manages the resources in a Kubernetes environment, letting a developer focus on the code and logic.

Thursday, July 19, 2018

Jib - Building docker image for a Spring Boot App

I was pleasantly surprised by how easy it was to create a docker image for a sample Spring Boot application using Jib.

Let me first contrast Jib with an approach that I was using before.

I was creating docker images using bmuschko's excellent gradle-docker plugin. Given access to a docker daemon and a gradle dsl based description of the Dockerfile or a straight Dockerfile, it would create the docker image using a gradle task. In my case, the task to create the docker image looks something like this:

task createDockerImage(type: DockerBuildImage) {
    inputDir = file('.')
    dockerFile = project.file('docker/Dockerfile')
    tags = ['sample-micrometer-app:' + project.version]
}

createDockerImage.dependsOn build

and my Dockerfile itself is derived off the "java:8" base image:

FROM java:8
...

The gradle-docker plugin made it simple to create a docker image right from gradle, with the catch that the plugin needs access to a docker daemon to create the image. Also, since the base "java:8" image is large, the final docker image turns out to be around 705 MB on my machine. Again, this is no fault of the gradle-docker plugin, but a result of my choice of base image.


Now with Jib, all I have to do is to add the plugin:

plugins {
    id 'com.google.cloud.tools.jib' version '0.9.6'
}

Configure it to give the image a name:

jib {
    to {
        image = "sample-micrometer-app:0.0.1-SNAPSHOT"
    }
}

And that is it. With a local docker daemon available, I can create my docker image using the following task:


./gradlew jibDockerBuild

Jib automatically selects a very lightweight base image - my new image is just about 150 MB in size.

If a docker registry is available, then the local docker daemon is not required; Jib can directly create and publish the image to the registry!
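For instance, assuming "jib.to.image" points to a registry-qualified image name and the registry credentials are configured, the following task should build and push the image without a local daemon (a sketch, not something covered by this sample):

./gradlew jib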

The Jib gradle plugin provides an interesting task, "jibExportDockerContext", to export out the Dockerfile; this way, if needed, a docker build can be run using this Dockerfile. For my purposes I wanted to see the contents of this file, and it looks something like this:

FROM gcr.io/distroless/java

COPY libs /app/libs/
COPY resources /app/resources/
COPY classes /app/classes/

ENTRYPOINT ["java","-cp","/app/libs/*:/app/resources/:/app/classes/","sample.meter.SampleServiceAppKt"]


All in all, a very smooth experience and Jib does live up to its goals. My sample project with jib integrated with a gradle build is available here.


Friday, June 22, 2018

Tracing a reactive flow - Using Spring Cloud Sleuth with Boot 2

Spring Cloud Sleuth, which adds Spring instrumentation support on top of OpenZipkin Brave, makes distributed tracing trivially simple for Spring Boot applications. This is a quick write-up on what it takes to add support for distributed tracing using this excellent library.

Consider two applications - a client application which uses an upstream service application, both using Spring WebFlux, the reactive web stack for Spring:


My objective is to ensure that flows from the user to the client application and on to the service application can be traced, and that latencies are cleanly recorded for requests.


The final topology that Spring Cloud Sleuth enables is the following:


The sampled trace information from the client and the service app is exported to Zipkin via a queuing mechanism like RabbitMQ.


So what are the changes required to the client and the service app? Like I said, it is trivially simple! The following libraries need to be pulled in, in my case via gradle:

compile("org.springframework.cloud:spring-cloud-starter-sleuth")
 compile("org.springframework.cloud:spring-cloud-starter-zipkin")
 compile("org.springframework.amqp:spring-rabbit")

The versions are not specified as they are expected to be pulled in via the Spring Cloud BOM, thanks to the Spring Gradle Dependency Management plugin:


ext {
    springCloudVersion = 'Finchley.RELEASE'
}

apply plugin: 'io.spring.dependency-management'

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
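The applications also need to know that traces should be sent to Zipkin over RabbitMQ and where RabbitMQ is running; a minimal sketch of the properties involved (the property names come from Spring Cloud Sleuth/Zipkin and Spring Boot, the host and port values are assumptions for a local set-up):

spring.zipkin.sender.type=rabbit
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672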

And that is pretty much it: any logs from the application should now start recording the trace and the spans. See how the trace id is carried forward in the following logs spanning two different services:

2018-06-22 04:06:28.579  INFO [sample-client-app,c3d507df405b8aaf,c3d507df405b8aaf,true] 9 --- [server-epoll-13] sample.load.PassThroughHandler           : handling message: Message(id=null, payload=Test, delay=1000)
2018-06-22 04:06:28.586  INFO [sample-service-app,c3d507df405b8aaf,829fde759da15e63,true] 8 --- [server-epoll-11] sample.load.MessageHandler               : Handling message: Message(id=5e7ba240-f97d-405a-9633-5540bbfe0df1, payload=Test, delay=1000)

Further, the Zipkin UI records the exported information and can visually show a sample trace the following way:



This sample is available in my github repository here - https://github.com/bijukunjummen/sleuth-webflux-sample and can be started up easily using docker-compose with all the dependencies wired in.

Tuesday, June 12, 2018

Zuul 2 - Sample filter

Zuul 2 has finally been open sourced. I first heard of Zuul 2 during the Spring One 2016 talk by Mikey Cohen, which is available here; it is good to finally be able to play with it.

To quickly touch on the purpose of a gateway like Zuul 2 - gateways provide an entry point to an ecosystem of microservices. Since all the customer requests are routed through the gateway, it can control aspects of routing and of the requests and responses flowing through it:

  • Routing based on different criteria - URI patterns, headers etc.
  • Monitoring service health
  • Load balancing and throttling requests to origin servers
  • Security
  • Canary testing


My objective in this post is simple - to write a Zuul2 filter that can remove a path prefix and send a request to a downstream service and back.

Zuul 2 filters are the mechanism by which Zuul is customized. Say a client sends a request to a /passthrough/someapi URI; I then want the Zuul 2 layer to forward the request to a downstream service using the /someapi URI. Zuul 2 filters are typically packaged up as groovy files and are dynamically loaded (and potentially refreshed) and applied. My sample here will be a little different though: my filters are coded in Java and I had to bypass the loading mechanism built into Zuul.

It may be easier simply to follow the code, which is available in my github repository here - https://github.com/bijukunjummen/boot2-load-demo/tree/master/applications/zuul2-sample; it is packaged in with a set of samples which provide similar functionality. The code is based on the Zuul 2 samples available here.



This is how my filter looks:

import com.netflix.zuul.context.SessionContext;
import com.netflix.zuul.filters.http.HttpInboundSyncFilter;
import com.netflix.zuul.message.http.HttpRequestMessage;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StripPrefixFilter extends HttpInboundSyncFilter {
    private final List<String> prefixPatterns;

    public StripPrefixFilter(List<String> prefixPatterns) {
        this.prefixPatterns = prefixPatterns;
    }

    @Override
    public HttpRequestMessage apply(HttpRequestMessage input) {
        SessionContext context = input.getContext();
        String path = input.getPath();
        String[] parts = path.split("/");
        if (parts.length > 0) {
            String targetPath = Arrays.stream(parts)
                    .skip(1).collect(Collectors.joining("/"));
            context.set("overrideURI", targetPath);
        }
        return input;
    }

    @Override
    public int filterOrder() {
        return 501;
    }

    @Override
    public boolean shouldFilter(HttpRequestMessage msg) {
        for (String target: prefixPatterns) {
            if (msg.getPath().matches(target)) {
                return true;
            }
        }
        return false;
    }
}


It extends "HttpInboundSyncFilter", these are filters which handle the request inbound to origin servers. As you can imagine there is a "HttpOutboundSyncFilter" which intercept calls outbound from the origin servers. There is a "HttpInboundFilter" and "HttpOutboundFilter" counterpart to these "sync" filters, they return RxJava Observable type.

There is a magic string "overrideURI" in my filter implementation. If you are curious about how I found that to be the override URI, it was by scanning through the Zuul 2 codebase. There are likely a lot of filters used internally at Netflix which haven't been released for general consumption yet.

With this filter in place, I have bypassed the dynamic groovy script loading feature of Zuul 2 by explicitly registering my custom filter using this component:

import com.netflix.zuul.filters.FilterRegistry;
import com.netflix.zuul.filters.ZuulFilter;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class FiltersRegisteringService {

    private final List<ZuulFilter> filters;
    private final FilterRegistry filterRegistry;

    @Inject
    public FiltersRegisteringService(FilterRegistry filterRegistry, Set<ZuulFilter> filters) {
        this.filters = new ArrayList<>(filters);
        this.filterRegistry = filterRegistry;
    }

    public List<ZuulFilter> getFilters() {
        return filters;
    }

    @PostConstruct
    public void initialize() {
        for (ZuulFilter filter: filters) {
            this.filterRegistry.put(filter.filterName(), filter);
        }
    }
}

I had to make a few more minor tweaks to get this entire set-up with my custom filter bootstrapped; these can be followed in the github repo.
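The wiring boils down to contributing the filter to the injected Set<ZuulFilter> and making sure the registering service is created at start-up. A hypothetical sketch of the kind of Guice module involved (not the repo's actual module; the prefix pattern is an assumption):

import com.google.inject.AbstractModule;
import com.google.inject.multibindings.Multibinder;
import com.netflix.zuul.filters.ZuulFilter;

import java.util.Collections;

public class SampleFiltersModule extends AbstractModule {
    @Override
    protected void configure() {
        // Contribute the custom filter to the Set<ZuulFilter> injected into FiltersRegisteringService
        Multibinder<ZuulFilter> filterBinder = Multibinder.newSetBinder(binder(), ZuulFilter.class);
        filterBinder.addBinding()
                .toInstance(new StripPrefixFilter(Collections.singletonList("/passthrough/.*")));

        // Create the registering service eagerly; the lifecycle support in Zuul's bootstrap is
        // assumed to invoke its @PostConstruct method, which puts the filters into the FilterRegistry
        bind(FiltersRegisteringService.class).asEagerSingleton();
    }
}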


Once the Zuul 2 sample with this custom filter is started up, the behavior is that any request to /passthrough/messages is routed to a downstream system after the prefix "/passthrough" is stripped out. The instructions to start up the Zuul 2 app are part of the README of the repo.

This concludes a quick intro to writing a custom Zuul 2 filter; I hope this gives just enough of a feel to evaluate Zuul 2.

Wednesday, May 23, 2018

TestContainers and Spring Boot

TestContainers is just awesome! It provides a very convenient way to start up and CLEANLY tear down docker containers in JUnit tests. This feature is very useful for integration testing of applications against real databases and any other resource for which a docker image is available.

My objective is to demonstrate a sample test for a JPA based Spring Boot application using TestContainers. The sample is based on an example at the TestContainers github repo.

Sample App


The Spring Boot based application is straightforward - It is a Spring Data JPA based application with the web layer written using Spring Web Flux. The entire sample is available at my github repo and it may be easier to just follow the code directly there.

The City entity being persisted looks like this (using Kotlin):

import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.Id

@Entity
data class City(
        @Id @GeneratedValue var id: Long? = null,
        val name: String,
        val country: String,
        val pop: Long
) {
    constructor() : this(id = null, name = "", country = "", pop = 0L)
}

All that is needed to provide a repository to manage this entity is the following interface, thanks to the excellent Spring Data JPA project:

import org.springframework.data.jpa.repository.JpaRepository
import samples.geo.domain.City

interface CityRepo: JpaRepository<City, Long>


I will not cover the web layer here as it is not relevant to the discussion.


Testing the Repository

Spring Boot provides a feature called slice tests, which are a neat way to test different horizontal slices of the application. A test for the CityRepo repository looks like this:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.junit4.SpringRunner;
import samples.geo.domain.City;
import samples.geo.repo.CityRepo;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@DataJpaTest
public class CitiesWithEmbeddedDbTest {

    @Autowired
    private CityRepo cityRepo;

    @Test
    public void testWithDb() {
        City city1 = cityRepo.save(new City(null, "city1", "USA", 20000L));
        City city2 = cityRepo.save(new City(null, "city2", "USA", 40000L));

        assertThat(city1)
                .matches(c -> c.getId() != null && c.getName() == "city1" && c.getPop() == 20000L);

        assertThat(city2)
                .matches(c -> c.getId() != null && c.getName() == "city2" && c.getPop() == 40000L);

        assertThat(cityRepo.findAll()).containsExactly(city1, city2);
    }

}

The "@DataJpaTest" annotation starts up an embedded h2 databases, configures JPA and loads up any Spring Data JPA repositories(CityRepo in this instance).

This kind of a test works well, considering that JPA provides the database abstraction and, if JPA is used correctly, the code should be portable across any supported database. However, assuming that this application is expected to run against PostgreSQL in production, ideally there would be some level of integration testing done against that database, which is where TestContainers fits in. It provides a way to boot up PostgreSQL as a docker container.

TestContainers

The same repository test using TestContainers looks like this:

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.util.TestPropertyValues;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
import org.testcontainers.containers.PostgreSQLContainer;
import samples.geo.domain.City;
import samples.geo.repo.CityRepo;

import java.time.Duration;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@DataJpaTest
@ContextConfiguration(initializers = {CitiesWithPostgresContainerTest.Initializer.class})
public class CitiesWithPostgresContainerTest {

    @ClassRule
    public static PostgreSQLContainer postgreSQLContainer =
            (PostgreSQLContainer) new PostgreSQLContainer("postgres:10.4")
                    .withDatabaseName("sampledb")
                    .withUsername("sampleuser")
                    .withPassword("samplepwd")
                    .withStartupTimeout(Duration.ofSeconds(600));

    @Autowired
    private CityRepo cityRepo;

    @Test
    public void testWithDb() {
        City city1 = cityRepo.save(new City(null, "city1", "USA", 20000L));
        City city2 = cityRepo.save(new City(null, "city2", "USA", 40000L));

        assertThat(city1)
                .matches(c -> c.getId() != null && c.getName() == "city1" && c.getPop() == 20000L);

        assertThat(city2)
                .matches(c -> c.getId() != null && c.getName() == "city2" && c.getPop() == 40000L);

        assertThat(cityRepo.findAll()).containsExactly(city1, city2);
    }

    static class Initializer
            implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertyValues.of(
                    "spring.datasource.url=" + postgreSQLContainer.getJdbcUrl(),
                    "spring.datasource.username=" + postgreSQLContainer.getUsername(),
                    "spring.datasource.password=" + postgreSQLContainer.getPassword()
            ).applyTo(configurableApplicationContext.getEnvironment());
        }
    }
}

The core of the code looks the same as the previous test, but the repository is being tested against a real PostgreSQL database here. To go into a little more detail:

A PostgreSQL container is started up using a JUnit class rule which gets triggered before any of the tests are run. The PostgreSQL support of TestContainers is pulled in using a gradle dependency of the following type:

    testCompile("org.testcontainers:postgresql:1.7.3")

The class rule starts up a PostgreSQL docker container (postgres:10.4) and configures a database and credentials for it. Now, from Spring Boot's perspective, these details need to be passed on to the application as properties BEFORE Spring starts creating a test context for the test to run in, and this is done for the test using an ApplicationContextInitializer; this is invoked by Spring very early in the lifecycle of a Spring context.

The custom ApplicationContextInitializer, which sets the database URL and user credentials, is hooked up to the test using this code:

...
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;
...

@RunWith(SpringRunner.class)
@DataJpaTest
@ContextConfiguration(initializers = {CitiesWithPostgresContainerTest.Initializer.class})
public class CitiesWithPostgresContainerTest {
...

With this boilerplate set-up in place, TestContainers and the Spring Boot slice test take over the running of the test. More importantly, TestContainers also takes care of tear-down; the JUnit class rule ensures that once the test is complete the containers are stopped and removed.

Conclusion

This was a whirlwind tour of TestContainers. There is far more to TestContainers than what I have covered here, but I hope this provides a taste of what is feasible using this excellent library and how to configure it with Spring Boot. This sample is available at my github repo.