
Transaction Auth Service Assignment

This is the README for the ChargePoint assessment. The application is written in Kotlin with Spring Boot, the build is managed by Gradle (Kotlin DSL), and the application can be deployed as a Docker container. It is deliberately simple, with the code for both the web API and the authentication service stored in a single source tree. In a real environment, I assume they would be in separate source trees, if not separate repos.

Spring profiles are used to separate the configurations, so that the application can run as a separate web API and auth service backend (though they can also run in the same process). All communication between the websocket API and the "auth service" is done over Kafka, even when everything is running in the same server process.
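As a sketch of how this kind of profile split can look (the profile names, class names, and bean types below are illustrative assumptions, not the actual ones in this codebase), a component can be activated only for the processes that need it:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Profile

// Hypothetical sketch: profile and bean names are made up for illustration.
// The auth consumer bean only exists in processes started with the
// "auth-service" profile (or a combined "all" profile for single-process runs),
// e.g. SPRING_PROFILES_ACTIVE=auth-service.
@Configuration
class AuthServiceConfig {

    @Bean
    @Profile("auth-service", "all")
    fun authRequestConsumer(): AuthRequestConsumer = AuthRequestConsumer()
}

// Placeholder for the Kafka-consuming auth component.
class AuthRequestConsumer
```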

Building / Running

To run the application in a docker compose setup:

./gradlew build -x test
docker compose up --build

This will start Kafka (and Zookeeper), the "authentication service," and the websocket API. The test UI is available at http://localhost:8080/


To use the test UI, press the Connect button to activate the websocket. Then a station UUID and driver identifier can be filled in and sent to the service via the websocket connection.

  • Any station UUID can be given, as long as it's a UUID.
  • There are two identifiers in the hardcoded whitelist by default. They are listed on the page itself.
  • As this is just a demo application, the UI is extremely bare-bones and nowhere close to what would be called "production ready" (and also not designed for a charging station, obviously).
  • The browser console logs connectivity status and incoming messages (which are also shown in the table).

Running Tests

./gradlew test

This will run all the tests, both unit tests and integration tests.


Notes and thoughts on development of the application:

  • I think it would make sense to return more information in the response (such as the driver and station IDs), given the fully async communication.
    • By only returning the status, some information is lost, and it can be hard to determine which response belongs to which request.
    • This could be alleviated by passing a request ID or something along (and it seems that's the case in the diagram in the PDF).
  • It would be useful to return more detailed error messages in responses instead of just "Invalid."
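As a sketch of what a richer, correlatable reply could look like (the message shapes and field names here are hypothetical, not the ones in this codebase):

```kotlin
import java.util.UUID

// Hypothetical message shapes for illustration only. The requestId is
// generated on the request and echoed back in the response, so the client
// can match an async reply to the request that produced it.
data class AuthRequest(
    val requestId: String = UUID.randomUUID().toString(),
    val stationUuid: String,
    val driverToken: String,
)

data class AuthResponse(
    val requestId: String,   // echoed back for correlation
    val stationUuid: String,
    val driverToken: String,
    val status: String,      // e.g. "Accepted" or "Invalid"
)

// Build a response that carries the request's identifying fields back.
fun respondTo(request: AuthRequest, status: String) = AuthResponse(
    requestId = request.requestId,
    stationUuid = request.stationUuid,
    driverToken = request.driverToken,
    status = status,
)
```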


  • I used Arrow for typed error handling in the validation. The Kotlin Result type is good, but has some limitations that make it impractical for more specific error handling in many cases.
  • Arrow has a very similar dev experience to ScalaZ for validation (notably, zipOrAccumulate).
  • I find typed error handling to be extremely useful, and probably one of the best things in functional programming. Kotlin inherits this slightly, and Arrow rounds it out.
  • The in-memory websocket broker is used because, as far as I know, there is no Kafka-backed WebSocket broker for Spring (or maybe there is now, and my info is out of date).
  • Weird issue: when writing the integration test for the websocket API, using a Jackson message converter caused responses to be lost in the aether and never received. Switching to the GSON message converter made it work immediately.
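A rough, dependency-free sketch of the accumulate-all-errors validation style mentioned above (Arrow's zipOrAccumulate does this properly with typed Either and NonEmptyList; everything below, including the validation rules, is made up for illustration):

```kotlin
// Plain-Kotlin sketch of error-accumulating validation in the style of
// Arrow's zipOrAccumulate. Instead of Either<NonEmptyList<E>, A>, we fake
// it with a value-or-errors pair; the point is that BOTH validations run
// and both error messages are collected, rather than failing fast.
data class Validated<out A>(val value: A?, val errors: List<String>)

fun <A> valid(a: A) = Validated(a, emptyList())
fun <A> invalid(error: String) = Validated<A>(null, listOf(error))

fun validateUuid(raw: String): Validated<String> =
    runCatching { java.util.UUID.fromString(raw) }
        .fold({ valid(raw) }, { invalid<String>("not a UUID: $raw") })

// The length rule here is an arbitrary example, not the real whitelist check.
fun validateDriverToken(raw: String): Validated<String> =
    if (raw.length in 20..80) valid(raw) else invalid<String>("bad token length")

// Combine two validations, accumulating errors from both sides.
fun <A, B, C> zipAccumulate(
    va: Validated<A>,
    vb: Validated<B>,
    f: (A, B) -> C,
): Validated<C> =
    if (va.errors.isEmpty() && vb.errors.isEmpty())
        valid(f(va.value!!, vb.value!!))
    else
        Validated(null, va.errors + vb.errors)
```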


  • Development started by running Kafka locally in a basic docker-compose file (which later became the final docker-compose).
  • The connection was initially a basic HTTP request-response, then quickly replaced with a working websocket connection and a stupidly simple UI I found on the internet as a test harness.
  • Eventually, I split the application configuration into three separate Spring profiles to allow running everything in one process, or separating the web API and auth consumer into multiple processes.
  • Kafka communication started with a basic regular Kafka template, but I quickly switched to the ReplyingKafkaTemplate to get fully async request-reply semantics.
  • The docker-compose runs two instances of the application: one for the websocket API, and another for the Kafka-backed "auth service" with the hardcoded whitelist.
    • In a real environment, the auth service is almost assuredly a completely separate application either in another repo, or in a separate source folder.
    • Even when running in a single process, the application communicates over Kafka rather than via in-memory method calls.
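A rough sketch of the ReplyingKafkaTemplate request-reply shape (the topic name, types, and surrounding wiring such as the producer factory and reply listener container are assumptions, not the actual ones in this project):

```kotlin
import org.apache.kafka.clients.producer.ProducerRecord
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate

// Illustrative sketch only. ReplyingKafkaTemplate adds correlation headers
// to the outgoing record and matches the reply from the reply topic, so the
// caller gets request-reply semantics over fully async Kafka messaging.
class AuthClient(
    private val template: ReplyingKafkaTemplate<String, String, String>,
) {
    fun authorize(requestJson: String): String {
        val record = ProducerRecord<String, String>("auth-requests", requestJson)
        // sendAndReceive returns a RequestReplyFuture (a CompletableFuture in
        // spring-kafka 3.x); blocking here is just to keep the sketch short.
        return template.sendAndReceive(record).get().value()
    }
}
```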

Software choices:

  • I think I would prefer to use Kotlin coroutines instead of CompletableFuture for the async communication, but since the Kafka lib for Spring is built on CompletableFuture, it was easier to make use of that API for this simple project.
  • I think Arrow is a good choice for validation. It doesn't necessarily need to be applied across the whole codebase, especially because its more powerful uses are somewhat esoteric.
  • Scaling: With the way the service is built, any number of instances of the web API should be able to sit behind a load balancer, and any number of auth service consumers should be able to connect to Kafka.
  • Having a specific annotation on a method for creating a Kafka consumer listener is easy, but I feel like the consumer method could get lost in a larger codebase. Maybe it's good (assuming it's possible) to wire up a consumer directly in Spring config?
    • I prefer explicit wiring over Spring's classpath scanning, especially for services with complexity.
    • Classpath scanning is easy, but too much magic can make it hard to tell what's going on and why beans are defined or not.
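A sketch of what that explicit wiring could look like instead of a @KafkaListener annotation (topic name and handler are assumptions; the container is registered as a plain bean so it starts and stops with the application context):

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.core.ConsumerFactory
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer
import org.springframework.kafka.listener.ContainerProperties
import org.springframework.kafka.listener.MessageListener

// Illustrative sketch: topic name and handler body are made up. Compared to
// @KafkaListener, the consumer is an explicit @Bean, so it is discoverable
// in the configuration class rather than found via classpath scanning.
@Configuration
class AuthConsumerConfig {

    @Bean
    fun authRequestContainer(
        consumerFactory: ConsumerFactory<String, String>,
    ): ConcurrentMessageListenerContainer<String, String> {
        val props = ContainerProperties("auth-requests")
        props.messageListener = MessageListener<String, String> { record ->
            println("auth request: ${record.value()}") // handle the request here
        }
        return ConcurrentMessageListenerContainer(consumerFactory, props)
    }
}
```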