In the age of Serverless & Container architectures, there is once again chatter about Java being too fat (and dying). While I can understand the “too fat” observation, I will not put my money on the “Java is dying/dead” chatter. That obituary has been written multiple times, and the language lives on. It is true that Java was not born in the Container/Cloud era; it comes from a different age and time, but the language and its framework ecosystem have evolved. In the cloud-native microservices world, where horizontal scaling and fast startup times are expected, Java may (depending on the architecture) not be the fastest horse in town.
By the time you package a Spring service with the usual requirements (logging, JPA, monitoring, security, messaging, etc.), the final jar file can be quite large. Every dependency gets pulled in regardless of whether it is actually used at runtime.
New frameworks like Micronaut and Quarkus are trying to prepare Java for the new world of Serverless & Containers. They promise…
- Fast startup
- Low memory footprint
- Fast throughput
- and still provide the familiar annotation-based programming model that JEE/Spring developers are used to
Micronaut (the focus of this blog) achieves this by eliminating runtime reflection and by resolving dependency injection at compile time, bringing in only what is required and discarding the rest.
My initial observation from writing a simple service (one that calls an external service) as minimal Spring and Micronaut samples is that the final jar size and runtime heap size are a lot smaller with Micronaut. I have not compared startup and response times yet, but what I see so far is very encouraging.
Micronaut uses the annotation-based programming model that engineers are used to with Spring or JEE. It also provides many of the integrations required to build an enterprise app – database, messaging, monitoring, logging, etc.
This excerpt from the Micronaut documentation sums it up well for me:
with Micronaut your application startup time and memory consumption is not bound to the size of your codebase in the same way as a framework that uses reflection. Reflection-based IoC frameworks load and cache reflection data for every single field, method, and constructor in your code. Thus as your code grows in size so do your memory requirements, whilst with Micronaut this is not the case.
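To make that concrete, here is a minimal, hypothetical sketch of the annotation-based, compile-time DI model. The class names are illustrative and not from the sample repo; the point is that Micronaut's annotation processor generates the bean definitions for these classes at build time, so no classpath scanning or reflection-metadata caching happens at startup.

```java
import javax.inject.Singleton;

// Illustrative only -- these classes are not in the microhello repo.
@Singleton
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@Singleton
class GreetingHandler {
    private final GreetingService greetingService;

    // Constructor injection; the wiring is resolved by Micronaut's
    // annotation processor at compile time, not via runtime reflection.
    GreetingHandler(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    String welcome(String name) {
        return greetingService.greet(name);
    }
}
```

The same model carries through to the controller code shown later in this post.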
In this blog, we will…
- Clone the code from https://github.com/thomasma/microhello
- Deploy a simple Micronaut service (that calls an external service to pull in a random joke).
- The service will be deployed on a local minikube Kubernetes (k8s) cluster
- The service will also expose a /prometheus endpoint from which Prometheus can scrape metrics (a configuration sketch follows this list).
- We will use Grafana to view the metrics.
- Grafana & Prometheus are installed on my local Mac, not in the minikube k8s cluster. If you prefer the latter, that is left to you. Eventually, I plan to use AWS Managed Prometheus & Grafana, so I kept them outside for now (that is a blog for another day).
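For the /prometheus endpoint mentioned above, the service relies on Micronaut's Micrometer integration (the Micrometer Prometheus registry module). As a rough sketch only, a typical application.yml for this setup looks like the following; the repo's actual configuration may differ, and the application name below is an assumption.

```yaml
# Sketch of a typical Micronaut + Micrometer Prometheus setup;
# the sample repo's actual application.yml may differ.
micronaut:
  application:
    name: microhello          # assumed name, for illustration
  metrics:
    enabled: true
    export:
      prometheus:
        enabled: true
        step: PT1M            # how often metrics are published
        descriptions: true    # include metric descriptions in the scrape output

endpoints:
  prometheus:
    sensitive: false          # expose /prometheus without authentication
```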
Note: Eventually Java/the JVM will provide support for lighter profiles, and frameworks such as Spring will evolve to meet the Serverless & Container requirements of low memory and fast startup times. Until then, these newer frameworks will fill the gap and continue to evolve. Regardless, I am personally excited to see more non-Spring options. It is good for the ecosystem and creates more innovation (and some healthy competition).
Setup instructions
- Install (and start) Docker (for Mac in my case)
- Install minikube to set up a local k8s cluster – https://minikube.sigs.k8s.io/docs/start/
- Start minikube with minikube start
- Retrieve the service URL from the minikube cluster (you will need this to access and test-drive the app):
```bash
minikube service --url $SERVICE

# in our case myapp is the service
minikube service --url myapp
```
- Install Grafana locally in a docker container (note I am not installing it in the k8s cluster)
```bash
docker run -d -p 3000:3000 grafana/grafana
```
Application
Clone the app from the Git repo linked above (https://github.com/thomasma/microhello). Open it in your IDE of choice (VS Code in my case). Review the plugins section of pom.xml to see how the docker registry is configured; we will use that to push the container image to the registry.
Build the application: ./mvnw clean package
Push to the docker registry: ./mvnw deploy -Dpackaging=docker
In the app repo, review the k8s.yaml file that has the k8s deployment spec for our application.
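The actual spec lives in the repo's k8s.yaml; the sketch below is only an illustration of what such a spec typically contains. The image coordinates and labels are assumptions, 8080 is Micronaut's default HTTP port, and the NodePort Service matches the myapp name used with minikube service earlier.

```yaml
# Illustrative sketch only -- see k8s.yaml in the repo for the actual spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # assumed image coordinates; use whatever ./mvnw deploy -Dpackaging=docker pushed
          image: your-registry/microhello:latest
          ports:
            - containerPort: 8080   # Micronaut's default HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp                       # the name used with `minikube service --url myapp`
spec:
  type: NodePort                    # exposed on a node port (the 31099-style port seen below)
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
```

With the spec in place, the usual kubectl commands apply and tear it down: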
```bash
# to apply the changes for the first time or update desired future state
kubectl apply -f k8s.yaml

# to remove the application completely from the minikube k8s cluster
kubectl delete -f k8s.yaml

# always good to check if the pods & deployment have successfully started before accessing the app service
kubectl get deployments
kubectl get pods
```
A few pointers to access the app service and other dashboards
```bash
# This IP address will be different for you. See my note earlier on how to retrieve the service URL.
# This lists all the metrics that your Micronaut application makes available for Prometheus scraping.
http://192.168.64.3:31099/prometheus

# If you are running Prometheus locally, access it here
# (note the prometheus.yml file will need the actual IP address vs localhost).
http://localhost:9090/

# Access Grafana (if running locally).
http://localhost:3000/
```
Prometheus
The prometheus.yml file is in the Git repo but copied below. Please update the IP address and port to those of your service as exposed by minikube (in my case, 192.168.64.3:31099).
```yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. The default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['127.0.0.1:9090']

  - job_name: 'spring-actuator'
    metrics_path: '/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.64.3:31099']
```
Install Prometheus locally in a docker container (note I am not installing it in the k8s cluster) and point it to the prometheus.yml file above. Change the path in the command below to yours (only change /Users/mathew/temp/microhello/prometheus.yml).
```bash
docker run -d --name=prometheus -p 9090:9090 -v /Users/mathew/temp/microhello/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml
```
After this step, Prometheus will attempt to scrape the /prometheus endpoint of your Micronaut application.
Grafana
Inside the Grafana web console, add Prometheus as a data source and import the pre-built Grafana dashboard https://grafana.com/grafana/dashboards/4701.
Access the app service endpoint and you should see metrics being updated in Grafana and Prometheus. In the Prometheus console, search for the metric web_access_total. That is the custom metric I added (as a counter) in the app service; it is incremented each time the service is invoked.
To get some sizable metrics, let’s submit some load to our service using Apache Bench – https://httpd.apache.org/docs/2.4/programs/ab.html
```bash
ab -n 100 -c 10 http://192.168.64.3:31099/joke
```
If all your setup was good so far, you should see a Grafana dashboard such as…
And here is a custom dashboard that shows an out-of-the-box metric alongside the custom metric (web_access_total). ‘http_client_requests_seconds_count’ below shows the successful count as well as a few requests that timed out.
The Application Code
The actual application code for the service is written using a familiar programming style (familiar to anyone using Spring or JEE).
```java
package hello.world;

import javax.inject.Inject;

import io.micrometer.core.instrument.MeterRegistry;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.RxHttpClient;
import io.micronaut.http.client.annotation.Client;
import io.reactivex.Flowable;

@Controller("/joke")
public class JokeController {

    private MeterRegistry meterRegistry;

    public JokeController(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Client("http://api.icndb.com/jokes/random?firstName=Chuck&lastName=Doe")
    @Inject
    RxHttpClient httpClient;

    @Get(produces = MediaType.APPLICATION_JSON)
    public Flowable<FunQuote> index() {
        meterRegistry
            .counter("web.access", "controller", "index", "action", "hello")
            .increment();
        return httpClient.retrieve(HttpRequest.GET(""), FunQuote.class);
    }
}
```
In the above you can also see the custom metric I am adding: a simple counter that increments each time the method is accessed. This metric is exposed on the /prometheus endpoint.
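The controller binds the downstream response to a FunQuote POJO defined in the repo. For completeness, a hypothetical shape, based on the icndb random-joke response format, could look like the sketch below; the repo's actual class may differ.

```java
package hello.world;

import io.micronaut.core.annotation.Introspected;

// Hypothetical sketch of the response POJO -- the repo's actual FunQuote may differ.
// The icndb API wraps the joke in a "value" object alongside a "type" field.
@Introspected
public class FunQuote {

    private String type;
    private Value value;

    public String getType() { return type; }
    public void setType(String type) { this.type = type; }

    public Value getValue() { return value; }
    public void setValue(Value value) { this.value = value; }

    @Introspected
    public static class Value {
        private Integer id;
        private String joke;

        public Integer getId() { return id; }
        public void setId(Integer id) { this.id = id; }

        public String getJoke() { return joke; }
        public void setJoke(String joke) { this.joke = joke; }
    }
}
```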