Eclipse Vert.x metrics now with Micrometer.io

Vert.x has already been providing metrics for some time, through the vertx-dropwizard-metrics and vertx-hawkular-metrics modules. Both of them implement a service provider interface (SPI) to collect the Vert.x metrics and make them available to their respective backends.

A new module, vertx-micrometer-metrics, is now added to the family. It implements the same SPI, which means that it is able to provide the same metrics. Micrometer.io is a pretty new metrics library, quite comparable to Dropwizard Metrics in that it collects metrics in a local, in-memory registry and is able to store them in various backends such as Graphite or InfluxDB. It has several advantages, as we will see below.

Tell me more about Micrometer

Micrometer.io describes itself as a vendor-neutral application metrics facade. It provides a well-designed Java API to define gauges, counters, timers and distribution summaries.
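As a quick illustration, not specific to Vert.x, here is a minimal sketch of that API using Micrometer's in-memory SimpleMeterRegistry; the meter names and recorded values are made up for the example:

// Standalone Micrometer usage with an in-memory registry (no backend involved)
MeterRegistry registry = new SimpleMeterRegistry();

// A counter: a monotonically increasing value
Counter counter = Counter.builder("orders.processed")
  .description("Number of processed orders")
  .register(registry);
counter.increment();

// A distribution summary: tracks count, total and max of recorded amounts
DistributionSummary summary = DistributionSummary.builder("payload.size")
  .baseUnit("bytes")
  .register(registry);
summary.record(1024);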

Among the available backends, Micrometer natively supports Graphite, InfluxDB, JMX, Prometheus and several others. Prometheus is very popular in the Kubernetes and microservices ecosystems, so its support by Micrometer was a strong motivation for implementing it in Vert.x.

For the moment, our implementation in Vert.x supports Prometheus, InfluxDB and JMX. More backends should follow in the near future.

Dimensionality

Another interesting aspect of Micrometer is that it handles metrics dimensionality: metrics can be associated with a set of key/value pairs (sometimes referred to as tags, sometimes as labels). Every value brings a new dimension to the metric, so that in Prometheus or any other backend that supports dimensionality, we can query for datapoints of one or several dimensions, or query for datapoints aggregated over several dimensions.
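In Micrometer's Java API, these labels are simply tags passed when a meter is registered or looked up. A tiny sketch, with meter and tag names made up for the example:

// The same meter name with different tag values yields distinct timeseries
MeterRegistry registry = new SimpleMeterRegistry();
registry.counter("http.requests", "method", "GET", "status", "200").increment();
registry.counter("http.requests", "method", "POST", "status", "500").increment();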

Example: our metric vertx_http_server_connections accepts the labels local and remote, which are used to store the addresses of HTTP connections.

Prometheus is used in the following examples, but equivalent queries can be performed with InfluxDB.

In Prometheus, the query vertx_http_server_connections will return as many timeseries as there are combinations of local and remote values. Example:

vertx_http_server_connections{local="0.0.0.0:8080",remote="1.1.1.1"}
vertx_http_server_connections{local="0.0.0.0:8080",remote="2.2.2.2"}
vertx_http_server_connections{local="0.0.0.0:8080",remote="3.3.3.3"}

To query on a single dimension, we must provide the labels:

vertx_http_server_connections{local="0.0.0.0:8080",remote="1.1.1.1"}. It will return a single timeseries.

To get an aggregate, Prometheus (PromQL) provides several aggregation operators:

sum(vertx_http_server_connections) will return the sum across all dimensions.

So what are the Vert.x metrics?

People already familiar with the existing metrics modules (Dropwizard or Hawkular) will not be too disoriented. The metrics are roughly the same. The main difference is that where some metric names previously contained a variable part, such as vertx.eventbus.handlers.myaddress, here we take advantage of dimensionality and get vertx_eventbus_handlers{address="myaddress"} instead.

Some other metrics are no longer useful: for instance, Dropwizard's vertx.eventbus.messages.pending, vertx.eventbus.messages.pending-local and vertx.eventbus.messages.pending-remote become just vertx_eventbus_pending{side="local"} and vertx_eventbus_pending{side="remote"} with Micrometer. Their sum can easily be computed at query time, for example with sum(vertx_eventbus_pending) in PromQL.

The metrics provided by Vert.x are grouped into the following big families:

  • Net client: distribution summaries of bytes sent and received, number of connections, etc.
  • Net server: distribution summaries of bytes sent and received, number of connections, etc.
  • HTTP client: counter of requests, response times, etc.
  • HTTP server: counter of requests, processing times, etc.
  • Event bus: counter of handlers, messages sent and received, etc.
  • Pool: for worker pools and some datasource pools, queue size and waiting time, processing time, etc.
  • Verticles: number of verticles deployed.

The full list of collected metrics is available here.

Getting started

This section will guide you through a quick setup to run a Vert.x application with Micrometer. The code examples used here are taken from the micrometer-metrics-example in the vertx-examples repository, in Java, using Maven. But the same could be done with other Vert.x supported languages, as well as with Gradle instead of Maven.

Maven configuration

The configuration and the Maven imports will vary according to the backend storage that will be used. For Maven, the common part is always:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-micrometer-metrics</artifactId>
  <version>3.5.1</version>
</dependency>
  • Then, to report to InfluxDB:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-influx</artifactId>
  <version>1.0.0</version>
</dependency>
  • Or Prometheus:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-prometheus</artifactId>
  <version>1.0.0</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web</artifactId>
  <version>3.5.1</version>
</dependency>

Note that, since Prometheus pulls metrics from their source, they must be exposed on an HTTP endpoint. That is why vertx-web is imported here. It is not absolutely necessary (it is possible to get the metrics registry content and expose it in any other way), but it is probably the easiest way to do it; an alternative without vertx-web is sketched at the end of the next section.

  • Or JMX:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-jmx</artifactId>
  <version>1.0.0</version>
</dependency>

At the moment, it is not possible to report metrics to several backends at the same time. This might be implemented soon.

Setting up Vert.x options

A MicrometerMetricsOptions object must be created and passed to VertxOptions, with one backend configured (though having no backend is possible: you would get metrics sent to a default Micrometer registry, but without any persistent storage).

  • InfluxDB example:
// Default InfluxDB options will push metrics to localhost:8086, db "default"
MicrometerMetricsOptions options = new MicrometerMetricsOptions()
  .setInfluxDbOptions(new VertxInfluxDbOptions().setEnabled(true))
  .setEnabled(true);
Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(options));
// Then deploy verticles with this vertx instance
  • Prometheus example:
// Deploy with embedded server: prometheus metrics will be automatically exposed,
// independently from any other HTTP server defined
MicrometerMetricsOptions options = new MicrometerMetricsOptions()
  .setPrometheusOptions(new VertxPrometheusOptions()
    .setStartEmbeddedServer(true)
    .setEmbeddedServerOptions(new HttpServerOptions().setPort(8081))
    .setEnabled(true))
  .setEnabled(true);
Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(options));
// Then deploy verticles with this vertx instance
  • Or Prometheus with the /metrics endpoint bound to an existing HTTP server:
// Deploy without embedded server: we need to "manually" expose the prometheus metrics
MicrometerMetricsOptions options = new MicrometerMetricsOptions()
  .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true))
  .setEnabled(true);
Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(options));

Router router = Router.router(vertx);
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) BackendRegistries.getDefaultNow();
// Setup a route for metrics
router.route("/metrics").handler(ctx -> {
  String response = registry.scrape();
  ctx.response().end(response);
});
vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  • JMX example:
// Default JMX options will publish MBeans under domain "metrics"
MicrometerMetricsOptions options = new MicrometerMetricsOptions()
  .setJmxMetricsOptions(new VertxJmxMetricsOptions().setEnabled(true))
  .setEnabled(true);
Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(options));
// Then deploy verticles with this vertx instance
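As noted in the Maven configuration section, vertx-web is not mandatory for Prometheus: the scrape output can also be served from a plain Vert.x HTTP server. Here is a minimal sketch of that alternative, assuming a Vertx instance configured with the Prometheus backend as above (the port is arbitrary):

// Alternative without vertx-web: expose the Prometheus scrape output directly
PrometheusMeterRegistry registry = (PrometheusMeterRegistry) BackendRegistries.getDefaultNow();
vertx.createHttpServer()
  .requestHandler(req -> req.response()
    .putHeader("Content-Type", "text/plain")
    .end(registry.scrape()))
  .listen(8081);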

Setting up the backend server

  • InfluxDB, by default, is expected to run on localhost:8086 without authentication, with the database "default". This is configurable in VertxInfluxDbOptions (see the sketch after this list). If you don't have a running instance of InfluxDB, the quickest way to start one is certainly with Docker, just run:
docker run -p 8086:8086 influxdb
  • Prometheus needs some configuration since it pulls metrics from its sources. Once it is installed, configure the scrape endpoints in prometheus.yml:
- job_name: 'vertx-8081'
  static_configs:
    - targets: ['localhost:8081']

or, when using the /metrics endpoint bound to an existing HTTP server:

- job_name: 'vertx-8080'
  static_configs:
    - targets: ['localhost:8080']
  • For JMX, there is nothing special to configure.
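As an illustration of the InfluxDB options mentioned in the first item, here is a hedged sketch pointing the reporter at a non-default host and database; it assumes the setUri and setDb setters of VertxInfluxDbOptions, and the host and database names are placeholders:

// Sketch: customizing the InfluxDB backend (host and database names are placeholders)
MicrometerMetricsOptions options = new MicrometerMetricsOptions()
  .setInfluxDbOptions(new VertxInfluxDbOptions()
    .setEnabled(true)
    .setUri("http://influxdb.example.com:8086")
    .setDb("vertx_metrics"))
  .setEnabled(true);
Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(options));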

Collecting Vert.x metrics

From now on, all Vert.x metrics will be collected and sent to the configured backend. In our Vert.x example, we set up an HTTP server, which will produce HTTP server metrics:

Router router = Router.router(vertx);
router.get("/").handler(ctx -> {
  ctx.response().end("Hello Micrometer from HTTP!");
});
vertx.createHttpServer().requestHandler(router::accept).listen(8080);

And some event bus ping-pong:

// Producer side
vertx.setPeriodic(1000, x -> {
  vertx.eventBus().send("greeting", "Hello Micrometer from event bus!");
});
// Consumer side
vertx.eventBus().<String>consumer("greeting", message -> {
  String greeting = message.body();
  System.out.println("Received: " + greeting);
  message.reply("Hello back!");
});

To trigger some activity on the HTTP server, we can write a small bash script:

while true
do
  curl http://localhost:8080/
  sleep .8
done

Viewing the results

Grafana can be used to display the InfluxDB and Prometheus metrics. The vertx-examples repository contains two dashboards for that: one for InfluxDB and one for Prometheus.

HTTP server metrics

Event bus metrics

Going further

We've seen the basic setup. There are plenty of options available, detailed in the documentation: how to disable some metrics domains, how to filter or rearrange labels, how to export metrics snapshots to JSON objects, how to add JVM or processor instrumentation, etc.
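As one example among those options, JVM and processor instrumentation can be added by binding Micrometer's standard binders to the registry used by Vert.x. This is a hedged sketch of that approach, not the only way described in the documentation:

// Sketch: bind Micrometer's JVM and processor binders to the Vert.x registry
MeterRegistry registry = BackendRegistries.getDefaultNow();
new JvmMemoryMetrics().bindTo(registry);
new JvmGcMetrics().bindTo(registry);
new ProcessorMetrics().bindTo(registry);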

Before we finish, there is one important point that we can cover here: defining custom metrics. Because the module gives you access to its Micrometer registry, you can add your custom metrics there.

Getting the default registry is straightforward:

MeterRegistry registry = BackendRegistries.getDefaultNow();

Then you have full access to the Micrometer API.

For instance, here is how you can track the execution time of a piece of code that is regularly called:

MeterRegistry registry = BackendRegistries.getDefaultNow();
Timer timer = Timer
  .builder("my.timer")
  .description("Time tracker for my extremely sophisticated algorithm")
  .register(registry);

vertx.setPeriodic(1000, l -> {
  timer.record(() -> myExtremelySophisticatedAlgorithm());
});

Since it is using the same registry, there is no extra backend configuration to do.
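The same applies to other meter types. For instance, a gauge could track a fluctuating value such as the size of an in-memory queue; the names below are made up for the sketch:

// Sketch: a gauge tracking the size of an in-memory queue
Queue<String> jobs = new ConcurrentLinkedQueue<>();
Gauge.builder("jobs.queue.size", jobs, Queue::size)
  .description("Number of jobs waiting to be processed")
  .register(registry);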

What’s next?

The vertx-micrometer-metrics module will continue to be improved, and enhancements are already planned.

If you miss any feature, please ask for it on GitHub. Contributions and bug fixes are also welcome!

Now it is time to enter the Metrics.
