
Monitoring and Metrics for the Play Framework with Dropwizard Metrics

At some point in the development of an application, each of us reaches the point where we need more insight into what is happening inside it, or the ability to monitor it. For the Play Framework there is already a ready-made solution: the excellent open-source library Kamon, paired with the kamon-play module.


But today we are going to look at an alternative solution: integrating and using Dropwizard Metrics, formerly known as Codahale Metrics, with the Play Framework.



Integration


So I started looking for ready-made solutions that could help me integrate these two tools.


I found some incomplete solutions:



Unfortunately, the metrics-play module provides only basic functionality out of everything available in the Dropwizard Metrics ecosystem. This may be enough if you need simple metrics exposed through a REST API, but I had higher requirements, so I decided to extend this module by writing the following modules:



These are what we will discuss below.


Metrics Reporters Support in the Play Framework


Metrics provides powerful tools for monitoring the behavior of critical components in a production environment. It also provides a means of shipping measured data through reporters. Metrics Reporters are a great way to send data from the application itself to your preferred metrics storage and visualization backend.
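Under the hood, a reporter is simply an object attached to a MetricRegistry and polled on a schedule. As a rough sketch of what the library wires up for you, here is a console reporter started with the plain Dropwizard Metrics core API (the registry and the 10-second schedule are illustrative):

```scala
import java.util.concurrent.TimeUnit
import com.codahale.metrics.{ConsoleReporter, MetricRegistry}

// A registry holding the application's metrics.
val registry = new MetricRegistry()

// A reporter that periodically dumps all metrics from the registry to stdout.
val reporter = ConsoleReporter.forRegistry(registry)
  .convertRatesTo(TimeUnit.SECONDS)
  .convertDurationsTo(TimeUnit.MILLISECONDS)
  .build()

// Report every 10 seconds in the background.
reporter.start(10, TimeUnit.SECONDS)
```

The reporter modules discussed below do essentially this for you, driven by configuration instead of code.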


At the time of this writing, the supported reporters are:



Dropwizard Metrics and the community also provide other reporters, for example the Ganglia Reporter, CSV Reporter, InfluxDB Reporter, ElasticSearch Reporter, and others.


Adding factories to support reporters in the library is an easy task.


Metrics annotation support for the Play Framework via Guice AOP


By default, in order to use metrics, you need to call the MetricRegistry to create a metric, create a context, and manage it manually. For example:


    def doSomethingImportant() = {
      val timer = registry.timer(name(classOf[WebProxy], "get-requests"))
      val context = timer.time()
      try {
        // critical business logic
      } finally {
        context.stop()
      }
    }

To keep things DRY, the metrics-annotation-play module will create and properly invoke a Timer for @Timed , a Meter for @Metered , a Counter for @Counted , and a Gauge for @Gauge . @ExceptionMetered is also supported; it creates a Meter that measures the rate at which exceptions are thrown.


The previous example can be rewritten as follows:


    @Timed
    def doSomethingImportant = {
      // critical business logic
    }

or you can annotate the whole class, which will create metrics for all its declared methods:


    @Timed
    class SuperCriticalFunctionality {
      def doSomethingImportant = {
        // critical business logic
      }
    }

This functionality is only supported for classes instantiated by Guice, and the usual Guice AOP restrictions apply: method interception does not work for instances created with new, nor for final or private methods.


Usage example


Let's try to use the library in a real application and consider how everything works. The source code of the application can be found here .


I use the activator play-scala template with the sbt plugin. We need to add JCenter to the resolvers and the modules to the dependencies list:


    name := """play_metrics_example"""

    version := "1.0-SNAPSHOT"

    lazy val root = (project in file(".")).enablePlugins(PlayScala)

    scalaVersion := "2.11.8"

    resolvers += Resolver.jcenterRepo

    libraryDependencies ++= Seq(
      "de.khamrakulov.metrics-reporter-play" %% "reporter-core" % "1.0.0",
      "de.khamrakulov" %% "metrics-annotation-play" % "1.0.2",
      "org.scalatestplus.play" %% "scalatestplus-play" % "1.5.1" % Test
    )

As an example, I'm using the console reporter; let's add the configuration to application.conf :


    metrics {
      jvm = false
      logback = false
      reporters = [
        {
          type: "console"
          frequency: "10 seconds"
        }
      ]
    }

As you can see, I deactivated the jvm and logback metrics so that our own metrics don't get lost among them, and added a reporter that will print the metrics to stdout every 10 seconds.


Now we can start using annotations; I will decorate the index method of the HomeController controller:


    @Singleton
    class HomeController @Inject() extends Controller {

      @Counted(monotonic = true)
      @Timed
      @Metered
      def index = Action {
        Ok(views.html.index("Your new application is ready."))
      }
    }

In fact, you should not use all of these annotations at once, since @Timed already combines a Counter and a Meter , but I did so to demonstrate the possibilities.


After starting the application and requesting the index page, the reporter should print the metrics to stdout :


    -- Counters --------------------------------------------------------------------
    controllers.HomeController.index.current
                 count = 1

    -- Meters ----------------------------------------------------------------------
    controllers.HomeController.index.meter
                 count = 1
             mean rate = 0.25 events/second
         1-minute rate = 0.00 events/second
         5-minute rate = 0.00 events/second
        15-minute rate = 0.00 events/second

    -- Timers ----------------------------------------------------------------------
    controllers.HomeController.index.timer
                 count = 1
             mean rate = 0.25 calls/second
         1-minute rate = 0.00 calls/second
         5-minute rate = 0.00 calls/second
        15-minute rate = 0.00 calls/second
                   min = 14.59 milliseconds
                   max = 14.59 milliseconds
                  mean = 14.59 milliseconds
                stddev = 0.00 milliseconds
                median = 14.59 milliseconds
                  75% <= 14.59 milliseconds
                  95% <= 14.59 milliseconds
                  98% <= 14.59 milliseconds
                  99% <= 14.59 milliseconds
                99.9% <= 14.59 milliseconds

Of course, you can still view the metrics via the REST API; to do so, add the following to the routes file:


 GET /admin/metrics com.kenshoo.play.metrics.MetricsController.metrics 

What's next?


Automatic health check of the application (Health Checks)

Metrics also supports automatic health checks for the application. More information can be found in the official documentation.
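As a taste of what this looks like, here is a minimal sketch using the core Dropwizard Metrics health-check API; the DatabaseHealthCheck name and the canConnect probe are hypothetical placeholders for a real connectivity check:

```scala
import com.codahale.metrics.health.{HealthCheck, HealthCheckRegistry}

// Hypothetical check: reports healthy when the database is reachable.
class DatabaseHealthCheck extends HealthCheck {
  // Stand-in for a real connection probe (e.g. executing "SELECT 1").
  private def canConnect: Boolean = true

  override def check(): HealthCheck.Result =
    if (canConnect) HealthCheck.Result.healthy()
    else HealthCheck.Result.unhealthy("cannot connect to the database")
}

// Health checks live in their own registry, separate from metrics.
val healthChecks = new HealthCheckRegistry()
healthChecks.register("database", new DatabaseHealthCheck)
```

Calling healthChecks.runHealthChecks() then runs every registered check and returns a map of results.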


More reporters

Building a proper metrics environment requires support for more reporters. This should be another direction in the development of the library.


Proper Future Support

At the moment, in order to measure the execution time of a Future , you need to perform all the steps manually. Proper Future support would help in the asynchronous Play Framework environment and would be a good addition.
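Until such support exists, a small helper can do the manual bookkeeping. This is a sketch assuming Dropwizard Metrics on the classpath; the FutureMetrics object and the timed method are hypothetical names, not part of any of the libraries above:

```scala
import scala.concurrent.{ExecutionContext, Future}
import com.codahale.metrics.Timer

object FutureMetrics {
  // Starts a timer context when the Future is created and stops it
  // when the Future completes, whether it succeeds or fails.
  def timed[A](timer: Timer)(body: => Future[A])(implicit ec: ExecutionContext): Future[A] = {
    val context = timer.time()
    val future = body
    future.onComplete(_ => context.stop())
    future
  }
}
```

Usage would then look like FutureMetrics.timed(registry.timer("ws-call")) { ws.url(url).get() } , keeping the timing concern out of the business logic.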


HdrHistogram support

HdrHistogram provides an alternative high-quality reservoir implementation that can be used for a Histogram or a Timer .



Source: https://habr.com/ru/post/315554/

