
Monitoring

Jadice flow provides application metrics which can be observed.

Configuration

To enable metrics, use the appropriate flag in the Helm charts or the application.yaml:

# Metrics
management:
  metrics:
    enable:
      all: true
    export.prometheus.enabled: true
    export.influx.enabled: false
    export.jmx.enabled: false
  endpoints:
    web.exposure.include: health,info,bindings,metrics,prometheus

This example enables the Prometheus exposure of the metrics, but not InfluxDB or JMX.
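The same properties can also be supplied as environment variables via Spring Boot's relaxed binding, for example in a container spec. A minimal sketch; the surrounding container definition is omitted and only the variable names map to the properties above:

# Sketch: overriding the metrics flags via environment variables (Spring Boot relaxed binding)
env:
  - name: MANAGEMENT_METRICS_ENABLE_ALL
    value: "true"
  - name: MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED
    value: "true"
  - name: MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE
    value: "health,info,bindings,metrics,prometheus"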

Access metrics

Prometheus

If Prometheus is enabled, the metrics are exposed under the http://localhost:8080/actuator/prometheus endpoint and can be scraped by a Prometheus server.
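If the Prometheus server does not discover the endpoint automatically, a scrape job can be configured manually. A minimal sketch; the job name and target address are assumptions:

# prometheus.yml (sketch) - scrape the actuator endpoint exposed above
scrape_configs:
  - job_name: jadice-flow-controller   # hypothetical job name
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ['localhost:8080']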

The metrics can be visualized, e.g. with Grafana, by adding a Prometheus data source.
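Such a data source can, for example, be provisioned in Grafana roughly as follows. A sketch; the Prometheus server URL is an assumption:

# Grafana data source provisioning (sketch)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090   # assumed Prometheus server address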

JMX

The JMX exposure publishes the metrics via the default Java MBean server handling JMX requests. Depending on the JVM vendor, there are specific Java Mission Control derivatives which can be used to access the JVM and read the metrics. By default, most JDKs ship with the jconsole tool, which provides basic access to the metrics.

InfluxDB

When using InfluxDB, application metrics are sent from the controller to the InfluxDB instance.

Configuration:

# Metrics
management:
  metrics:
    enable.all: true
    export.prometheus.enabled: false
    export.influx.enabled: true
    export.jmx.enabled: true
    export:
      influx:
        api-version: v2
        bucket: ${jf.influxdb.bucket} # Specifies the destination bucket for writes
        org: my-org # Specifies the destination organization for writes.
        token: ${jf.influxdb.token}
        uri: ${jf.influxdb.uri} # The URI for the Influx backend. (Default: http://localhost:8086/api/v2)
        # compressed: true # Whether to enable GZIP compression of metrics batches published to Influx. (Default: true)
        # enabled: true # Whether exporting of metrics to this backend is enabled. (Default: true)
        step: 5s # Step size (i.e. reporting frequency) to use. (Default: 1m)
        # connect-timeout: 1s # Connection timeout for requests to this backend. (Default: 1s)
        # read-timeout: 10s # Read timeout for requests to this backend. (Default: 10s)
  endpoints:
    web.exposure.include: health,info,bindings,metrics,prometheus

The metrics can then be viewed directly in the InfluxDB web UI, or they can be visualized with Grafana as before, by adding an InfluxDB data source.
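A corresponding Grafana data source for InfluxDB v2 (Flux) could be provisioned roughly as follows. A sketch; the URL, organization, bucket name and token placeholder are assumptions matching the configuration above:

# Grafana data source provisioning for InfluxDB v2 / Flux (sketch)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    url: http://localhost:8086   # assumed Influx backend address
    jsonData:
      version: Flux
      organization: my-org       # must match management.metrics.export.influx.org
      defaultBucket: jadice-flow # hypothetical bucket name
    secureJsonData:
      token: ${INFLUX_TOKEN}     # assumed to be injected from the environment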

Metrics overview

The jadice flow controller provides basic application metrics such as CPU usage and JVM memory via the default actuator.

In addition, several jadice flow application metrics are available to get more information:

| Metric name | Type | Tags | Description |
| --- | --- | --- | --- |
| jadice-flow.step.timer | Timer | fullStepName (canonical step name) | Time for a step |
| jadice-flow.itemProcessor.part.added | Counter | fullStepName (canonical step name) | Added part count for a step |
| jadice-flow.jobLauncher.runningJobs | Gauge | | Total amount of running jobs |
| jadice-flow.jobLauncher.queuedJobs | Gauge | | Total amount of queued jobs |
| jadice-flow.jobLauncher.queuedJobsByType | Gauge | job (Job name) | The amount of queued jobs per JobTemplate |

Tracing

The controller writes application traces which can later be accessed using Jaeger.

Configuration

To enable tracing, use the appropriate flag in the Helm charts or the application.yaml:

# Tracing
opentracing:
  jaeger:
    enabled: true
    udp-sender.host: localhost
    udp-sender.port: 6831
    log-spans: false
    probabilistic-sampler.sampling-rate: 1

A trace ID is provided for each job. For example, when inspecting a job in the UI, the trace ID can be copied and entered in the tracing UI to show the job trace.
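For local testing, a Jaeger all-in-one instance matching the udp-sender settings above could be started roughly like this. A docker-compose sketch; the image tag is an assumption:

# docker-compose.yml (sketch) - local Jaeger all-in-one
services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "6831:6831/udp"   # agent port used by opentracing.jaeger.udp-sender
      - "16686:16686"     # Jaeger UI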

Access traces

The traces can be accessed in the Jaeger UI (e.g. http://localhost:16686/), which is linked from the jadice flow UI. The URL is configured in the UI's application.yaml.

Available Worker Metrics

Analyzer Worker metrics

| metric | description | labels (examples) |
| --- | --- | --- |
| jadice_server_analyzer_mimetype | number of incoming documents per mime type | type="image/tiff" |
| jadice_server_analyzer_mimetype_before | number of incoming documents per mime type (before analysis) | type="image/tiff" |
| jadice_server_analyzer_mimetype_after | number of processed documents per mime type (after analysis) | type="text/plain" |

Decompress Worker

| metric | description | labels (examples) |
| --- | --- | --- |
| jadice_server_decompress_mimetype | number of processed documents per mime type | type="application/x-rar-compressed" |

Libre Office Worker

| metric | description | labels (examples) |
| --- | --- | --- |
| jadice_server_libreoffice_mimetype | number of incoming documents per mime type | type="application/msword" |
| jadice_server_libreoffice_mimetype_total | number of incoming documents per mime type | type="application/msword" |
| jadice_invocation | number of incoming jobs for this worker | |

OCR Worker

| metric | description | labels (examples) |
| --- | --- | --- |
| jadice_server_ocr_mimetype | number of incoming documents per mime type | type="image/tiff" |
| jadice_server_ocr_mimetype_total | number of processed documents per mime type | type="image/tiff" |
| jadice_server_ocr_pagesegmentationmode | number of processed documents per page segmentation mode | mode="3" |

Grafana

Grafana is one of many supported monitoring solutions.

Pre-configured Grafana dashboards are supposed to be delivered with the Helm Charts as part of the products.

Health

A standard actuator health check is provided via /actuator/health/liveness.

Sample response for GET /actuator/health/liveness
{
  "status": "UP"
}
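In Kubernetes, this endpoint can be wired into a liveness probe, for example. A sketch; the port and timings are assumptions:

# Liveness probe sketch for the controller container
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080          # assumed management/server port
  initialDelaySeconds: 30
  periodSeconds: 10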