Version: 1.3.2

Application Logging

jadice flow follows the standard approach for containerized applications and writes all logs to stdout and stderr. The container engine redirects these streams to a logging driver. For easier access and to make the logs independent of the lifecycle of containers, pods, and nodes, the logs from each node can be collected and sent to a central location. This concept is called cluster-level logging or central logging.
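As long as no central logging is in place, the stdout/stderr streams can be inspected directly through the container engine or cluster tooling. The container and pod names below are placeholders:

docker logs -f <container-name>
kubectl logs -f <pod-name> -c <container-name>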

Logback

Out of the box, jadice flow uses the zero-configuration logging of Spring Boot, which is based on Logback.

You can customize logging via Spring Boot application properties or by providing your own Logback configuration file, as described in Spring Boot - Logging.
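For example, log levels can be adjusted directly through Spring Boot application properties; the logger names and levels in this sketch are only illustrative:

logging:
  level:
    root: "info"
    com.jadice.flow: "debug"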

Depending on the deployment method, there are different ways to provide the logging configuration:

Helm Configuration

The jadice flow Helm charts offer the following logging configuration options:

  • You can choose whether log messages are output in JSON format by setting the parameter enablejsonLogging to true or false.
  • You can override the default logging levels for a worker by setting the parameter worker.logging.overrideLogLevel.
  • You can add further logging level settings with the parameter worker.logging.additionalLogLevel.

Example:

worker:
  logging.additionalLogLevel:
    org.apache.tika.parser: "warn"
    com.jadice.flow.worker.afpconverter: "debug"
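The other two parameters can be combined in the same values file. The following sketch assumes that enablejsonLogging sits at the top level of the chart values and that overrideLogLevel accepts a log level string; verify both against the chart's values.yaml:

enablejsonLogging: true
worker:
  logging.overrideLogLevel: "info"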

Docker Compose Configuration

Put a logback configuration file (e.g. spring-logback.xml) in your configuration directory. Then specify the location of this log configuration file in your application.yaml file, for example as follows:

logging:
  config: "/config/spring-logback.xml"
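The /config path inside the container must contain that file; with Docker Compose this is typically done by mounting the configuration directory as a volume. The service name and host path below are placeholders, not taken from the jadice flow images:

services:
  worker:
    volumes:
      - ./config:/config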

See also the docker-compose tutorial.

Central Logging

See Kubernetes cluster-level logging.

Depending on your cluster manager, there may already be an integration with a cluster-level logging solution. Examples: