Java Client API

The flow-client is a Java client library for the jadice flow controller.

Installation

You can add the flow-client to your Maven project by specifying the following dependency in your pom.xml:

<dependency>
  <groupId>com.jadice.flow</groupId>
  <artifactId>controller-client</artifactId>
</dependency>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.jadice.flow</groupId>
      <artifactId>jf-controller-libs</artifactId>
      <version>0.25.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

The artifact can be found at https://levigo.de/maven2. Access to this repository can be granted on request. Please contact jadice-support@levigo.de.
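
If the repository is not yet known to your build, it can be declared in the pom.xml as well. This is a minimal sketch; the repository id is an arbitrary choice, and the credentials you receive from support would typically go into the matching server entry of your Maven settings.xml:

<repositories>
  <repository>
    <id>levigo</id>
    <url>https://levigo.de/maven2</url>
  </repository>
</repositories>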

Create the FlowClient

Note: The FlowClient currently uses Spring Boot to initialize the internal WebClient that sends the requests. The following steps can be used in a Spring Boot application; in the near future, the Spring dependency will be replaced by a builder pattern to allow easier usage outside of Spring applications.
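
For illustration, a minimal Spring Boot application class that could host the client beans might look like this (the class name is a placeholder):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Minimal host application for the flow-client beans
@SpringBootApplication
public class FlowClientApplication {
  public static void main(String[] args) {
    SpringApplication.run(FlowClientApplication.class, args);
  }
}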

The following properties are read from the application.yaml file:

jadice-flow:
  server-url: http://flow-controller-server:8080/
  s3ProxyURL: https://s3-host/
  keycloakBaseURL: https://keycloak-server/
  securityToken: security-token-abcdefg
  client:
    http-timeout: 1m

publisher:
  s3:
    bucket: s3-bucket-name
    endpoint: s3-host
    access-key: jadice-minio
    secret-key: secret
    protocol: https

The FlowClient also provides the utility class StorageService to upload data to or retrieve data from the storage.
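
As a quick sketch of its use (the method names match the job example below; the S3 connection is taken from the publisher.s3 configuration above):

// Upload a local file to the storage and read it back
try (FileInputStream fis = new FileInputStream("C:/myFile.pdf")) {
  String url = storageService.uploadToS3("myFile.pdf", "application/pdf", fis);
  try (InputStream is = storageService.downloadFromS3(url)) {
    // consume the stream
  }
}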

When the configuration is set, the client libraries can be autowired inside the application:

// Flow job client to create / start / monitor jobs (the main client class)
@Autowired
private FlowJobClient flowJobClient;

// Flow admin client to change server configuration (e.g. job templates)
@Autowired
private FlowAdminClient flowAdminClient;

// S3 Upload Service (or use the S3Client directly) to access the S3 data
@Autowired
private StorageService storageService;

Run a job

In this example, we simply convert a single PDF document into a TIFF document.

Please don't be confused if the available products of your installation do not comprise TIFF generation. This sample serves as a generic use case, and the required steps can easily be adapted to other installations.

Assumptions

  • The JobTemplate "ToTiff" is configured in the jadice flow controller, containing one step "toTiff" which converts the document into TIFF.

Steps

1. Create a JobRequest:

We need to create a JobRequest which contains the items to be processed. An item can consist of 0 to n parts. A part is a single data stream, such as a PDF, but an item (= a document) may consist of multiple parts.
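
For example, an item consisting of two parts (say, two scanned pages of one document) could be assembled as follows; the URLs are placeholders:

// A single item (document) made of two parts - a sketch with placeholder URLs
Item multiPartItem = new Item();

Part page1 = new Part();
page1.setMimeType("image/png");
page1.setUrl("https://s3-host/bucket/page1.png");

Part page2 = new Part();
page2.setMimeType("image/png");
page2.setUrl("https://s3-host/bucket/page2.png");

multiPartItem.addParts(Arrays.asList(page1, page2));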

To override the default configuration of a JobTemplate, you can pass configuration properties by setting ProcessingProperties on a part or item. The part properties are only effective for a single part and will be passed as StreamDescriptor properties to the worker, while the item properties are effective for all parts of this item and will be passed as the "configuration" map to the workers. For available properties, check the documentation of the individual workers.

// Create request
JobRequest request = new JobRequest();
request.setJobTemplateName("ToTiff");

// Create an item
Item item = new Item();

// Create a part
String url;

// Here, we upload the file "C:/myFile.pdf" to the S3 - if the data is already present
// in the S3, this step could be omitted.
try (FileInputStream fis = new FileInputStream("C:/myFile.pdf")) {
  url = storageService.uploadToS3("myFile.pdf", "application/pdf", fis);
}

Part p = new Part();
p.setMimeType("application/pdf");
p.setUrl(url);

// optional: set properties for a single part
p.getProcessingProperties().put("isEncryptedPdf","false");
// optional: set properties on an item for all parts
item.getProcessingProperties().put("pdfaConformanceLevel","PDFA2b");

// Add the part to the item
item.addParts(Collections.singletonList(p));

// Add the item to the request
request.getItems().add(item);

2. Create job:

We pass this JobRequest instance to the flowJobClient and receive a JobInformation result, providing us with a unique jobId and some further information.

// Send request
JobInformation info = flowJobClient.createJob(request);
Long jobID = info.getJobExecutionID();
int queuePosition = info.getCurrentQueuePosition();

The jobID can also be used in later queries via flowJobClient.getJobInformation(jobID, addAuditInfo). For basic status queries, audit info should not be needed. The audit info contains detailed information for each item processed in the job flow and results in a larger response.
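
For example, a lightweight status check without the audit details could look like this:

// Query the current job state; "false" skips the (larger) audit info
JobInformation current = flowJobClient.getJobInformation(jobID, false);
String exitStatus = current.getExitStatus();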

The queuePosition indicates the current position of this job in the controller job queue. This position might be used by job-creating services as a backpressure indicator to pause job creation for a while, until the queue size shrinks.
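
A sketch of such a backpressure check (the threshold and wait interval are arbitrary choices, and we assume the queue position is refreshed with each getJobInformation call):

// Pause job creation while the queue is long
int maxQueuePosition = 100;
while (info.getCurrentQueuePosition() > maxQueuePosition) {
  Thread.sleep(10_000); // wait before re-checking
  info = flowJobClient.getJobInformation(jobID, false);
}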

3. Wait until finished:

With the jobId returned in step 2, we wait until the job has finished.

This will check every 5 seconds, for up to 1 hour, for the job to reach a final state (COMPLETED or FAILED):

// Wait for finish
long awaitTimeout = 1;
TimeUnit awaitTimeoutUnit = TimeUnit.HOURS;
long periodicJobCheckIntervalMS = 5000;

JobInformation info = flowJobClient.awaitFinalJobState(jobID, awaitTimeout, awaitTimeoutUnit, periodicJobCheckIntervalMS);

boolean success = info.getExitStatus().equals(ExitStatus.COMPLETED.toString());
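
If the job did not complete successfully, the job information can be queried again with audit info enabled to inspect what happened to the individual items (a sketch):

// On failure, fetch the detailed audit information for diagnosis
if (!success) {
  JobInformation details = flowJobClient.getJobInformation(jobID, true);
  // inspect / log the audit information contained in the details
}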

4. Retrieve result parts:

If the status COMPLETED is reached within the timeout period, we can retrieve the resulting parts.

// Option 1: Get result parts for all items of the job
Part[] resultParts = flowJobClient.getResultParts(jobID);

// Option 2: Get items from the job and iterate over their parts:
Item[] items = flowJobClient.getItemsForJob(jobID);
for (Item item : items) {
  for (Part part : item.getParts()) {
    // We only consider the "BASE_PART" type in this example.
    // The input parts for the conversion are also available
    // in the list of parts, but marked as META type
    // and can be skipped for the result in this case.
    if (part.getType().equals(Part.Type.BASE_PART)) {
      String url = part.getUrl();
    }
  }
}

5. Access URL of conversion result:

The URLs obtained from the parts can be used to download the document via the StorageService:

Part[] resultParts = flowJobClient.getResultParts(jobID);

for (Part part : resultParts) {
  try (InputStream is = storageService.downloadFromS3(part.getUrl())) {
    // process the result data
  }
}