Simple execution

Running an execution

The POST /api/process/job endpoint can be used to start a job execution. The example given here uses a platform project for inputs and outputs. The endpoint accepts the following parameters:

  • runtime: Specifies the code to run as well as additional settings, e.g. resource requirements such as CPU and memory. This simple example does not use the resource settings but just sets up a simple container to run.
    • containerRuntimeSpec: This is presently the only runtime available. It consists of a list of containers. Each container requires a Docker image name; a command array can also be provided if needed. Further options can be found here.
  • inputData: A list of locations that data should be taken from. At the moment the job service only supports taking data from the platform, but this will be expanded in the future.
    • platformInputLocation: This type of location consists of a projectId, a localPath, and a list of excluded filenames. All files in the given projectId are downloaded recursively to the given localPath, except those listed as excluded files.
  • outputData: A list of locations where data should be uploaded once execution of the job code has finished. As with the input, only the platform location is supported at the moment.
    • platformOutputLocation: This type of location consists of a localPath and a relativePlatformPath. All files in the local path will be uploaded to the relative platform path in the project given in the request header.
  • retryLimit: An optional parameter controlling the retry policy of the job. A failed job is retried at most retryLimit times. The default value is 3.
Simple example to start a job:
{
    "Runtime": {
        "type": "ContainerRuntimeSpec",
        "Containers": [
            {
                "Image": "busybox",
                "Command": ["/bin/sh", "-c", "ls"]
            }
        ]
    }
}
var job = await _computeClient
        .CreateJob(_projectId, "busybox", new string[] {"/bin/sh", "-c", "ls"})
        .ExecuteAsync();

Console.WriteLine($"Job id {job.JobId}");
Example with platform data input/output:
{
    "Runtime": {
        "type": "ContainerRuntimeSpec",
        "Containers": [
            {
                "Image": "busybox",
                "Command": ["/bin/sh", "-c", "cd /work/input/; ls > /work/output/workls.txt"]
            }
        ] 
    },
    "InputData": {
        "Locations": [
            {
                "type": "PlatformInputLocation",
                "ProjectId": "00000000-0000-0000-0000-000000000000",
                "LocalPath": "input",
                "ExcludedFiles": ["somefile.txt"]
            }
        ]
    },
    "OutputData": {
        "Locations": [
            {
                "type": "PlatformOutputLocation",
                "LocalPath": "output",
                "RelativePlatformPath": ""
            }
        ]
    },
    "RetryLimit": "1"
}
var job = await _computeClient
        .CreateJob(_projectId, "busybox", new string[] {"/bin/sh", "-c", "cd /work/input/; ls > /work/output/workls.txt"})
        .WithPlatformInput(_projectId, "input")
        .WithPlatformOutput("output", string.Empty)
        .WithRetryLimit(1)
        .ExecuteAsync();

Console.WriteLine($"Job id {job.JobId}");
Example shell script to start a job:

projectId="<replacewithprojectid>"
openapikey="<replacewithopenapikey>"

# create execution
curl -L -X POST "https://api.mike-cloud-test.com/api/process/job" \
  -H 'api-version: 3' \
  -H 'dhi-service-id: job' \
  -H "dhi-project-id: $projectId" \
  -H "dhi-open-api-key: $openapikey" \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "Runtime": {
        "type": "ContainerRuntimeSpec",
        "Containers": [
            {
                "Image": "busybox",
                "Command": ["/bin/sh", "-c", "cd /work/input/; ls > /work/output/workls.txt"]
            }
        ] 
    },
    "InputData": {
        "Locations": [
            {
                "type": "PlatformInputLocation",
                "ProjectId": "00000000-0000-0000-0000-000000000000",
                "LocalPath": "input",
                "ExcludedFiles": ["somefile.txt"]
            }
        ]
    },
    "OutputData": {
        "Locations": [
            {
                "type": "PlatformOutputLocation",
                "LocalPath": "output",
                "RelativePlatformPath": ""
            }
        ]
    },
    "RetryLimit": "1"
}'

The endpoint returns:

  • jobId: the id of the current job execution.

The above job will start a very lightweight Linux container and invoke the shell with a command that changes to the '/work/input' directory and writes the directory's contents to the file '/work/output/workls.txt'.

Note

All jobs of type ContainerRuntime operate with a base storage unit mounted at /work, so all paths that files are downloaded to or uploaded from are prefixed with this folder. This means that if a job needs to access files at the input local path given as 'input', it has to refer to /work/input in its code. The /work volume mount is the only guaranteed volume with read and write access; other locations may be read-only.

Example of the endpoint's response:

{
  "jobId": "97437242-f0f7-433c-94cb-8082bef1d138"
}

Tracking an execution

Details regarding the job's lifecycle status will be sent in events. ContainerRuntime jobs can also be found in the Kubernetes cluster where they are running.

SDK example subscribing to job events:
var jobMonitor = await _computeClient
          .CreateJob(_projectId, "busybox", new string[] {"/bin/sh", "-c", "l.txt"})
          .WithMessageHandler(onData)
          .ExecuteAndMonitorAsync();
...
Task onData(JobMessage message)
{
    //process message
    return Task.CompletedTask;
}

Obtaining the execution result files

Once the execution finishes, the result of the job is uploaded to the platform if an output location was provided at the beginning. This also includes the log files from all user containers.

Cancelling an execution

It is possible to cancel a job execution after it has been created.

To cancel an execution, call PUT /api/process/job/{jobId}/cancel.

If the job was running at the time of cancellation, the job details and logs will still be available for some time. If the job was pending at the time of cancellation, then no job details or logs will be available.

A cancelled job is considered finished. Cancelling a finished execution does nothing, and cancelling an execution more than once also does nothing.
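
Mirroring the job-start shell example above, a cancellation request might look like the following sketch. The headers are copied from the earlier example, and the job id shown is just the placeholder value from the sample response; substitute the id returned when your job was created.

```shell
projectId="<replacewithprojectid>"
openapikey="<replacewithopenapikey>"
jobId="97437242-f0f7-433c-94cb-8082bef1d138"   # id returned by POST /api/process/job

# the cancel endpoint for this job
cancelUrl="https://api.mike-cloud-test.com/api/process/job/$jobId/cancel"

# cancel the execution
curl -L -X PUT "$cancelUrl" \
  -H 'api-version: 3' \
  -H 'dhi-service-id: job' \
  -H "dhi-project-id: $projectId" \
  -H "dhi-open-api-key: $openapikey"
```

Since cancelling an already finished job does nothing, the call is safe to issue even if the job may have completed in the meantime.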