Overview

The File storage service is where clients can store raw files in their original form. The files are still registered as datasets in the metadata service, like any other type of dataset. It is up to the client to use the files appropriately, since the format and internal structure of the files are not known to the service. Clients can interact with the files through the web API and by using the transfer pipeline.

Endpoints

Internal endpoints for interaction with the transfer/conversion pipeline

  • GET /api/raw/prepare-download/{datasetId} Prepare download of data for a dataset.
  • GET /api/raw/prepare-upload/{datasetName} Prepare upload of data for a dataset (a usage sketch follows this list).
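
The sketch below, in Python with the requests library, shows how a pipeline component might call the two prepare endpoints. The base URL, the bearer-token authentication, and the shape of the JSON responses are assumptions for illustration, not documented behavior.

```python
# Hypothetical sketch: calling the prepare endpoints from the pipeline side.
import requests

BASE_URL = "https://files.example.com"          # assumed host
HEADERS = {"Authorization": "Bearer <token>"}   # assumed auth scheme

def prepare_download(dataset_id: str) -> dict:
    """Ask the service to prepare a download for an existing dataset."""
    resp = requests.get(f"{BASE_URL}/api/raw/prepare-download/{dataset_id}",
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()  # assumed to describe where the data can be fetched

def prepare_upload(dataset_name: str) -> dict:
    """Ask the service to prepare an upload for a new dataset."""
    resp = requests.get(f"{BASE_URL}/api/raw/prepare-upload/{dataset_name}",
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()  # assumed to describe where the data should be put
```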

File synchronization - primarily for interaction with desktop clients

  • GET /api/filesync/{id}/blocks Get file blocks with checksums.
  • GET /api/filesync/checksums Get checksums for selected or all files in a folder.
  • GET /api/filesync/prepare-download Get download info for selected or all files in a folder.
  • PUT /api/filesync/upload Upload file datasets from the staging area or another storage. Files in the staging area are moved; all other files are copied. Check the progress of the copy operations with POST /api/filesync/upload/status, as shown in the sketch after this list.
  • POST /api/filesync/upload/status Get the status of the copy operations started by PUT /api/filesync/upload.
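
A minimal sketch of the upload-and-poll flow, assuming a bearer token, a JSON request body with a files list, and a response that lists copy-operation ids with a pending state; none of these field names come from the service documentation.

```python
# Hypothetical sketch of PUT /api/filesync/upload followed by polling
# POST /api/filesync/upload/status. Field names are assumptions.
import time
import requests

BASE_URL = "https://files.example.com"          # assumed host
HEADERS = {"Authorization": "Bearer <token>"}   # assumed auth scheme

def upload_and_wait(files: list, poll_seconds: float = 2.0) -> None:
    # Start the upload: staged files are moved, all others are copied.
    resp = requests.put(f"{BASE_URL}/api/filesync/upload",
                        headers=HEADERS, json={"files": files})
    resp.raise_for_status()
    pending = resp.json().get("copyOperations", [])  # assumed field

    # Poll until every copy operation started by the PUT has finished.
    while pending:
        status = requests.post(f"{BASE_URL}/api/filesync/upload/status",
                               headers=HEADERS, json={"operations": pending})
        status.raise_for_status()
        pending = [op["id"] for op in status.json()
                   if op.get("state") == "pending"]  # assumed shape
        if pending:
            time.sleep(poll_seconds)
```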

Main endpoint for accessing an individual file

  • GET /api/raw/dataset/{id} Prepare download of data for a dataset. When calling this endpoint directly, you must provide a SAS token with read privilege for the given dataset in order to download its data (see the sketch below).
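
A sketch of a direct call, assuming the SAS token is passed as a query parameter named sas; the actual parameter name and transport are not specified here.

```python
# Hypothetical sketch: downloading a dataset's data directly with a SAS token.
import requests

BASE_URL = "https://files.example.com"  # assumed host

def download_dataset(dataset_id: str, sas_token: str, out_path: str) -> None:
    resp = requests.get(f"{BASE_URL}/api/raw/dataset/{dataset_id}",
                        params={"sas": sas_token},  # assumed parameter name
                        stream=True)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
```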

Endpoints for staging files during transfer or conversion

  • POST /api/raw/move-staged-url Move data from staging storage to a dataset (see the staging flow sketch after this list).
  • GET /api/raw/staging-url Prepare a staging blob storage for data upload.
  • POST /api/raw/staging-url Check whether a given URL is a staging URL.
  • GET /api/raw/staging-urls Prepare multiple staging blob storages for data upload.
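
The staging endpoints suggest a three-step flow: request a staging URL, upload the bytes to that blob, then move the staged data into the dataset. The sketch below assumes an Azure block-blob upload and JSON field names (url, datasetId) that are illustrative only.

```python
# Hypothetical sketch of the staging flow. The request/response field names
# and the Azure block-blob header are assumptions.
import requests

BASE_URL = "https://files.example.com"          # assumed host
HEADERS = {"Authorization": "Bearer <token>"}   # assumed auth scheme

def stage_and_move(local_path: str, dataset_id: str) -> None:
    # 1. Ask the service for a staging blob URL to upload into.
    resp = requests.get(f"{BASE_URL}/api/raw/staging-url", headers=HEADERS)
    resp.raise_for_status()
    staging_url = resp.json()["url"]  # assumed field

    # 2. Upload the raw bytes to the staging blob.
    with open(local_path, "rb") as f:
        put = requests.put(staging_url, data=f,
                           headers={"x-ms-blob-type": "BlockBlob"})
    put.raise_for_status()

    # 3. Ask the service to move the staged data into the dataset.
    move = requests.post(f"{BASE_URL}/api/raw/move-staged-url",
                         headers=HEADERS,
                         json={"url": staging_url, "datasetId": dataset_id})
    move.raise_for_status()
```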