
The most straightforward use of any SDK or REST API can lead to repeated requests for the same data. Here we describe two optimization techniques that improve the performance of your application and reduce the load on the Platform.

Caching

For instance, consider a system where your application has a backend that uses DHI.Platform.SDK to get data from the Platform. Your frontend application has a map that shows raster tiles generated from DFS2 files by the Tiling service. Users of your application mostly work with data from a handful of datasets and only occasionally display data from other datasets. Consider caching the tiles metadata (the response from /api/tiles/dataset/{id}/metadata) for those datasets.

A cache can use any storage type. One lightweight and easy-to-set-up option is an in-memory cache. Here is an example of a custom client that caches the tiles metadata in memory.

using Microsoft.Extensions.Caching.Memory;

public class TilesMetadataClient : ITilesMetadataClient
{
    private readonly ITilesClient _tilesClient;
    private readonly IMemoryCache _memoryCache;
    private const int ExpirationInHours = 12;

    public TilesMetadataClient(ITilesClient tilesClient, IMemoryCache memoryCache)
    {
        _tilesClient = tilesClient;
        _memoryCache = memoryCache;
    }

    public async Task<string> GetTilesMetadataAsync(Guid projectId, Guid datasetId)
    {
        var key = GetKey(projectId, datasetId);

        // Return the cached entry if present; otherwise fetch the metadata
        // from the Platform and cache it for the next 12 hours.
        return await _memoryCache.GetOrCreateAsync(key, async cacheEntry =>
        {
            cacheEntry.SetAbsoluteExpiration(TimeSpan.FromHours(ExpirationInHours));

            return await _tilesClient.GetTilesMetadata(projectId, datasetId);
        });
    }

    private static string GetKey(Guid projectId, Guid datasetId) => $"{nameof(TilesMetadataClient)}:{projectId}:{datasetId}";
}

Don't forget to register MemoryCache and the other classes in the service collection in Startup.cs or Program.cs:

services.AddPlatformClients(o => {...});
services.AddMemoryCache();
services.AddSingleton<ITilesMetadataClient, TilesMetadataClient>();

If you use this kind of client in your application, you can save many requests to the Platform and improve the performance of your application.

It is crucial to think about cache invalidation! Some records in your cache may get out of date when the underlying data in the Platform gets updated or deleted. In our example, we use the SetAbsoluteExpiration method to remove all entries older than 12 hours. So, when a user requests that dataset metadata again after 12 hours, the TilesMetadataClient fetches fresh data for that dataset from the Platform and stores it for the next 12 hours. For resources that get updated more often, it may be necessary to invalidate the cache by reacting to a specific Event. See Cache in-memory in ASP.NET Core for more details about MemoryCache.
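As a sketch of event-driven invalidation, a handler like the following could remove the stale entry when a dataset changes. The handler class and the way it is wired to Platform events are assumptions for illustration, not part of the SDK shown above; only the IMemoryCache.Remove call is standard.

```csharp
using Microsoft.Extensions.Caching.Memory;

// Hypothetical handler: invoke it from whatever event subscription your
// application uses when a dataset is updated or deleted.
public class DatasetUpdatedHandler
{
    private readonly IMemoryCache _memoryCache;

    public DatasetUpdatedHandler(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }

    public void OnDatasetUpdated(Guid projectId, Guid datasetId)
    {
        // Use the same key format as TilesMetadataClient so the stale entry
        // is evicted and the next request fetches fresh metadata.
        var key = $"{nameof(TilesMetadataClient)}:{projectId}:{datasetId}";
        _memoryCache.Remove(key);
    }
}
```

With this in place, the absolute expiration becomes a safety net rather than the only invalidation mechanism.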

A limitation of an in-memory cache is that it lives only in a single instance of your application. If your application is scaled out to multiple instances (such as multiple pods in a Kubernetes cluster), different instances would hold different data in their caches. In some use cases this can introduce problems, and you should consider other cache storage types, such as a database or Redis.
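As a minimal sketch of the distributed alternative, ASP.NET Core's IDistributedCache abstraction (backed here by Redis via the Microsoft.Extensions.Caching.StackExchangeRedis package) shares entries across all instances. The connection string and the client class name are placeholders for your environment.

```csharp
using Microsoft.Extensions.Caching.Distributed;

// Registration in Program.cs; the connection string is a placeholder.
// services.AddStackExchangeRedisCache(options =>
// {
//     options.Configuration = "your-redis-host:6379";
// });

public class DistributedTilesMetadataClient
{
    private readonly ITilesClient _tilesClient;
    private readonly IDistributedCache _cache;

    public DistributedTilesMetadataClient(ITilesClient tilesClient, IDistributedCache cache)
    {
        _tilesClient = tilesClient;
        _cache = cache;
    }

    public async Task<string> GetTilesMetadataAsync(Guid projectId, Guid datasetId)
    {
        var key = $"{nameof(DistributedTilesMetadataClient)}:{projectId}:{datasetId}";

        // IDistributedCache stores strings/bytes, so all application
        // instances connected to the same Redis see the same entries.
        var cached = await _cache.GetStringAsync(key);
        if (cached is not null)
            return cached;

        var metadata = await _tilesClient.GetTilesMetadata(projectId, datasetId);
        await _cache.SetStringAsync(key, metadata, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(12)
        });
        return metadata;
    }
}
```

Note that unlike GetOrCreateAsync on IMemoryCache, this get-then-set pattern is not atomic; concurrent cache misses may each fetch from the Platform once, which is usually acceptable.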

Multidimensional data fetching

The Multidimensional service is a good example of an API where, used correctly, you can save requests and also reduce the volume of data transferred.

All time series in a multidimensional dataset share the same temporal domain, so the multidimensional dataset details (with the temporalDomain property) need to be fetched only once, even for many subsequent time series data queries. Likewise, all data in a multidimensional dataset share the same spatial domain.
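The idea can be sketched as follows. The client and method names here are assumptions for illustration, not the actual Multidimensional service API; the point is to fetch the dataset details once and reuse the temporal domain across queries.

```csharp
// Hypothetical client calls, assuming a multidimensionalClient with
// methods for dataset details and per-timestep data queries.
var details = await multidimensionalClient.GetDatasetDetailsAsync(projectId, datasetId);

// The temporal domain is the same for every time series in the dataset,
// so it is fetched once, not once per query.
var temporalDomain = details.TemporalDomain;

foreach (var timestep in temporalDomain.Timesteps)
{
    // Query the data for each time step without re-requesting the
    // dataset details; the temporal and spatial domains do not change.
    var data = await multidimensionalClient.QueryTimestepDataAsync(projectId, datasetId, timestep);
    // ... process data ...
}
```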

Another case is fetching multiple time steps of a multidimensional dataset. For the first query, set returnGeometry=true if you want to fetch the coordinates of the relevant elements. For subsequent queries where the spatialFilter does not change, set returnGeometry=false. The response will then not include the coordinates of the relevant elements, which saves a significant volume of transferred data.
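A sketch of this pattern, with hypothetical query objects and method names (only the returnGeometry and spatialFilter parameters come from the API described above):

```csharp
// First query: fetch the geometry together with the values.
var first = await multidimensionalClient.QueryAsync(projectId, datasetId, new Query
{
    Timestep = timesteps[0],
    SpatialFilter = filter,
    ReturnGeometry = true // coordinates of relevant elements, fetched once
});
var geometry = first.Geometry;

// Subsequent queries with the same spatialFilter: skip the geometry.
for (var i = 1; i < timesteps.Count; i++)
{
    var next = await multidimensionalClient.QueryAsync(projectId, datasetId, new Query
    {
        Timestep = timesteps[i],
        SpatialFilter = filter,  // unchanged, so the geometry is identical
        ReturnGeometry = false   // saves significant transfer volume
    });
    // Pair next's values with the geometry from the first query.
}
```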

Understanding the conceptual part of the documentation will help you make more informed decisions about how to structure your code.