This article describes the available interfaces between third-party systems and thingsHub Asset Tracking.

Exporting location data from tracked assets to external third-party systems

An Asset’s localization information can be retrieved via thingsHub’s REST API by querying the api/v3/things endpoint (or, for a single Asset, api/v3/things/:id). This returns a structure such as the following:

{
  "collection": [
    {
      "name": "Linde Forklifter #27",
      "id": "tracked-asset-27",
      "created_at": "2020-06-05T08:22:37.320166Z",
      "external_id": "w1a4530911y257548",
      "geolocation": {
        "latitude": 48.9346048,
        "longitude": 8.4398592,
        "margin": 9.8,
        "moved_at": "2021-02-07T08:22:11.70074Z",
        "site": "Produktionswerk Hessen",
        "zone": "Wareneingang Süd",
        "site_external_id": "MY-SITE_ID12345"
      },
      "metadata": {
        "manufacturer": "Linde",
        "type": "Forklifter"
      },
      "type": "trackinghub",
      "updated_at": "2020-06-05T08:23:28.063969Z"
    },
    {
      "name": "Linde Forklifter #28",
      "id": "tracked-asset-28",
      ...
    }
  ]
}

The location of the tracked asset is stored in the geolocation.latitude and geolocation.longitude fields, and as a human-readable “address” in the geolocation.zone and geolocation.site fields. The margin field (if available) gives a rough estimate of the localization’s error margin in meters. Metadata added during import appears under metadata here, too; the field id identifies the Asset, while the field external_id identifies the Asset Tag.
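As an illustration, the fields above could be queried and extracted with a short Python sketch using only the standard library. The base URL and API token are placeholders for your installation, and bearer-token authentication is assumed:

```python
# Sketch: retrieve tracked assets and pull out their locations.
# BASE_URL and API_TOKEN are placeholders, not real values.
import json
import urllib.request

BASE_URL = "https://my-thingshub.example.com"   # hypothetical installation URL
API_TOKEN = "my-api-token"                      # hypothetical API key

def extract_locations(response):
    """Map an api/v3/things response to (id, latitude, longitude, site, zone)."""
    rows = []
    for thing in response.get("collection", []):
        geo = thing.get("geolocation") or {}
        rows.append((thing["id"], geo.get("latitude"), geo.get("longitude"),
                     geo.get("site"), geo.get("zone")))
    return rows

def fetch_locations():
    """GET api/v3/things and extract the location fields."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/v3/things",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_locations(json.load(resp))
```
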

Importing third party data into the Asset Inventory

Pull-synchronization

The Asset Inventory is typically synchronized with an external system at regular intervals. This synchronization mechanism is configured in the service’s primary config file.

The synchronization interval can be configured using cron-job syntax. Each time the algorithm runs, it retrieves a JSON object (or array) from a collection resource at a given URL using an HTTP GET request. The JSON objects in the collection are expected to be flat (i.e. they must not have objects as fields). For each object in the collection, the field whose name matches the configuration option primary_key is used as the asset’s primary id. All other fields are converted to strings and added to the asset as metadata. If an object does not have a field for the primary id, the object is skipped.
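The per-object logic described above could be sketched as follows (a minimal Python illustration, assuming the collection has already been fetched and decoded; the function name is hypothetical):

```python
# Minimal sketch of the import step described in the text, assuming
# the collection has already been fetched and decoded from JSON.
def synchronize(collection, primary_key):
    """Turn a list of flat JSON objects into assets.

    Objects without the primary key are skipped; all other fields are
    converted to strings and stored as the asset's metadata.
    """
    assets = []
    for obj in collection:
        if primary_key not in obj:
            continue  # no primary id -> the object is skipped
        metadata = {k: str(v) for k, v in obj.items() if k != primary_key}
        assets.append({"id": str(obj[primary_key]), "metadata": metadata})
    return assets
```
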

The algorithm can be configured with the following set of options:

[synchronization]
schedule = <cron job description for synchronization ('@every 5m' is default)>
url = <the url serving the collection of assets>
user = <optional: the user name (for basic authentication only)>
password = <optional: the password (for basic authentication only)>
bearer_token = <optional: the token (for bearer authentication only)>
envelope = <optional: the json element in the top-level object that contains the list>
primary_key = <the name of the json field which is to be used as the asset's id>

Example

Given the following example configuration:

[synchronization]
schedule = @every 15m
url = https://my-server.com/api/v1/assets
user = admin
password = secret
envelope = collection
primary_key = uid

The synchronization algorithm would issue an HTTP GET request to https://my-server.com/api/v1/assets, authenticating with the basic-auth credentials admin:secret. Assuming this request returns the following response:

{
  "collection": [
    {
      "uid": "12345",
      "manufacturer": "Bosch",
      "type": "power drill"
    },
    {
      "uid": "67890",
      "manufacturer": "Black&Decker",
      "type": "power drill"
    },
    {
      "manufacturer": "Hilti",
      "model": "power cut"
    }
  ]
}

This results in the following two assets (note that the power cut is skipped because it lacks the uid field required as the primary id):

{
  "id": "12345",
  "metadata": {
    "manufacturer": "Bosch",
    "type": "power drill"
  }
}
{
  "id": "67890",
  "metadata": {
    "manufacturer": "Black&Decker",
    "type": "power drill"
  }
}

Note that each run replaces the previous import: any assets that were synchronized before are removed from the system on synchronization.
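As an aside, the example request above, including the basic-auth credentials from the configuration, could be built like this with Python’s standard library (a sketch, not part of the product):

```python
# Sketch of the example GET request. URL and credentials come from the
# example configuration; HTTP basic authentication encodes them as
# base64("user:password") in the Authorization header.
import base64
import urllib.request

url = "https://my-server.com/api/v1/assets"
credentials = base64.b64encode(b"admin:secret").decode("ascii")

req = urllib.request.Request(url, headers={"Authorization": f"Basic {credentials}"})

# To execute the request and unwrap the configured envelope:
# with urllib.request.urlopen(req) as resp:
#     collection = json.load(resp)["collection"]   # envelope = collection
```
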

For SaaS installations, contact support@smartmakers.de if changes to the configuration are required here.

Push-synchronization

For push synchronization, POST a full listing of the assets to the REST endpoint api/v3/assets-import. The endpoint expects an array of flat JSON objects.

During Asset synchronization, the thingsHub expects a specific field, the primary_key, to be present in all assets. This key is set during installation, so ask your administrator for its name. Also refer to the section “Pull-synchronization” above.

All other keys of the JSON objects are added to the Asset as metadata and can be used in the user interface to search for a matching Asset.

For authentication, first generate an API key and pass it in the Authorization HTTP header, e.g. Authorization: Bearer <token>.

The POST request returns as soon as the payload has been fully parsed, while the actual synchronization happens in the background. The current status of the synchronization can be queried using the GET verb on the same endpoint. This returns a brief summary of the synchronization run, including the counts of successfully imported and failed Assets:

{
  "counts": {
    "duplicated": 0,
    "failed": 0,
    "imported": 0,
    "in_progress": 0,
    "total": 0,
    "without_primary_id": 0
  },
  "errors": [
    "string"
  ],
  "ignored": {
    "duplicated": [
      {}
    ],
    "without_primary_id": [
      {}
    ]
  },
  "running": true
}
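A push-synchronization round trip could be sketched as follows in Python. The base URL, API token, and the primary-key field name uid are placeholder assumptions; the real primary key is set at installation time:

```python
# Sketch: push a full asset listing, then poll the import status.
# BASE_URL, API_TOKEN and the field name "uid" are placeholders.
import json
import urllib.request

BASE_URL = "https://my-thingshub.example.com"   # hypothetical installation URL
API_TOKEN = "my-api-token"                      # hypothetical API key
HEADERS = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

# The full asset listing: an array of flat JSON objects.
assets = [
    {"uid": "12345", "manufacturer": "Bosch", "type": "power drill"},
    {"uid": "67890", "manufacturer": "Black&Decker", "type": "power drill"},
]

def build_push_request(assets):
    """POST request carrying the full asset listing."""
    return urllib.request.Request(
        f"{BASE_URL}/api/v3/assets-import",
        data=json.dumps(assets).encode("utf-8"),
        headers=HEADERS,
        method="POST",
    )

def build_status_request():
    """GET request polling the status summary of the background import."""
    return urllib.request.Request(f"{BASE_URL}/api/v3/assets-import", headers=HEADERS)

# To execute: urllib.request.urlopen(build_push_request(assets)) to start
# the import, then periodically urlopen(build_status_request()) and inspect
# the returned "running" flag and "counts" summary.
```
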

We strongly suggest limiting the number of metadata fields to 6-10 and not importing data more frequently than every 15 minutes.