Asset tracking data exchange with third-party systems
This article describes the available interfaces between third-party systems and thingsHub Asset Tracking.
Exporting location data from tracked assets to external third-party systems
An Asset's location information can be retrieved via the thingsHub's REST API by querying the api/v3/tracked-assets endpoint (or just a single Asset with api/v3/tracked-assets/:id). This will return a structure such as the following:
{
"collection": [
{
"name": "Linde Forklifter #27",
"id": "tracked-asset-27",
"created_at": "2020-06-05T08:22:37.320166Z",
"external_id": "w1a4530911y257548",
"geolocation": {
"latitude": 48.9346048,
"longitude": 8.4398592,
"margin": 9.8,
"moved_at": "2021-02-07T08:22:11.70074Z",
"site": "Produktionswerk Hessen",
"zone": "Wareneingang Süd"
"site_external_id": "MY-SITE_ID12345"
},
"metadata": {
"manufacturer": "Linde",
"type": "Forklifter",
},
"type": "trackinghub",
"updated_at": "2020-06-05T08:23:28.063969Z"
},
{
"name": "Linde Forklifter #28",
"id": "tracked-asset-28",
...
}
]
}
The location of the tracked asset is stored in the geolocation.latitude and geolocation.longitude fields, and as a human-readable "address" in the geolocation.zone and geolocation.site fields. The margin (if available) provides a rough estimate of the localization's error margin in meters. Metadata added during importing shows up as metadata here, too. The field id can be used to identify the Asset, while the field external_id can be used to identify the Asset Tag.
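As an illustration, the response shown above can be processed with a few lines of standard-library Python. This is a sketch under the assumption that the response has exactly the shape documented here; the function and variable names are our own:

```python
import json

# Sample response in the shape returned by api/v3/tracked-assets (see above).
response_body = """
{
  "collection": [
    {
      "name": "Linde Forklifter #27",
      "id": "tracked-asset-27",
      "external_id": "w1a4530911y257548",
      "geolocation": {
        "latitude": 48.9346048,
        "longitude": 8.4398592,
        "margin": 9.8,
        "site": "Produktionswerk Hessen",
        "zone": "Wareneingang Süd"
      }
    }
  ]
}
"""

def extract_locations(body):
    """Return (asset id, asset-tag id, latitude, longitude, address) per asset."""
    locations = []
    for asset in json.loads(body)["collection"]:
        geo = asset.get("geolocation") or {}
        # Build the human-readable "address" from zone and site, skipping
        # whichever is missing.
        address = ", ".join(filter(None, [geo.get("zone"), geo.get("site")]))
        locations.append((asset["id"], asset.get("external_id"),
                          geo.get("latitude"), geo.get("longitude"), address))
    return locations

for location in extract_locations(response_body):
    print(location)
```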
Importing third party data into the Asset Inventory
Pull-synchronization
The Asset Inventory is typically synchronized with an external system at regular intervals. This synchronization mechanism is configured in the service's primary config file.
The algorithm's synchronization interval can be configured using cron-job syntax. Every time the algorithm is executed, it retrieves a JSON object (or array) from a collection resource located at a given URL using an HTTP GET request. The JSON objects in the collection are expected to be flat (i.e. they must not have objects as fields). For each object in the collection, the field named by the configuration option primary_key is used as the asset's primary id. All other fields are converted to strings and added to the asset as metadata. If an object does not have a field for the primary_key, or the primary_key's format is invalid, the object is skipped.
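The per-object import step described above can be sketched as follows. Note that the exact primary-key format rule is installation-specific; the regular expression below is a hypothetical example:

```python
import re

# Hypothetical id-format rule for illustration only; the actual validation
# applied by thingsHub is installation-specific.
VALID_ID = re.compile(r"^[A-Za-z0-9_-]+$")

def import_assets(collection, primary_key):
    """Sketch of the pull-synchronization step: one flat JSON object -> one asset."""
    assets = []
    for obj in collection:
        asset_id = obj.get(primary_key)
        # Skip objects without the primary key or with an invalid id format.
        if asset_id is None or not VALID_ID.match(str(asset_id)):
            continue
        # All remaining fields are converted to strings and stored as metadata.
        metadata = {key: str(value) for key, value in obj.items()
                    if key != primary_key}
        assets.append({"id": str(asset_id), "metadata": metadata})
    return assets
```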
The algorithm can be configured with the following set of options:
[synchronization]
schedule = <cron job description for synchronization ('@every 5m' is default)>
url = <the url serving the collection of assets>
user = <optional: the user name (for basic authentication only)>
password = <optional: the password (for basic authentication only)>
bearer_token = <optional: the token (for bearer authentication only)>
envelope = <optional: the json element in the top-level object that contains the list>
primary_key = <the name of the json field which is to be used as the asset's id>
Example
Given the following configuration as an example:
[synchronization]
schedule = @every 15m
url = https://my-server.com/api/v1/assets
user = admin
password = secret
envelope = collection
primary_key = uid
The synchronization algorithm would perform an HTTP GET request on the url https://my-server.com/api/v1/assets, authenticating with the configured basic-auth credentials. Assuming this request returns the following response:
{
"collection": [
{
"uid": "12345",
"manufacturer": "Bosch",
"type": "power drill"
},
{
"uid": "67890",
"manufacturer": "Black&Decker",
"type": "power drill"
},
{
"manufacturer": "Hilti",
"model": "power cut"
},
{
"uid": "New Asset",
"manufacturer": "Phillips",
"type": "power grinder"
}
]
}
Then this will result in the following two assets (note that the power cut entry is skipped because it lacks the uid field required as the primary id, and the power grinder entry is skipped because its uid has an invalid format):
{
"id": "12345",
"metadata": {
"manufacturer": "Bosch",
"type": "power drill"
}
}
{
"id": "67890",
"metadata": {
"manufacturer": "Black&Decker",
"type": "power drill"
}
}
Each synchronization run replaces the previous one: any assets imported by an earlier synchronization are removed from the system when the collection is synchronized again.
For SaaS installations, contact support@smartmakers.de if changes to the configuration are required here.
Push-synchronization
For push synchronization, simply POST a full listing of the assets to the REST endpoint api/v3/assets-import. The endpoint expects an array of flat JSON objects.
During Asset synchronization, the thingsHub expects the field primary_id to be present in all assets. Optionally, a name field can be configured; if its value is found in the asset data during synchronization, the Asset's name is populated from it. If no name field is configured, the system looks for a name or Name key in the asset data. If neither is found, the primary_id is used as the Asset's name. These keys are set during installation, so ask your admin for this information. Also refer to the section "Pull-synchronization" above.
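The name-resolution fallback described above can be sketched like this (function and parameter names are our own):

```python
def resolve_name(entry, primary_id_key, name_key=None):
    """Sketch of the name fallback: configured name key, then the "name" or
    "Name" key in the asset data, then the primary id."""
    if name_key and entry.get(name_key):
        return entry[name_key]
    for key in ("name", "Name"):
        if entry.get(key):
            return entry[key]
    return entry[primary_id_key]
```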
All other keys of the JSON objects will be added to the Asset as metadata and can be used in the user interface for searching a matching Asset.
For authentication, generate an API key first and put it in the Authorization HTTP header, e.g. Authorization: Bearer <token>.
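As an illustration, such a push request could be assembled with standard-library Python as follows. The base URL and API key are placeholders for your own installation:

```python
import json
import urllib.request

BASE_URL = "https://things.example.com"  # placeholder: your thingsHub instance
API_KEY = "my-api-key"                   # placeholder: a previously generated API key

def build_import_request(assets):
    """Build the POST request for api/v3/assets-import: an array of flat
    JSON objects, authenticated with a bearer token."""
    body = json.dumps(assets).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + "/api/v3/assets-import",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_KEY,
        },
    )

request = build_import_request([
    {"uid": "12345", "manufacturer": "Bosch", "type": "power drill"},
])
# urllib.request.urlopen(request) would send the request; omitted here.
```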
The POST request returns immediately once the payload has been fully parsed, while the actual synchronization happens in the background. The current status of the synchronization can be queried with a GET request on the same endpoint. This returns a brief summary of everything that happened during synchronization, including the number of successfully imported Assets and the number of failed ones:
{
"metadata": {
"counts": {
"failed": 2,
"imported": 5,
"in_progress": 1,
"total": 8,
"duplicated": 0,
"invalid_id_format": 0,
"without_primary_id": 0,
"without_name": 0
},
"status": "running"
},
"last_import": {
"ended": "2024-03-06T02:02:37.437Z",
"requested": "2024-03-06T02:02:37.437Z",
"started": "2024-03-06T02:02:37.437Z"
},
"ignored": [
{
"entry": {
},
"row_number": 10,
"reasons": [ strings ]
},
{
"entry": {
},
"row_number": 12,
"reasons": [ strings ]
}
]
}
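A client polling this status endpoint could condense the response into a one-line summary, for example like this (assuming the response shape shown above; the function name is our own):

```python
def summarize_import_status(status):
    """Condense the synchronization status response into a single line."""
    counts = status["metadata"]["counts"]
    # Collect the row numbers of entries that were ignored during import.
    ignored_rows = [item["row_number"] for item in status.get("ignored", [])]
    return "{status}: {imported}/{total} imported, {failed} failed, ignored rows: {rows}".format(
        status=status["metadata"]["status"],
        imported=counts["imported"],
        total=counts["total"],
        failed=counts["failed"],
        rows=ignored_rows,
    )
```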
We strongly suggest limiting the number of metadata fields to 6-10 and not importing data more frequently than every 15 minutes.