Bosch IoT Insights

Pipelines

This menu item is not available for Free plan users.

In Bosch IoT Insights, pipelines are the key functionality for processing data. A pipeline can be seen as a configurable piece of logic between your incoming data and the database.

Pipelines provide a basic set of building blocks that you can use to tailor the data to your specific needs. Data sets can be decoded using decoder formats, such as ODX, FIBEX, A2L, and DBC. Each data set is checked for errors, omissions, and inconsistencies. Before processing, the data sets are cleaned up to ensure their validity and suitability for further processing. The data sets are then transformed into a uniform structure.

[Image: List of pipelines]

This chapter details the following topics:

  • Prerequisites

  • Pipeline components

  • Processing architecture

Prerequisites

  • You are assigned to the Power User role.

  • For Pay As You Go plan projects: If dedicated pipelines are required, they must be activated by an Admin:

    [Image: Activating dedicated pipelines]

Pipeline components

A pipeline consists of three components:

  • Pipeline App

  • Pipeline versions

  • Processing steps

Pipeline App

The Pipeline App is a container in which code is stored and executed. It is assigned a set of instances, RAM, and disk space. The processing speed can be influenced by scaling the app up or down. A higher memory limit can also increase CPU performance, because when the memory limit is increased, more CPU cycles are reserved for the related pipeline.

Pipeline versions

A pipeline version holds a specific configuration of the pipeline. Such a version is created automatically by the system every time a configuration is saved or the pipeline information is updated. For each pipeline, multiple pipeline versions can be stored, but only one can be active. It is also possible to revert to a former version of a stopped pipeline. Refer to Managing pipeline versions.

Processing steps

Processing steps are a set of checkpoints that your data passes through before it is stored in the database. Several processing steps are provided; they can be used as templates and extended.

Bosch IoT Insights provides a Default Pipeline that consists of a pipeline version and the three processing steps Input Step, Parser Step, and Output Step. The incoming data passes through each of these steps.

Currently, the following steps are available:

  • Input Step
    The first step of the pipeline. A MongoDB filter can be defined that determines which incoming data is processed (see the filter sketch after this list).

  • Parser Step
    A parser transforms incoming data into exchange data. A set of parser functions is available, such as JSON, ZIP file, and CSV.

  • Custom Step
    A code artifact can be uploaded that controls the data processing. A code artifact is an assembly of code and project files that form the business logic. Currently, Java and Python are supported (see the sketch after this list).
    In the Examples chapter, you can find information on how to configure the custom step, refer to Pipelines: Configuring the custom step.

  • Output Step
    The last step of the pipeline. The collection in which the processed data is stored can be specified.
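
For illustration, the following is a minimal sketch of an Input Step filter, written as a Python dict in MongoDB query syntax. The field names used here (metaData.deviceType, receivedAt) are hypothetical; the actual fields depend on the structure of your input documents.

    # Hypothetical Input Step filter in MongoDB query syntax.
    # Field names are illustrative; adapt them to your input documents.
    input_step_filter = {
        "metaData.deviceType": "sensor",                 # process only sensor documents
        "receivedAt": {"$gte": "2024-01-01T00:00:00Z"},  # skip data received before this date
    }

Only documents matching the filter are passed on to the subsequent steps.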
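
Similarly, a Python code artifact for a Custom Step could look like the following minimal sketch. The entry-point name process and its signature are assumptions made for illustration; the actual contract for a code artifact is defined by the pipeline runtime, refer to Pipelines: Configuring the custom step.

    # Hypothetical custom step: the entry-point name and signature are assumptions;
    # the actual contract is defined by the pipeline runtime.
    def process(exchange_data: dict) -> dict:
        """Enrich each record before it reaches the Output Step."""
        payload = exchange_data.get("payload", {})
        # Example business logic: convert a raw temperature reading to degrees Celsius.
        if "temperature_raw" in payload:
            payload["temperature_celsius"] = payload["temperature_raw"] / 10.0
        exchange_data["payload"] = payload
        return exchange_data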

Processing architecture

The basic data flow for a pipeline is shown in the following. Data is sent from a device or via an HTTP call to Bosch IoT Insights. First, the raw data is stored in the Input Data collection. The pipeline's Input Step then decides how to process the data. You can then upload custom code or use other steps.

[Image: Processing pipeline architecture]
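
As a minimal sketch of the first part of this flow, the following Python snippet sends a payload via HTTP. The endpoint path, project name, and credentials are placeholders, not the confirmed API of the service; refer to the HTTP data upload documentation of your project for the actual values.

    import requests

    # Hypothetical ingestion call: URL, project name, and credentials are placeholders.
    response = requests.post(
        "https://bosch-iot-insights.com/data-recorder-service/v2/my-project",
        json={"deviceId": "device-001", "temperature_raw": 215},
        auth=("technical-user", "password"),
    )
    response.raise_for_status()  # the raw payload is now in the Input Data collection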