
Intel® Edge Insights for Industrial

Important

Intel® Edge Insights for Industrial (EII) features must be enabled in the ECI image before EII can be used. To create an ECI image that contains these features, select the Intel® EII feature option during image setup. See section Building ECI for more information.

../_images/option_eii.png

Intel® Edge Insights for Industrial (EII) enables the rapid deployment of solutions aimed at finding and revealing insights on compute devices outside data centers. The title of the software itself alludes to its intended purposes:

  • Edge – systems existing outside of a data center, and

  • Insights – understanding relationships.

This software consists of a set of pre-integrated ingredients optimized for Intel® architecture. It includes modules that enable data collection, storage, and analytics for both time-series and video data, as well as the ability to act on these insights by sending downstream commands to tools or devices.

Attention

Intel® Edge Insights for Industrial is an independent product. For help and support with Intel® Edge Insights for Industrial, please refer to the official Intel® Edge Insights for Industrial documentation and support resources.

Intel® EII Overview

By way of analogy, Intel’s EII includes both northbound and southbound data connections. As shown in the figure below, supervisory applications, such as Manufacturing Execution Systems (MES), Work In Progress (WIP) management systems, deep learning training systems, and the cloud (whether on-premises or in a data center), make northbound connections to the EII. Southbound connections from the EII are typically made to IoT devices, such as a programmable logic controller (PLC), camera, or other sensors and actuators.

The topology of the northbound relationship with MES, WIP management systems, or cloud applications is typically star-shaped (hub and spoke), meaning that multiple Edge Insight nodes communicate with one or more of these supervisory systems.

Incremental learning, for example, takes full advantage of this northbound relationship. In solutions that leverage deep learning, offline learning is necessary to fine-tune or continuously improve the algorithm. This can be done by sending the insight results to an on-premises or cloud-based training framework, periodically retraining the model with the full dataset, and then updating the model on the EII. The figure below provides an example of this sequence.

../_images/destinations.png

Note

It is important to use the full dataset, including the original data used during initial training of the model; otherwise, the model may overfit. For more information on model development best practices, see https://www.intel.com/content/www/us/en/developer/learn/course-machine-learning.html

Intel® EII Architecture

It’s best to think of Intel® EII as a set of containers. The figure below depicts these containers as dashed lines around the components each container holds. The high-level functions of EII are data ingestion (video and time-series), data storage, data analytics, and data and device management.

../_images/architecture.png

This EII configuration is designed for deployment near the site of data generation (for example, a PLC, robotic arm, tool, video camera, or sensor). The EII Docker* containers perform video and time-series data ingestion and real-time [1] analytics, and allow closed-loop control [2].

Note

Real-time measurements, as tested by Intel, are as low as 50 milliseconds.

Note

EII closed-loop control functions do not provide deterministic control.

Intel® EII Prerequisites

The following section is applicable to:

../_images/target2.png

See the Intel® EII README.md for more information on prerequisites.

Note

A proxy will need to be configured if running the EII device behind a HTTP proxy server. It is recommended to consult Docker’s instructions, which can be found at: https://docs.docker.com/network/proxy/.
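The Docker-side proxy settings referenced in Docker’s instructions can be supplied through a systemd drop-in. As a sketch (the proxy URL below is a placeholder; substitute your own proxy address and port), the file /etc/systemd/system/docker.service.d/http-proxy.conf would contain:

```
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After editing the drop-in, run `systemctl daemon-reload` followed by `systemctl restart docker` so the settings take effect.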

Intel® EII Minimum System Requirements

  • At least 16GB of RAM

  • At least 64GB of available hard drive space

  • A working internet connection
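The requirements above can be checked before installation. The following is a rough pre-flight sketch (the 16GB RAM and 64GB disk thresholds come from the list above; the intel.com URL is used only as a reachability probe):

```shell
#!/bin/sh
# Compare installed RAM and free root-filesystem space against the
# EII minimums, then probe outbound connectivity.

ram_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
disk_kb=$(df --output=avail -k / | tail -n 1)

[ "$ram_kb" -ge $((16 * 1024 * 1024)) ] && echo "RAM: OK" || echo "RAM: below 16GB"
[ "$disk_kb" -ge $((64 * 1024 * 1024)) ] && echo "Disk: OK" || echo "Disk: below 64GB"

# Short timeout so the connectivity check fails fast behind a firewall.
if wget -q -T 5 -t 1 --spider https://www.intel.com 2>/dev/null; then
    echo "Network: OK"
else
    echo "Network: unreachable"
fi
```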

Note

Depending on your analytics use case, you may need more memory. For example, if multiple camera streams connect to a single EII box, we recommend at least 16GB of memory. If you are running only time-series analytics at a low sample rate, 2GB may be sufficient.

Preparing Intel® EII

The following section is applicable to:

../_images/target2.png
  1. Ensure the Intel® EII Prerequisites are met.

  2. Build an ECI image with the Intel® Edge Insights for Industrial feature option enabled by selecting the Intel® EII feature option during image setup. See section Building ECI for more information.

  3. After building an ECI image with the Intel® EII feature option enabled, install the ECI image to a target system by following section: Installing ECI-B/R/X.

Deploying Intel® EII

The following section is applicable to:

../_images/target2.png
  1. Ensure section Preparing Intel® EII is completed.

  2. Login to the target system, navigate to /opt/Intel-EdgeInsights and verify that Docs, IEdgeInsights, and README.md exist.

    # cd /opt/Intel-EdgeInsights
    # ls
    
  3. Navigate to the build directory and install python requirements:

    # cd ./IEdgeInsights/build
    # pip3 install --ignore-installed -r requirements.txt
    

    Attention

    If deploying behind a network proxy, be sure to update the system configuration accordingly. Typical configuration files include:

    • ~/.docker/config.json

    • /etc/systemd/system/docker.service.d/https-proxy.conf

    • /etc/systemd/system/docker.service.d/http-proxy.conf

    It may also be necessary to configure Docker to use a specific DNS server by modifying file /etc/docker/daemon.json. Replace the placeholder below with your network's primary DNS server:

    {
      "dns": ["<primary_dns_server>", "8.8.8.8"]
    }
    

    See the Intel® EII README.md for more information on deploying behind a network proxy: /opt/Intel-EdgeInsights/IEdgeInsights/README.md

    If connection issues persist, it may be necessary to update the resolver configuration. Execute the following command to link the resolver configuration:

    # ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
    

    Attention

    Errors such as “Failed building wheel for pyrsistent” may occur. These occur because some packages already exist on the system outside of the /usr environment.

  4. Build microservice configurations:

    EII uses docker-compose and config files to deploy microservices. These files are auto-generated from a scenario file, which determines which microservices are deployed. Users may provide their own scenario, or select an example scenario (see tables below).

    Main Scenarios        Scenario File
    ------------------    --------------------
    Video + Timeseries    video-timeseries.yml
    Video                 video.yml
    Timeseries            time-series.yml

    Video Pipeline Scenarios         Scenario File
    -----------------------------    ----------------------------
    Video streaming                  video-streaming.yml
    Video streaming with history     video-streaming-storage.yml
    Video streaming with Azure       video-streaming-azure.yml
    Video streaming & custom UDFs    video-streaming-all-udfs.yml

    Build microservice configurations for a specific scenario by performing the following command:

    # python3 builder.py -f <scenario_file>
    

    For this example, the Video Streaming scenario was used:

    # python3 builder.py -f video-streaming.yml
    

    Note

    Additional capabilities for scenarios exist. For example, users may modify/add microservices or deploy multiple instances of microservices. For more information, please refer to the EII documentation: https://www.intel.com/content/www/us/en/developer/topic-technology/edge-5g/edge-solutions/industrial-recipes.html

  5. Provision EII:

    Provisioning must be completed before deploying EII onto any node. This process will start ETCD as a container and load it with the configuration required to run EII.

    Note

    By default, EII is provisioned in Secure mode. There is an optional Developer mode, which disables security. To provision EII in Developer mode, edit the environment file at /opt/Intel-EdgeInsights/IEdgeInsights/build/.env and change DEV_MODE=false to DEV_MODE=true.
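The edit described in the note can also be scripted. A minimal sketch, assuming the default install location from above (adjust ENV_FILE if your path differs):

```shell
# Switch EII provisioning from Secure mode to Developer mode by
# toggling DEV_MODE in the build .env file.
ENV_FILE="${ENV_FILE:-/opt/Intel-EdgeInsights/IEdgeInsights/build/.env}"
if [ -f "$ENV_FILE" ]; then
    sed -i 's/^DEV_MODE=false$/DEV_MODE=true/' "$ENV_FILE"
    grep '^DEV_MODE=' "$ENV_FILE"   # confirm the new value
fi
```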

    The following actions are performed as part of provisioning:

    • Load initial ETCD values from the configuration located at build/provision/config/eii_config.json.

    • [Secure mode] Generate ZMQ secret/public keys for each app and load them into ETCD.

    • Generate the required X509 certificates and put them in ETCD.

    • Generate server certificates with values 127.0.0.1, localhost, and HOST_IP loaded from build/.env.

    Note

    If HOST_IP is not defined in build/.env, then HOST_IP will be automatically populated based on the current system network at the time of certificate generation.

    To provision EII, perform the following commands:

    # cd ./provision
    # ./provision.sh ../docker-compose.yml
    

    Attention

    If the following ERROR is received: “Cannot uninstall ‘<package>’. It is a distutils installed project …”, install the listed packages manually by performing the following command (where <package> is the failed package, and <version> is the required package version):

    # pip3 install --ignore-installed <package>==<version>
    
  6. Deploy EII scenario:

    Navigate to the build directory and invoke docker-compose to build and deploy microservices for the EII scenario.

    # cd ..
    # docker-compose up --build -d
    

    Note

    Building the microservices may require longer than 30 minutes. The time required depends primarily on network and CPU performance.

    Attention

    If build errors occur, they can usually be mitigated by cleaning the existing images and containers and attempting the build again. Perform the following commands to clean all Docker images and containers:

    # docker kill $(docker ps -q)
    # docker rmi $(docker images -a -q)
    # docker system prune -a
    
  7. Verify correct deployment:

    After deploying the EII scenario, there should be a number of microservices actively running on the system. To check the status of these microservices, perform the following command:

    # docker ps
    

    The command will output a table of currently running microservices. Examine the STATUS column, and verify that each microservice reports (healthy).
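Scanning the STATUS column by eye can be tedious with many containers. A small helper sketch (the `check_unhealthy` function name is our own, not part of EII) filters the `docker ps` table down to rows that are not yet healthy:

```shell
# Print only the `docker ps` rows whose STATUS column does not
# contain "(healthy)" -- e.g. containers still "(health: starting)"
# or "(unhealthy)". Expects the docker ps table on stdin.
check_unhealthy() {
    awk 'NR > 1 && $0 !~ /\(healthy\)/ {print}'
}

# Usage on a live system:
#   docker ps | check_unhealthy
```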

    If running in a graphical desktop environment, run the following command to allow the microservices to access the X11 windowing server. If any visual microservices are active, they may begin rendering to a window after this command.

    # xhost +
    

    Note

    When deployed on a headless (non-graphical) system, microservices which execute graphical windowing API calls may fail.