Attention

You are viewing an older version of the documentation. The latest version is v3.3.

Intel® Edge Insights for Industrial

Intel® Edge Insights for Industrial (EII) enables the rapid deployment of solutions that find and reveal insights on compute devices outside data centers. The name itself alludes to its intended purpose: Edge refers to systems existing outside of a data center, and Insights refers to understanding relationships in the data.

This software consists of a set of pre-integrated ingredients optimized for Intel architecture. It includes modules that enable data collection, storage, and analytics for both time-series and video data, as well as the ability to act on these insights by sending downstream commands to tools or devices.

Intel® EII Overview

EII includes both northbound and southbound data connections. As shown in the figure below, supervisory applications, such as Manufacturing Execution Systems (MES), Work In Progress (WIP) management systems, Deep Learning Training Systems, and the cloud (whether on premises or in a data center), make northbound connections to EII. Southbound connections from EII are typically made to IoT devices, such as a programmable logic controller (PLC), camera, or other sensors and actuators.

The topology of the northbound relationship with MES, WIP management systems, or cloud applications is typically star-shaped (hub and spoke), meaning that multiple Edge Insights nodes communicate with one or more of these supervisory systems.

Incremental learning, for example, takes full advantage of this northbound relationship. In solutions that leverage deep learning, offline learning is necessary to fine-tune or continuously improve the model. This can be done by sending the insight results to an on-premises or cloud-based training framework, periodically retraining the model with the full dataset, and then updating the model on the EII node. The figure below provides an example of this sequence.

[Figure: Incremental learning sequence (destinations.png)]

Note: It is important to use the full dataset, including the original data used during initial training of the model; otherwise, the model may over-fit. For more information on model development best practices, refer to the Introduction to Machine Learning course at https://www.intel.com/content/www/us/en/developer/learn/course-machine-learning.html.

Intel® EII Architecture

Consider EII as a set of containers. The following figure depicts these containers as dashed lines around the components held by each container. The high-level functions of EII are data ingestion (video and time-series), data storage, data analytics, as well as data and device management.

[Figure: EII architecture (architecture.png)]

This EII configuration is designed for deployment near the site of data generation (for example, a PLC, robotic arm, tool, video camera, or sensor). The EII Docker* containers perform video and time-series data ingestion and real-time analytics, and allow closed-loop control (see the following note).

Note: Real-time measurements, as tested by Intel, are as low as 50 milliseconds. EII closed-loop control functions do not provide deterministic control.

Intel® EII Prerequisites

For more information on the prerequisites, refer to the EII README.md.

Note: You need to configure a proxy if the EII device runs behind an HTTP proxy server. It is recommended to refer to the Docker proxy configuration instructions.

Intel® EII Minimum System Requirements

  • At least 16 GB of RAM

  • At least 64 GB of available hard drive space

  • A working internet connection

Note: Depending on your analytics use case, you may need more memory. For example, if multiple camera streams connect to a single EII box, it is recommended to run with at least 16 GB of memory. However, if you are running only time-series analytics with a low sample rate, 2 GB may be sufficient.

Install Intel® EII

  1. Make sure that the Intel® EII Prerequisites are met.

  2. Intel® EII requires the Docker Engine. Install docker-ce on the target system:
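The docker-ce installation itself is not shown here; the following is a minimal sketch for an apt-based system, assuming the Docker APT repository has already been configured as described in the official Docker Engine installation guide:

```shell
# Sketch: install Docker Engine (docker-ce) from Docker's APT repository.
# Assumes the repository was already added per the Docker Engine install guide.
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

# Confirm the daemon is up before proceeding.
sudo docker info
```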

  3. Set up the ECI APT repository, then run either of the following commands to install this component:

    Install from meta-package
    $ sudo apt install eci-inference-eii
    
    Install from individual Deb packages
    $ sudo apt install eii
    

Deploy Intel® EII

The following section is applicable to:

[Image: generic target platforms (target_generic1.png)]

  1. Make sure that you have installed EII.

  2. Log in to the target system and navigate to /opt/Intel-EdgeInsights/IEdgeInsights:

    $ cd /opt/Intel-EdgeInsights/IEdgeInsights
    
  3. Navigate to the build directory and install the Python requirements:

    $ cd ./build
    $ pip3 install --ignore-installed -r requirements.txt
    

    Attention

    If deploying behind a network proxy, update the system configuration accordingly. Typical configuration files include:

    • ~/.docker/config.json

    • /etc/systemd/system/docker.service.d/https-proxy.conf

    • /etc/systemd/system/docker.service.d/http-proxy.conf
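For illustration, such a drop-in might look like the following, where proxy.example.com:3128 is a hypothetical proxy address (substitute your own):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf (example values)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```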

    It may also be necessary to configure Docker to use a specific DNS by modifying file /etc/docker/daemon.json:

    {
      "dns": ["<primary_DNS_server>", "8.8.8.8"]
    }
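Changes to /etc/docker/daemon.json and the systemd drop-in files take effect only after the Docker daemon is restarted:

```shell
# Reload systemd units (picks up drop-in changes) and restart Docker.
sudo systemctl daemon-reload
sudo systemctl restart docker
```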
    

    For more information on deploying behind a network proxy, refer to the EII README.md at /opt/Intel-EdgeInsights/IEdgeInsights/README.md.

    If connection issues persist, update the resolver configuration. Run the following command to link the resolver configuration:

    $ ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
    

    Attention

    Failures such as "Failed building wheel for pyrsistent" might occur. These happen because some packages already exist on the system outside of the /usr environment.

  4. Build microservice configurations.

    EII uses docker-compose and config files to deploy microservices. These files are auto-generated from a scenario file, which determines the microservices to be deployed. You can provide your own scenario file or select one of the example scenarios listed in the following tables.

    Main Scenarios

    Scenario                         Scenario File
    -------------------------------  -----------------------------
    Video + Timeseries               video-timeseries.yml
    Video                            video.yml
    Timeseries                       time-series.yml

    Video Pipeline Scenarios

    Scenario                         Scenario File
    -------------------------------  -----------------------------
    Video Streaming                  video-streaming.yml
    Video streaming with history     video-streaming-storage.yml
    Video streaming with Azure       video-streaming-azure.yml
    Video streaming & custom UDFS    video-streaming-all-udfs.yml

    Build microservice configurations for a specific scenario:

    $ python3 builder.py -f ./usecases/<scenario_file>
    

    This example uses the Video Streaming scenario:

    $ python3 builder.py -f ./usecases/video-streaming.yml
    

    Note: Additional capabilities for scenarios exist. For example, you may modify/add microservices or deploy multiple instances of microservices. For more information, refer to the EII documentation.

  5. Provision EII.

    Complete provisioning before deploying EII onto any node. This process will start ETCD as a container and load it with the configuration required to run EII.

    Note: By default, EII is provisioned in Secure mode. There is an optional Developer mode, which disables security. To provision EII in Developer mode, edit the environment file at /opt/Intel-EdgeInsights/IEdgeInsights/build/.env and change DEV_MODE=false to DEV_MODE=true.
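As a sketch, the edit can be made with sed; the path below is the default install location mentioned above (adjust if your installation differs):

```shell
# Switch EII to Developer mode by flipping DEV_MODE in build/.env.
ENV_FILE=/opt/Intel-EdgeInsights/IEdgeInsights/build/.env
sudo sed -i 's/^DEV_MODE=false$/DEV_MODE=true/' "$ENV_FILE"

# Confirm the change took effect.
grep '^DEV_MODE=' "$ENV_FILE"
```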

    The following actions will be performed as part of provisioning:

    • Load initial ETCD values from the configuration located at build/provision/config/eii_config.json.

    • [Secure mode] Generate ZMQ secret/public keys for each app and load them into ETCD.

    • Generate the required X509 certificates and put them into ETCD.

    • Generate server certificates with the values 127.0.0.1, localhost, and HOST_IP loaded from build/.env.

    Note: If HOST_IP is not defined in build/.env, then HOST_IP will be automatically populated based on the current system network at the time of certificate generation.
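To pin the server certificates to a fixed address instead, set HOST_IP explicitly in build/.env before provisioning; 192.168.1.10 below is a hypothetical address:

```ini
# build/.env (excerpt); 192.168.1.10 is a placeholder, use your node's IP.
HOST_IP=192.168.1.10
DEV_MODE=false
```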

    To provision EII, run the following commands:

    $ cd ./provision
    $ ./provision.sh ../docker-compose.yml
    

    Attention

    If the following error is received: “Cannot uninstall ‘<package>’. It is a distutils installed project …”, run the following command to install each listed package manually (where <package> is the failed package and <version> is the required package version):

    $ pip3 install --ignore-installed <package>==<version>
    
  6. Deploy EII scenario.

    Navigate to the build directory and invoke docker-compose to build and deploy microservices for the EII scenario.

    $ cd ..
    $ docker-compose up --build -d
    

    Note: Building the microservices may take longer than 30 minutes. The time required depends primarily on network and CPU performance.

    Attention

    If build errors occur, you can often resolve them by removing the existing images and containers and attempting the build again. Run the following commands to remove all Docker images and containers:

    $ docker kill $(docker ps -q)
    $ docker rmi $(docker images -a -q)
    $ docker system prune -a
    
  7. Verify correct deployment.

    After deploying the EII scenario, a number of microservices should be actively running on the system. To check the status of these microservices, run the following command:

    $ docker ps
    

    The command will output a table of currently running microservices. Examine the STATUS column, and verify that each microservice reports (healthy).
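To surface only failing containers, the standard docker ps filter and format flags can help; this is a convenience sketch, not part of the EII tooling:

```shell
# List only containers whose Docker health check currently reports unhealthy.
docker ps --filter "health=unhealthy" --format "{{.Names}}: {{.Status}}"
```

If the command prints nothing, every container with a health check is healthy.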

    If running in a graphical desktop environment (see Install Linux Desktop Environment), run the following command to allow the microservices to access the windowing server on the system. If any visual microservices are active, they may begin to output to a window after this command.

    $ xhost +
    

    Note: When deployed on a headless (non-graphical) system, microservices which execute graphical windowing API calls may fail.