Microservice Architecture¶
The features of Intel® Edge Controls for Industrial (Intel® ECI or ECI) are compatible with a microservice architecture. A microservice encapsulates a service in a lightweight unit of computation with defined resources and interfaces so that it can be orchestrated. The Intel ECI implementation relies on Docker.
Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and are thus more lightweight than virtual machines.
Docker* is used in the ECI image to allow containerization of various applications and services.
For more information, refer to the Docker documentation.
Docker¶
Terminology¶
The following terms are used in this document:
- `<image>`: The name of a container image, which is built with `docker build -t <image>` and can be instantiated.
- `<tag>`: A particular version of an image, forming its complete name as `<image>:<tag>`.
- `<container>`: The name of a container instantiated from an image with `docker run --name <container> <image>:<tag>`.
Install Docker¶
For any system that will use Docker, ensure that Docker (version 19.03.15 or higher) is installed and that the user is part of the `docker` group. For information on installing and configuring Docker, refer to the following:
Useful Docker Commands¶
| Description | Command |
|---|---|
| Build a Docker image from a directory containing a Dockerfile | `$ docker build . -t <image>:<tag>` |
| Save a Docker image into an archive | `$ docker save -o <image>.tar <image>:<tag>` |
| Load a Docker image | `$ docker load < <image>.tar` |
| Run a Docker container | `$ docker run --name <container> <image>:<tag>` |
| List the containers that are currently running | `$ docker ps` |
| Kill a container | `$ docker kill <container>` |
Build and Deploy Docker Images¶
With this procedure, a Docker `<image>` is built on a build machine, exported into a tar archive, and then loaded onto the target. The Docker image can then be instantiated on the target as `<container>`.
Build Docker Image¶
Navigate to the directory containing the `Dockerfile`. From a terminal, run the following command, replacing `<image>` and `<tag>` with values of your choice:

    $ docker build -t <image>:<tag> .

Note: The "." at the end of the command is intentional.
Run the following command to save the Docker `<image>` as a tar archive; replace `<image>` and `<tag>` with the values used in the previous step:

    $ docker save -o <container>.tar <image>:<tag>
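For illustration, a minimal Dockerfile such as the following could be built and saved with the commands above. The content is hypothetical and is not one of the ECI templates:

```dockerfile
# Minimal illustrative Dockerfile (hypothetical content, not an ECI template)
FROM ubuntu:18.04
RUN apt-get update && apt-get -y install iputils-ping
CMD ["ping", "-c", "4", "127.0.0.1"]
```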
Deploy Docker Image¶
Transfer the Docker image tar archive to the target system.
Run the following command to load the `<image>` from the tar archive; replace `<container>` with the name of the tar archive:

    $ docker load < <container>.tar
Run the following command to check whether the image has been loaded correctly:
$ docker images
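File transfers occasionally corrupt large archives, and a checksum comparison before loading catches this early. A minimal sketch, using a stand-in file in place of the real archive:

```shell
# Stand-in for the image archive; on a real system this would be the
# <container>.tar produced by `docker save`.
printf 'example image archive' > /tmp/example.tar

# On the build machine: record the checksum next to the archive.
sha256sum /tmp/example.tar > /tmp/example.tar.sha256

# On the target, after transferring both files: verify before `docker load`.
sha256sum -c /tmp/example.tar.sha256
```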
Optimizations for Docker Containers¶
There are many possible optimizations that can be performed to improve the determinism of a container. The following script applies a general set of optimizations which are typically applicable to most use cases.
Note

The script below assumes that the `pqos` tool has been installed. Refer to the following section for more information about the `pqos` tool and installation instructions: Intel® Resource Director Technology (Intel® RDT)

    # Disable Machine Check
    echo 0 > /sys/devices/system/machinecheck/machinecheck0/check_interval

    # Disable RT runtime limit
    echo -1 > /proc/sys/kernel/sched_rt_runtime_us

    output=$(cpuid | grep 'cache allocation technology' | grep 'true')
    if [[ -z ${output} ]]; then
        echo "This processor does not support Cache Allocation Technology (CAT)."
        echo "Performance will be degraded!"
    else
        # Configure cache partitioning
        # Reset allocation
        pqos -R
        # Define the allocation classes for the last-level cache (llc)
        # Class 0 is allocated exclusive access to the first half of the llc
        # Class 1 is allocated exclusive access to the second half of the llc
        pqos -e 'llc:0=0x0f;llc:1=0xf0'
        # Associate core 0 with class 0, and cores 1-3 with class 1
        pqos -a 'llc:0=0;llc:1=1,2,3'
    fi

    # Change affinity of all interrupts to core 0
    for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk {'print $1'} | sed 's/:$//'`; do
        # Timer
        if [ "$i" = "0" ]; then
            continue
        fi
        # Cascade
        if [ "$i" = "2" ]; then
            continue
        fi
        echo setting $i to affine for core 0
        echo 1 > /proc/irq/$i/smp_affinity
    done

    # Offload RCU
    for i in `pgrep rcu`; do taskset -pc 0 $i > /dev/null; done

    # Start container with Docker Daemon, where <name> and <image> are changed per application
    # Assign container to cores 1-3 and add capability to run tasks at highest priority
    docker run -d --name <name> --cpuset-cpus 1-3 --ulimit rtprio=99 --cap-add=sys_nice <image>
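The values written to `smp_affinity` in the script above are hexadecimal CPU bitmasks (core 0 is mask `1`). A small helper, shown here as an illustrative sketch, makes the mapping from core number to mask explicit:

```shell
# Print the hexadecimal smp_affinity bitmask that pins an IRQ to one core.
# core_mask N computes 1 << N, i.e. the bit for CPU core N.
core_mask() {
    printf '%x\n' $(( 1 << $1 ))
}

core_mask 0   # core 0 -> mask 1 (the value used by the script above)
core_mask 3   # core 3 -> mask 8
```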
See also: Docker Runtime Optimizations.
Microservice Dockerfile Templates for ECI¶
The ECI release provides microservice Dockerfile templates for convenience. The ECI release archive (`release-eci_#.#.zip`) contains the `Dockerfiles.tar.gz` archive within the `Edge-Controls-for-Industrial` directory. This archive contains the files required to build the ECI microservices as containers, together with their dependencies.
Prepare Build System for Microservices¶
Do the following to prepare the build system to create the microservice Dockerfile images:
Install Docker 1.17+ on the Linux* build system. Run the following commands:

    $ sudo apt-get update
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io
Use the Intel edgesoftware utility to download the ECI release archive, if not done already.
Copy the `Dockerfiles.tar.gz` archive from the ECI release archive (`release-eci_#.#.zip`) to the Linux build system. This archive is located within the `Edge-Controls-for-Industrial` directory of the ECI release archive as follows:

    └── Edge-Controls-for-Industrial
        ├── Codesys_Example_Applications.zip
        ├── Dockerfiles.tar.gz
        └── eci-release.tar.gz
Open a terminal on the Linux build system and change to the directory containing the `Dockerfiles.tar.gz` archive.

From the terminal, run the following command to extract the `Dockerfiles.tar.gz` archive:

    $ tar -xzvf ./Dockerfiles.tar.gz
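Before extracting, the archive contents can be listed without unpacking anything. The sketch below builds a stand-in archive first, since the real `Dockerfiles.tar.gz` is not assumed to be present:

```shell
# Build a stand-in archive (illustration only; use the real Dockerfiles.tar.gz).
mkdir -p /tmp/eci-demo/Dockerfiles
echo 'FROM scratch' > /tmp/eci-demo/Dockerfiles/Dockerfile.example
tar -czf /tmp/eci-demo/Dockerfiles.tar.gz -C /tmp/eci-demo Dockerfiles

# List the archive contents without extracting:
tar -tzf /tmp/eci-demo/Dockerfiles.tar.gz
```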
Build Microservice Docker Images¶
Explore the following topics to learn about the various microservice templates that ECI offers:
| Section | Description |
|---|---|
|  | Dockerfile template for Edge Control Protocol Bridge |
|  | Dockerfile template for CODESYS Linux Runtime |
Docker Sanity Checks¶
Sanity Check #1: Docker Daemon¶
Run the following command:

    $ docker ps

The expected results are as follows:

    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Sanity Check #2: Build TSN Image¶
Create or modify a file named ~/Dockerfile, and copy the following text into the file:
    FROM ubuntu:18.04
    RUN apt update
    RUN apt -y install build-essential iputils-ping iproute2 git
    RUN git clone https://github.com/intel/iotg_tsn_ref_sw.git
    RUN cd iotg_tsn_ref_sw && git checkout tags/MR4rt-B-01 && cd sample-app-taprio && make
Save the Dockerfile and run the following commands:
    $ cd ~
    $ docker build -t tsn:ubuntu .
    $ docker image ls
The expected results are as follows:
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    tsn                 ubuntu              97d0e4730406        11 days ago         456MB
    ubuntu              latest              ccc6e87d482b        4 weeks ago         64.2MB
Sanity Check #3: Bridge Virtual Ethernet to Physical Ethernet¶
Run the following commands:
    $ docker network create -d macvlan --subnet=192.168.0.0/24 -o macvlan_mode=bridge -o parent=${interface_name} bridge_phy
    $ docker network ls
Note: `${interface_name}` is any physical Ethernet interface name.

The expected results are as follows:
    NETWORK ID          NAME                DRIVER              SCOPE
    86b23e5bdc5d        bridge              bridge              local
    3a2c6855f291        bridge_phy          macvlan             local
    40b8ac2b5880        host                host                local
    1cf678554442        none                null                local
Run the following commands:
    $ ip link add bridge.docker link ${interface_name} type macvlan mode bridge
    $ ip address add 192.168.0.100/24 dev bridge.docker
    $ ip link set dev bridge.docker up
    $ ip address show dev bridge.docker
The expected results are as follows:
    116: bridge.docker@${interface_name}: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 36:73:72:85:b2:1f brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.100/24 brd 192.168.0.255 scope global bridge.docker
           valid_lft forever preferred_lft forever
        inet6 fe80::3473:72ff:fe85:b21f/64 scope link
           valid_lft forever preferred_lft forever
Run the following commands:
    $ docker run --net=bridge_phy --ip=192.168.0.101 -it --name tsn tsn:ubuntu
    $ ping ${host_ip}
Note: `${host_ip}` is any host IP address on the local network.

The expected results are as follows:
    PING ${host_ip} (${host_ip}) 56(84) bytes of data.
    64 bytes from ${host_ip}: icmp_seq=1 ttl=64 time=1.27 ms
    64 bytes from ${host_ip}: icmp_seq=2 ttl=64 time=0.897 ms
    64 bytes from ${host_ip}: icmp_seq=3 ttl=64 time=0.434 ms
    64 bytes from ${host_ip}: icmp_seq=4 ttl=64 time=0.436 ms
Sanity Check #4: Join a Kubernetes (k8s) Cluster¶
Make sure Docker is configured properly to pull images from external servers. If needed, include the proxy configuration as described in the Control Docker with systemd section.
Collect the following information from the Kubernetes master:

- connection string in the form `IP:PORT`
- token
- discovery token hash
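If only the cluster CA certificate is at hand, the discovery token hash can be derived from it: per the kubeadm documentation, it is the SHA-256 digest of the CA's DER-encoded public key. The sketch below generates a throwaway certificate to stand in for the master's `/etc/kubernetes/pki/ca.crt`:

```shell
# Throwaway self-signed cert standing in for the cluster CA (illustration only;
# on a real master, read /etc/kubernetes/pki/ca.crt instead).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

# Derive the discovery token hash: SHA-256 of the DER-encoded public key.
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'
```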
Run the following command:
    $ kubeadm join <connection string> --token <token> --discovery-token-ca-cert-hash <discovery token hash>
When the command completes, the node becomes visible to the master. The expected results are as follows:
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
            [WARNING SystemVerification]: missing optional cgroups: hugetlb
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:
    * Certificate signing request was sent to api-server and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.