Microservice Architecture¶
The features of ECI are compatible with a microservice architecture. A microservice encapsulates a service into a lightweight unit of computation with defined resources and interfaces so that it can be orchestrated. The implementation in ECI relies on Docker.
Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines. Documentation for Docker is available at the following link: https://docs.docker.com Docker is used in the ECI image to allow containerization of various applications and services.
Docker¶
Terminology
The following terms are used in this document:
<image>
    The name of a container image, which is built with docker build -t <image> and can be instantiated.
<tag>
    A particular version of an image, forming its complete name as <image>:<tag>.
<container>
    The name of a container instantiated from an image with docker run --name <container> <image>:<tag>.
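For example, an image named and tagged acme-app:1.0 (the names acme-app, 1.0, and acme1 below are placeholders chosen for illustration, not names from the ECI release) is built and instantiated as follows:
$ docker build -t acme-app:1.0 .
$ docker run --name acme1 acme-app:1.0
Here acme-app is the <image>, 1.0 is the <tag>, and acme1 is the <container>.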
Docker installation¶
Install the following software on the Linux system which will be used to build the containers:
Docker 1.17+
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
Useful Docker commands¶
To build a Docker container from a directory containing a Dockerfile:
$ docker build . -t <image>:<tag>
To save a Docker image into an archive:
$ docker save -o <image>.tar <image>:<tag>
To load a Docker image:
$ docker load < <image>.tar
To run a Docker container:
$ docker run --name <container> <image>:<tag>
To list currently running containers:
$ docker ps
To kill a container:
$ docker kill <container>
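Putting these commands together, a typical image lifecycle from build through teardown might look as follows (my-app, 1.0, and my-app1 are placeholder names for illustration; save and load would normally happen on the build machine and target, respectively):
$ docker build . -t my-app:1.0
$ docker save -o my-app.tar my-app:1.0
$ docker load < my-app.tar
$ docker run --name my-app1 my-app:1.0
$ docker ps
$ docker kill my-app1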
Building and deploying Docker images¶
With this procedure, a Docker <image> is built on a build machine, exported into a tar archive, and then loaded onto the target. The Docker image can then be instantiated on the target as <container>.
Building a Docker image¶
Navigate to the directory containing the Dockerfile of interest. From a terminal, perform the following command, replacing <image> and <tag> with values of your choice:
$ docker build -t <image>:<tag> .
Note
The “.” at the end of the command is intentional.
Save the Docker <image> as a tar archive by performing the following command, replacing <image> and <tag> with the values used in the previous step:
$ docker save -o <image>.tar <image>:<tag>
Deploying a Docker image¶
Transfer the Docker image tar archive to the target system.
Load the image from the tar archive by performing the following command, replacing <image> with the name of the tar archive:
$ docker load < <image>.tar
Check that the image has been loaded correctly by performing the following command:
$ docker images
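The loaded image should appear in the listing, for example (illustrative output only; the image ID, age, and size below are placeholders, not values from the ECI release):
REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
<image>             <tag>     0123456789ab   2 hours ago    120MB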
Optimizations for Docker Containers¶
There are many possible optimizations that can be performed to improve the determinism of a container. The script presented below applies a general set of optimizations which are typically applicable to most use cases.
# Disable Machine Check
echo 0 > /sys/devices/system/machinecheck/machinecheck0/check_interval

# Disable RT runtime limit
echo -1 > /proc/sys/kernel/sched_rt_runtime_us

output=$(cpuid | grep 'cache allocation technology' | grep 'true')
if [[ -z ${output} ]]; then
    echo "This processor does not support Cache Allocation Technology (CAT)."
    echo "Performance will be degraded!"
else
    # Configure cache partitioning
    # Reset allocation
    pqos -R
    # Define the allocation classes for the last-level cache (llc)
    # Class 0 is allocated exclusive access to the first half of the llc
    # Class 1 is allocated exclusive access to the second half of the llc
    pqos -e 'llc:0=0x0f;llc:1=0xf0'
    # Associate core 0 with class 0, and cores 1-3 with class 1
    pqos -a 'llc:0=0;llc:1=1,2,3'
fi

# Change affinity of all interrupts to core 0
for i in $(cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk '{print $1}' | sed 's/:$//'); do
    # Timer
    if [ "$i" = "0" ]; then
        continue
    fi
    # Cascade
    if [ "$i" = "2" ]; then
        continue
    fi
    echo "setting $i to affine for core 0"
    echo 1 > /proc/irq/$i/smp_affinity
done

# Offload RCU callback threads to core 0
for i in $(pgrep rcu); do taskset -pc 0 $i > /dev/null; done

# Start container with Docker daemon, where <name> and <image> are changed per application
# Assign container to cores 1-3 and add capability to run tasks at highest priority
docker run -d --name <name> --cpuset-cpus 1-3 --ulimit rtprio=99 --cap-add=sys_nice <image>
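After running the script, the key settings can be spot-checked from a shell. A minimal verification sketch (assuming the pqos utility from the intel-cmt-cat package is installed, which the script above already requires):
$ cat /proc/sys/kernel/sched_rt_runtime_us    # should print -1
$ pqos -s                                     # show current CAT allocation and core association
$ grep . /proc/irq/*/smp_affinity             # masks should read 1 (core 0), except IRQs 0 and 2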
Microservice Dockerfile Templates for ECI¶
The ECI release provides a variety of microservice Dockerfile templates for convenience. In the ECI release archive (release-eci_#.#.zip), within the Edge-Controls-for-Industrial directory, there is an archive named Dockerfiles.tar.gz. This archive contains the files required to build the ECI microservices as containers, together with their dependencies.
Prepare the Build System for Microservices¶
Follow the steps below to prepare the build system to create the microservice Docker images:
Install Docker 1.17+ on the Linux build system by performing the following commands:
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
Copy the Dockerfiles.tar.gz archive from the ECI release archive (release-eci_#.#.zip) to the Linux build system. The archive is located within the Edge-Controls-for-Industrial directory as follows:
└── Edge-Controls-for-Industrial
    ├── Codesys_Example_Applications.zip
    ├── Dockerfiles.tar.gz
    └── eci-release.tar.gz
Open a terminal on the Linux build system and change to the directory containing the Dockerfiles.tar.gz archive.
From the terminal, extract the Dockerfiles.tar.gz archive by performing the following command:
$ tar -xzvf ./Dockerfiles.tar.gz
Building Microservice Docker Images¶
Explore the following topics to learn about the various microservice templates that ECI offers:
The available templates and container images are:
Dockerfile template for CODESYS Linux Runtime
Dockerfile template for Edge Control Protocol Bridge
Dockerfile template for Display Server using Xorg
Dockerfile template for Firefox web browser
Dockerfile template for the Qt5 toolkit
Dockerfile template for Time Sensitive Networking
Yocto poky based container image for Benchmarking
Yocto poky based container image with TSN tools to run over veth
Containerized BPF Compiler Collection (BCC): https://github.com/iovisor/bcc
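As a generic illustration of how a template is built once the archive has been extracted (a sketch only: the directory layout is an assumption, <template-directory> is a placeholder, and each template's section gives the exact build instructions):
$ cd Dockerfiles/<template-directory>
$ docker build -t <image>:<tag> .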
Resource management with cgroups¶
cgroups¶
Linux control groups, or cgroups, provide resource management on Linux. The hierarchy of cgroup controllers and active cgroups is mounted as a virtual file system in the /sys/fs/cgroup directory. Resources can be limited or throttled per cgroup.
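For example, the mounted controllers and their tunables can be inspected directly through the file system (illustrative output; a cgroup-v1 layout, as used by the configuration below, is assumed):
$ ls /sys/fs/cgroup
cpu,cpuacct  cpuset  memory  net_cls  ...
$ cat /sys/fs/cgroup/cpuset/cpuset.cpus
0-3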
QoS classes¶
cgroups allow the definition of QoS classes with a different resource allocation strategy for each microservice, depending on its requirements and its relevance for proper system operation. There are many ways to create custom cgroups; one of them is to use the cgroup utilities provided by ECI. A configuration file sets values for each cgroup limit and throttle provided by the kernel. These controls are listed and documented in the kernel documentation: https://www.kernel.org/doc/Documentation/cgroup-v1/
As an example, create a configuration file under /etc/cgconfig.conf with the following content:
group qos1 {
cpuset {
cpuset.cpus = 0-2;
}
net_cls {
net_cls.classid = 0x100001;
}
memory {
memory.limit_in_bytes="512M";
memory.memsw.limit_in_bytes="512M";
}
}
group qos2 {
cpuset {
cpuset.cpus = 3;
}
net_cls {
net_cls.classid = 0x100002;
}
memory {
memory.limit_in_bytes="64M";
memory.memsw.limit_in_bytes="64M";
}
}
This will create two new cgroups called qos1 and qos2, each defining a different set of CPU and memory resource policies. It also defines custom network classes, which will enable class-based filtering rules.
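For instance, the net_cls class IDs defined above can be matched by traffic-control filters. A minimal sketch (the interface eth0 and the rates are placeholders; the tc class handles 10:1 and 10:2 correspond to the classid values 0x100001 and 0x100002 in the configuration above):
$ tc qdisc add dev eth0 root handle 10: htb
$ tc class add dev eth0 parent 10: classid 10:1 htb rate 500mbit
$ tc class add dev eth0 parent 10: classid 10:2 htb rate 100mbit
$ tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup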
In order to read this file and create cgroups from the definitions, run the following command:
$ cgconfigparser -l /etc/cgconfig.conf
After completion, the cgroups can be found in the mounted cgroup virtual file system:
$ find /sys/fs/cgroup/ -name qos[12]
/sys/fs/cgroup/cpu,cpuacct/qos2
/sys/fs/cgroup/cpu,cpuacct/qos1
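The configured limits can be verified by reading the corresponding controller files (illustrative; note that net_cls.classid is reported in decimal, so 0x100001 appears as 1048577):
$ cat /sys/fs/cgroup/cpuset/qos1/cpuset.cpus
0-2
$ cat /sys/fs/cgroup/net_cls/qos1/net_cls.classid
1048577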
Setting the cgroup of a container¶
The cgroup (here, a QoS class) of a container (here, a microservice) can be specified when it is instantiated with docker run by adding the --cgroup-parent argument. The processes running in the container will inherit the cgroup properties. For example, to instantiate a new container bound to the cgroup qos1 defined above, run:
$ docker run --cgroup-parent=qos1 <image>:<tag>
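To confirm that the container's processes were placed in the expected cgroup, the cgroup membership of the container's init process can be inspected (a sketch; <container> is the name or ID Docker assigned):
$ pid=$(docker inspect --format '{{.State.Pid}}' <container>)
$ grep qos1 /proc/${pid}/cgroup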
Docker Sanity-Check Testing¶
Sanity-Check #1: Docker daemon¶
Step 1: Perform the following command:
$ docker ps
The expected results are as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
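If the command fails, the Docker daemon may not be running. On systemd-based systems (an assumption; adjust for the init system in use), the daemon can be checked and started with:
$ systemctl status docker
$ sudo systemctl start docker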
Sanity-Check #2: Build TSN Image¶
Step 1: Create or modify a file named ~/Dockerfile, and copy the following text into the file:
FROM ubuntu:18.04
RUN apt update
RUN apt -y install build-essential iputils-ping iproute2 git
RUN git clone https://github.com/intel/iotg_tsn_ref_sw.git
RUN cd iotg_tsn_ref_sw && git checkout tags/MR4rt-B-01 && cd sample-app-taprio && make
Step 2: Save the Dockerfile, and perform the following commands:
$ cd ~
$ docker build -t tsn:ubuntu .
$ docker image ls
The expected results are as follows:
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
tsn          ubuntu   97d0e4730406   11 days ago   456MB
ubuntu       latest   ccc6e87d482b   4 weeks ago   64.2MB
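As an optional spot-check, confirm that the sample application was built inside the image (a sketch; the path follows from the git clone step in the Dockerfile above):
$ docker run --rm tsn:ubuntu ls iotg_tsn_ref_sw/sample-app-taprio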
Sanity-Check #3: Bridge Virtual Ethernet to Physical Ethernet¶
Step 1: Perform the following commands:
$ docker network create -d macvlan --subnet=192.168.0.0/24 -o macvlan_mode=bridge -o parent=${interface_name} bridge_phy
$ docker network ls
Note: ${interface_name} is the name of any physical Ethernet interface. The expected results are as follows:
NETWORK ID     NAME         DRIVER    SCOPE
86b23e5bdc5d   bridge       bridge    local
3a2c6855f291   bridge_phy   macvlan   local
40b8ac2b5880   host         host      local
1cf678554442   none         null      local
Step 2: Perform the following commands:
$ ip link add bridge.docker link ${interface_name} type macvlan mode bridge
$ ip address add 192.168.0.100/24 dev bridge.docker
$ ip link set dev bridge.docker up
$ ip address show dev bridge.docker
The expected results are as follows:
116: bridge.docker@${interface_name}: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:73:72:85:b2:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.100/24 brd 192.168.0.255 scope global bridge.docker
       valid_lft forever preferred_lft forever
    inet6 fe80::3473:72ff:fe85:b21f/64 scope link
       valid_lft forever preferred_lft forever
Step 3: Perform the following commands:
$ docker run --net=bridge_phy --ip=192.168.0.101 -it --name tsn tsn:ubuntu
$ ping ${host_ip}
Note: ${host_ip} is the IP address of any host on the local network. The expected results are as follows:
PING ${host_ip} (${host_ip}) 56(84) bytes of data.
64 bytes from ${host_ip}: icmp_seq=1 ttl=64 time=1.27 ms
64 bytes from ${host_ip}: icmp_seq=2 ttl=64 time=0.897 ms
64 bytes from ${host_ip}: icmp_seq=3 ttl=64 time=0.434 ms
64 bytes from ${host_ip}: icmp_seq=4 ttl=64 time=0.436 ms
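To tear down the artifacts created by this sanity-check afterwards (names match those used in the steps above):
$ docker rm -f tsn
$ docker network rm bridge_phy
$ ip link del bridge.docker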
Sanity-Check #4: Join a Kubernetes (k8s) cluster¶
Step 1: Make sure Docker is configured properly to pull images from external servers. If needed, this includes the proxy configuration as described here: https://docs.docker.com/config/daemon/systemd/.
Step 2: Collect the following information from the Kubernetes master (a shortcut for obtaining all three values at once is shown after this list):
connection string in the form IP:PORT
token
discovery token hash
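On the master, the kubeadm token create command can print a ready-made join command containing all three values:
$ kubeadm token create --print-join-command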
Step 3: Perform the following command:
$ kubeadm join <connection string> --token <token> --discovery-token-ca-cert-hash <discovery token hash>
When the command completes, the node becomes visible to the master. The expected results are as follows:
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to api-server and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.