Microservice: TSN with VNET using TxTime-Assisted Mode

This container demonstrates how to use a VNET for transferring time-critical traffic. For this we use the so-called txtime-assisted mode of the TAPRIO queuing discipline, which automatically adds a transmit launch time (TxTime) to enqueued packets according to their designated time slot in the network cycle. In addition, we configure the VNET created by the Docker runtime environment so that a physical interface (Intel i210) is attached to the bridge. This is also the interface on which the TAPRIO queuing discipline is configured in txtime-assisted mode, with an ETF qdisc enabled on each of the four TX queues to manage the correctly ordered transmission of frames according to their launch time.
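
To illustrate what this looks like at the qdisc level, the following sketch configures TAPRIO in txtime-assisted mode (flags 0x1) with an ETF child qdisc on each of the four TX queues. The interface name, priority map, base-time, and schedule entries are placeholder values chosen for illustration only; the container's configuration scripts define the actual schedule used by this microservice.

    # Illustration only: interface name, schedule, and priority map are placeholders.
    IFACE="enp1s0"
    BASE_TIME=$(date +%s%N)    # example base-time; a real setup aligns this with the network cycle

    # TAPRIO root qdisc in txtime-assisted mode (flags 0x1): the qdisc stamps
    # each packet with a launch time derived from its time slot.
    tc qdisc replace dev $IFACE parent root handle 100 taprio \
        num_tc 4 \
        map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
        queues 1@0 1@1 1@2 1@3 \
        base-time $BASE_TIME \
        sched-entry S 01 250000 \
        sched-entry S 02 250000 \
        sched-entry S 04 250000 \
        sched-entry S 08 250000 \
        flags 0x1 \
        txtime-delay 500000 \
        clockid CLOCK_TAI

    # One ETF child qdisc per TX queue so frames are released in order of their
    # launch time (add "offload" on queues with hardware LaunchTime support).
    for Q in 1 2 3 4; do
        tc qdisc replace dev $IFACE parent 100:$Q etf \
            clockid CLOCK_TAI \
            delta 500000 \
            skip_sock_check
    done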

The following section describes how to integrate a container utilizing TSN. These instructions assume an existing ECI installation and familiarity with building Docker containers and with the Intel® Ethernet Controllers TSN Enabling and Testing frameworks.

Building: TSN with VNET

The following section is applicable to: Linux

The following steps detail how to build a Docker image which utilizes TSN with VNET.

  1. If not already completed, follow section Prepare the Build System for Microservices to prepare the build system.

  2. Open a terminal on the build system and navigate to the extracted Dockerfiles directory. The contents of this directory should be as follows:

    $ ls
    application-containers  bpf  display-containers  softplc-containers
    
  3. Navigate to the application-containers directory. The contents of this directory should be as follows:

    $ ls
    ec-protocol-bridge  tsn-vnet-txtime-assisted  web-browser
    
  4. Navigate to the tsn-vnet-txtime-assisted directory. The contents of this directory should be as follows:

    $ ls
    bootstrap.sh  check_clocks.c  config  configuration.sh  docker_config  Dockerfile  meta-intel-tsn-recipes-connectivity.tar.gz
    
  5. Build the TSN with VNET container by performing the following command:

    $ docker build -t tsn-vnet-txtime-assisted:v1.5 .
    

    Note

    The “.” at the end of the command is intentional.

  6. Save the Docker image as a tar archive by performing the following command:

    $ docker save -o tsn-vnet-txtime-assisted.tar tsn-vnet-txtime-assisted:v1.5
    

    After the save has completed successfully, there will be a tarballed Docker image:

    Docker Image archive name       Description of Docker Image
    -----------------------------   ----------------------------------------------------------------------
    tsn-vnet-txtime-assisted.tar    Docker image which utilizes TSN with VNET using TxTime-Assisted mode.

Executing: TSN with VNET

The following section is applicable to: the target system

  1. Ensure that the Docker daemon is active. Run the following command to restart the Docker daemon.

    Warning

    All running Docker containers will also restart.

    $ systemctl restart docker
    

    The status of the Docker daemon can be verified with the following command:

    $ systemctl status docker
    
  2. Copy the Docker image created earlier to the target system.
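
    Any file transfer method can be used. For example, assuming SSH access to the target system (the user name, host name, and destination path below are placeholders):

    $ scp tsn-vnet-txtime-assisted.tar user@target-system:/tmp/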

  3. Load the copied Docker image by performing the following command:

    $ docker load < tsn-vnet-txtime-assisted.tar
    
  4. Check which Docker images are present on the target system with the following command:

    $ docker images
    

    The TSN with VNET image that was loaded should be present in the list. Note the name and tag of the image for use in the following steps.

    For example, on our system the output is as follows:

    $ docker images
    REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
    tsn-vnet-txtime-assisted   v1.5                6af6a1b620d3        13 seconds ago      1.08GB
    
  5. The setup expects the network interfaces INTERFACE1 and INTERFACE2 to be connected with a cable (loopback) or over a switch. The scripts located in the host_scripts directory can be used to configure the system. The following commands configure the network interfaces, create the Docker bridge, and instantiate the container, which will send packets over INTERFACE1 (a sketch of the kind of bridge configuration such a script performs follows the commands):

    export INTERFACE1="enp1s0"
    export INTERFACE2="enp2s0"
    ./host_scripts/prepare.sh
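
    The prepare.sh script is the authoritative setup. As a rough illustration only, attaching a physical interface to a user-defined Docker bridge typically involves steps along these lines (the network name and the run command are placeholders and may not match what prepare.sh actually does):

    # Illustration only -- not the actual contents of prepare.sh.
    # Create a user-defined bridge network for the container.
    docker network create --driver bridge tsn-net

    # Resolve the Linux bridge device that Docker created for this network
    # (user-defined bridge devices are named br-<first 12 characters of the network ID>).
    BRIDGE="br-$(docker network inspect -f '{{.Id}}' tsn-net | cut -c1-12)"

    # Attach the physical TSN interface to that bridge so container traffic
    # egresses through the interface carrying the TAPRIO/ETF configuration.
    ip link set dev "$INTERFACE1" master "$BRIDGE"
    ip link set dev "$INTERFACE1" up

    # The container is then started attached to this network, for example:
    # docker run -d --network tsn-net tsn-vnet-txtime-assisted:v1.5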
    

    Now that the container is running and configured to send packets, the output can be monitored from the host with:

    ./host_scripts/uadp_eth_cap.sh -i ${INTERFACE2} -a 1
    

    This will produce a *.his file with time values.