Real-Time Systems Hypervisor (RTH)

Real-Time Systems* Hypervisor (RTH) is a Type 1 (bare-metal) hypervisor that has been commercially available for more than 10 years, addressing the most stringent industrial real-time performance and workload-consolidation use cases across Intel® Xeon® processors, Intel® Core™ processors, and Intel Atom® processors.

When focusing on determinism, the RTH Privileged-Mode adds 0 µs of latency and provides tight resource partitioning across a wide range of real-time operating systems:

  • Direct Hardware Access

  • Use of standard drivers

  • Secure separation

  • Temporal isolation of Workloads

When focusing on desktop rendering and composition UX, the RTH Virtualization-mode brings simple and flexible graphics passthrough across a wide range of desktop operating systems:

  • Not tied to any OS or hardware

  • No customization required

  • Easy installation

../_images/capture_02.png

Intel® Edge Controls for Industrial (Intel® ECI or ECI) supports Linux* kernel images for RTS Hypervisor R5.5.00.28607 as Debian packages.

Attention

The RTH releases are independent of the ECI release lifecycle. The current version of ECI supports RTS Hypervisor R5.5.00.28607.

The RTH officially supports Intel® Time Coordinated Computing on 12th and 11th Gen Intel® Core™ processors and Intel Atom® x6000E Series processors (for more information, see Software SRAM).

For more information on the product, contact info@real-time-systems.com.

RTH with ECI - Get Started

The RTH is a ready-to-use, out-of-the-box hypervisor that can be installed easily to enable industry-grade virtualization across a very large set of Intel® platforms and processor families (Intel Atom® processors, Intel® Core™ processors, and Intel® Xeon® processors).

RTH Prerequisites

Important

For the two-month renewable evaluation license of RTH (RTH-Release_R5.5.00.xxxxx.zip), contact info@real-time-systems.com.

RTH Terminology

  • RTH: Real-Time Systems Hypervisor, the Type 1 (bare-metal) rthx86 hypervisor runtime.

  • GPOS: A hypervised operating system, referred to as a Virtualized-Mode operating system or GPOS. This is equivalent to a VM or guest OS in other hypervisor terminology.

  • POS: Since RTH is a Type 1 hypervisor, the OS runtime to which RTH grants “Direct Hardware Access” is referred to as the Privileged-Mode operating system (POS).

RTH Resources

For help and support, refer to the following:

Note: If you are using, or planning to use, an alternative GPOS or RTOS (or both) supported by the RTS Hypervisor, send an email to info@real-time-systems.com for further directions.

Install RTH with ECI

The following section is applicable to:

../_images/target_generic1.png
  1. Implement the Recommended ECI BIOS Optimizations.

  2. Set up the ECI repository.

  3. Install a Linux desktop environment of your choice.

  4. Copy the RTH License to /boot/rth/license.txt.

    Important

    For the two-month renewable evaluation license of RTH (RTH-Release_R5.5.00.xxxxx.zip), contact info@real-time-systems.com.
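
    A minimal sketch of this step, assuming the evaluation archive contains license.txt at its top level (the actual archive layout may differ):

    $ unzip RTH-Release_R5.5.00.xxxxx.zip -d rth-release
    $ sudo mkdir -p /boot/rth
    $ sudo cp rth-release/license.txt /boot/rth/license.txt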

  5. List the RTH Deb packages available for installation:

    $ sudo apt list | grep -e rts-hypervisor -e ^rth-.* -e ^librth-.*
    

    The following is the expected output for the Debian ECI repository:

    rth-pos-rt/unknown,now 5.5.00.28607-eci-bullseye amd64 [installed]
    rth-pos-xe/unknown,now 5.5.00.28607-eci-bullseye amd64 [installed]
    rth-tools/unknown,now 2.3.01 amd64 [installed,automatic]
    rth-virt-dkms/unknown,now 2.3.01 all [installed]
    rts-hypervisor/unknown,now 5.5.00.28607-bullseye-r2 amd64 [installed]
    
  6. Install the RTH Virtualized-Mode GPOS drivers and tools:

    $ sudo apt install rth-virt-dkms
    

    The RTH Virtual Ethernet Network and RTH Base kernel modules are automatically compiled and installed so that the Linux GPOS binds to RTH on every boot. The following is the expected output for the Debian ECI repository:

    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
    iptables-persistent netfilter-persistent
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
    rth-tools
    The following NEW packages will be installed:
    rth-tools rth-virt-dkms
    0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
    Need to get 65.8 kB of archives.
    After this operation, 376 kB of additional disk space will be used.
    Do you want to continue? [Y/n]
    Get:1 https://eci.intel.com/repos/bullseye isar/main amd64 rth-virt-dkms all 2.3.01 [37.4 kB]
    Get:2 https://eci.intel.com/repos/bullseye isar/main amd64 rth-tools amd64 2.3.01 [28.4 kB]
    Fetched 65.8 kB in 0s (0 B/s)
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package rth-virt-dkms.
    (Reading database ... 222663 files and directories currently installed.)
    Preparing to unpack .../rth-virt-dkms_2.3.01_all.deb ...
    Unpacking rth-virt-dkms (2.3.01) ...
    Selecting previously unselected package rth-tools.
    Preparing to unpack .../rth-tools_2.3.01_amd64.deb ...
    Unpacking rth-tools (2.3.01) ...
    Setting up rth-virt-dkms (2.3.01) ...
    Loading new rth-virt-2.3.01 DKMS files...
    Building for 5.10.115-rt67-intel-ese-standard-lts-rt+
    Building initial module for 5.10.115-rt67-intel-ese-standard-lts-rt+
    Done.
    
    rthBaseDrvVirt.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /lib/modules/5.10.115-rt67-intel-ese-standard-lts-rt+/updates/dkms/
    
    vnetDrvVirt.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /lib/modules/5.10.115-rt67-intel-ese-standard-lts-rt+/updates/dkms/
    
    depmod...
    
    Backing up initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+ to /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+.old-dkms
    Making new initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    (If next boot fails, revert to initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+.old-dkms image)
    update-initramfs....
    
    DKMS: install completed.
    Setting up rth-tools (2.3.01) ...
    
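    As an optional sanity check, you can confirm the DKMS registration of the rth-virt module set at any time (the status line format varies slightly across dkms versions; the line below is illustrative):

    $ dkms status | grep rth-virt
    rth-virt/2.3.01, 5.10.115-rt67-intel-ese-standard-lts-rt+, x86_64: installed
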
  7. Check the package version corresponding to RTS Hypervisor binaries and ECI configuration templates:

    $ apt-cache policy rts-hypervisor
    

    The following is the expected output for the Debian ECI repository:

    rts-hypervisor:
    Installed: 5.5.00.28607-bullseye
    Candidate: 5.5.00.28607-bullseye-r2
    Version table:
        5.5.00.28607-bullseye-r2 1000
        1000 https://eci.intel.com/repos/bullseye isar/main amd64 Packages
    *** 5.5.00.28607-bullseye 1000
        1000 https://eci.intel.com/repos/bullseye isar/main amd64 Packages
         100 /var/lib/dpkg/status
    
  8. Install the version-specific RTS Hypervisor binaries and ECI configuration templates:

    $ sudo apt install rts-hypervisor=5.5.00.28607-bullseye-r2
    

    The following is the expected output for the Debian ECI repository:

    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following NEW packages will be installed:
    rts-hypervisor
    0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
    Need to get 1,174 kB of archives.
    After this operation, 1,226 kB of additional disk space will be used.
    Get:1 https://eci.intel.com/repos/bullseye isar/main amd64 rts-hypervisor amd64 5.5.00.28607-bullseye-r2 [1,174 kB]
    Fetched 1,174 kB in 0s (49.2 MB/s)
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package rts-hypervisor.
    (Reading database ... 222710 files and directories currently installed.)
    Preparing to unpack .../rts-hypervisor_5.5.00.28607-bullseye-r2_amd64.deb ...
    Unpacking rts-hypervisor (5.5.00.28607-bullseye-r2) ...
    Setting up rts-hypervisor (5.5.00.28607-bullseye-r2) ...
    

    Note: You can modify the template file /boot/rth/Linux_Linux64.txt to match the specific partitioning of Intel Atom® processors, Intel® Core™ processors, and Intel® Xeon® processors.

  9. Check the package version corresponding to the RTH Privileged-Mode POS Linux image and GRUB entries:

    $ apt-cache policy rth-pos-rt
    

    The following is the expected output for the Debian ECI repository:

    rth-pos-rt:
    Installed: 5.5.00.28607-eci-bullseye
    Candidate: 5.5.00.28607-bullseye-r3
    Version table:
        5.5.00.28607-bullseye-r3 1000
        1000 https://eci.intel.com/repos/bullseye isar/main amd64 Packages
    *** 5.5.00.28607-eci-bullseye 1000
        1000 https://eci.intel.com/repos/bullseye isar/main amd64 Packages
         100 /var/lib/dpkg/status
    
  10. Install the RTH Privileged-Mode RTOS ECI Linux image and GRUB entries:

    $ sudo apt install rth-pos-rt=5.5.00.28607-bullseye-r3
    

    The following is the expected output for the Debian ECI repository:

    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following NEW packages will be installed:
    rth-pos-rt
    0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
    Need to get 288 MB of archives.
    After this operation, 293 MB of additional disk space will be used.
    Get:1 https://eci.intel.com/repos/bullseye isar/main amd64 rth-pos-rt amd64 5.5.00.28607-bullseye-r3  [288 MB]
    Fetched 288 MB in 5s (56.9 MB/s)
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package rth-pos-rt.
    (Reading database ... 222722 files and directories currently installed.)
    Preparing to unpack .../rth-pos-rt_5.5.00.28607-bullseye-r3_amd64.deb ...
    Unpacking rth-pos-rt (5.5.00.28607-bullseye-r3) ...
    Setting up rth-pos-rt (5.5.00.28607-bullseye-r3) ...
    Generating grub configuration file ...
    Found background image: /usr/share/images/desktop-base/desktop-grub.png
    Found linux image: /boot/vmlinuz-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found initrd image: /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found linux image: /boot/vmlinuz-5.10.115-linux-intel-acrn-sos+
    Found initrd image: /boot/initrd.img-5.10.115-linux-intel-acrn-sos+
    Found linux image: /boot/vmlinuz-5.10.100-intel-ese-standard-lts-dovetail+
    Found initrd image: /boot/initrd.img-5.10.100-intel-ese-standard-lts-dovetail+
    Found linux image: /boot/vmlinuz-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found initrd image: /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found linux image: /boot/vmlinuz-5.10.115-linux-intel-acrn-sos+
    Found initrd image: /boot/initrd.img-5.10.115-linux-intel-acrn-sos+
    Found linux image: /boot/vmlinuz-5.10.100-intel-ese-standard-lts-dovetail+
    Found initrd image: /boot/initrd.img-5.10.100-intel-ese-standard-lts-dovetail+
    Adding boot menu entry for UEFI Firmware Settings ...
    done
    
  11. Modify the GRUB entries, either manually in the GRUB template files /etc/grub.d/42-rth-pos-rt-linux and /etc/grub.d/43-rth-pos-xe-linux, or by using the rth-grub-setup.sh helper script:

    $ sudo /usr/libexec/rth-pos-updates/rth-grub-setup.sh pos-rt
    

    The following is the expected output for rth-pos-rt:

    Start updating pos-rt RTS Hypervisor configuration files
    SOURCE
    /dev/nvme0n1p2
    /boot/rth/pos-rt/initrd-mbLinux64-no-sfs.gz not present.  Skipping.
    update-grub config on RTH GPOS. done.
    Generating grub configuration file ...
    Found background image: /usr/share/images/desktop-base/desktop-grub.png
    Found linux image: /boot/vmlinuz-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found initrd image: /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found linux image: /boot/vmlinuz-5.10.115-linux-intel-acrn-sos+
    Found initrd image: /boot/initrd.img-5.10.115-linux-intel-acrn-sos+
    Found linux image: /boot/vmlinuz-5.10.100-intel-ese-standard-lts-dovetail+
    Found initrd image: /boot/initrd.img-5.10.100-intel-ese-standard-lts-dovetail+
    Found linux image: /boot/vmlinuz-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found initrd image: /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found linux image: /boot/vmlinuz-5.10.115-linux-intel-acrn-sos+
    Found initrd image: /boot/initrd.img-5.10.115-linux-intel-acrn-sos+
    Found linux image: /boot/vmlinuz-5.10.100-intel-ese-standard-lts-dovetail+
    Found initrd image: /boot/initrd.img-5.10.100-intel-ese-standard-lts-dovetail+
    Adding boot menu entry for UEFI Firmware Settings ...
    done
    Grub entries updates. Done. (ONLY visible on the next RTH system REBOOT/SHUTDOWN (e.g. sudo rth -sysreboot or sudo rth -sysshutdown)
    

    Important

    By default, the ECI RTH POS package installs a ramdisk-mounted root filesystem using root=/dev/ram, with the RTH system.sfs rootfs configured under [/OS/1/RUNTIME/0].

  12. Alternatively, install the eci-rth meta-package. This meta-package allows ECI to create and maintain a consistent set of RTH Deb packages and resolves their runtime dependencies.

    $ sudo apt install eci-rth
    

    The following is the expected output for Debian ECI repository:

    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    The following additional packages will be installed:
    eci-rth-packages
    The following NEW packages will be installed:
    eci-rth eci-rth-packages
    0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
    Need to get 1176 kB of archives.
    After this operation, 1240 kB of additional disk space will be used.
    Do you want to continue? [Y/n]
    ...
    Selecting previously unselected package eci-rth-packages.
    Preparing to unpack .../eci-rth-packages_1.0_amd64.deb ...
    Unpacking eci-rth-packages (1.0) ...
    Selecting previously unselected package eci-rth.
    Preparing to unpack .../archives/eci-rth_1.0_amd64.deb ...
    Unpacking eci-rth (1.0) ...
    Setting up eci-rth-packages (1.0) ...
    Setting up eci-rth (1.0) ...
    
  13. Restart the system. From the GRUB menu, select the Intel® ECI and RTH Linux boot entries:

    $ reboot
    

    The following is the expected RTH boot log:

    ../_images/RTH_init.png

    Then, the GPOS kernel boot log is displayed:

    ../_images/GPOS_init.png
  14. Log in to the desktop UI as eci-user (password: eci-user), and install the graphics test tools:

    $ sudo apt install intel-gpu-tools mesa-utils
    $ glxgears
    

    The following is the expected output when running an Intel® Graphics 3D-rendered application, such as glxgears, in the RTH Virtualization-mode GPOS desktop environment:

    ../_images/Desktop_01.png
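
    Optionally, to confirm that rendering uses the Intel® GPU rather than a software rasterizer, query the active renderer (glxinfo is provided by the mesa-utils package installed above; the reported strings depend on your platform):

    $ glxinfo -B | grep -e "OpenGL renderer" -e "OpenGL version"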

RTH Best Known Methods (BKM)

The following BKMs are described:

  • Configure iptables port-forwarding on ports 8080, 1217, 4840, 11740, and 9022 on the GPOS to allow remote connections to various ECI application runtimes (HTTP, CODESYS Linux runtime/HMI/OPC UA server, SSH, and so on).

  • Install Microsoft Windows as a virtualized-mode GPOS in the RTH OS/0 configuration.

RTH with ECI - Optimized Privileged-mode Linux

ECI and RTH offer out-of-the-box Debian GNU/Linux images with an IEC 61131-3 PLCopen control-ready environment.

With the RTH privileged-mode Linux real-time runtime, automation engineers have direct hardware access without the extra latency caused by a hypervisor runtime (that is, no VMM entry/exit IA instructions).

The eci-rth meta-package makes generic assumptions about the desired POS PREEMPT-RT (rth-pos-rt) or Xenomai*/Cobalt 3.1 (rth-pos-xe) Linux OS. Modification to the RTH configuration templates /boot/rth/pos-rt/Linux_Linux64_<nvme0n1pY|sdY>.txt or /boot/rth/pos-xe/Linux_Linux64_<nvme0n1pY|sdY>.txt may be necessary for customizing the POS. A quick check to tell the two flavors apart is shown after the list below.

  • Linux kernel with the PREEMPT-RT patch allows for a fully preemptible kernel.

    In the ECI privileged-mode default configuration, the real-time OS (RTOS) 5.4-rt runtime is mounted in RAM as a ramdisk (that is, both kernel and rootfs) to improve execution latency and software resiliency. For more information, see the Linux Foundation Wiki.

  • Xenomai/Cobalt 3.1 Linux kernel allows for a fully preemptible and low-latency IRQ co-kernel approach.

    In the ECI privileged-mode user configuration, the Xenomai real-time extension to the Linux 5.4-rt runtime is mounted in RAM as a ramdisk (that is, both kernel and rootfs) to improve execution latency and software resiliency.

  • PLCopen IEC 61131-3 Linux x86_64 control runtime system allows any IA platform to operate as an IEC 61131-3-compliant industrial controller.

    In the ECI privileged-mode default configuration, the PLC control runtime is built into the Linux RTOS runtime.

  • Time Sensitive Networking (TSN) Reference Software for Linux on Intel Ethernet Controller supporting IEEE 802.1AS, IEEE 802.1Q-2018 (Qbv, Qbu, and so on) hardware offloading.

    In the ECI privileged-mode default configuration, RTH assigns i225-LM or i210-IT discrete or integrated TSN GbE devices to the Linux RTOS privileged OS runtime to enable TSN endpoint hardware capability.

  • RTH Virtual Ethernet Network - Virtual Network controller (veth) driver is emulated by the Real-time Hypervisor.

    In the ECI privileged-mode default configuration, the virtual-Ethernet controller (vnet.ko) Linux kernel module is loaded automatically during boot in the privileged Linux RTOS runtime, and has to be installed manually on the virtualized OS runtime, so that both runtime operating systems can communicate over standard Ethernet network protocols (UDP, TCP, ICMP, and so on).
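
A quick way to confirm which real-time flavor a running POS uses is to query the kernel from a POS console (a hedged sketch; the exact version strings depend on the installed image, and /proc/xenomai is only present on the Xenomai/Cobalt pos-xe kernel):

$ uname -r                    # e.g. ...-intel-ese-standard-lts-rt+ (PREEMPT-RT) or ...-dovetail+ (Xenomai/Cobalt)
$ cat /proc/xenomai/version   # reports the Cobalt core version on pos-xe only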

The following section is applicable to:

../_images/target_generic.png

Helper Scripts to Update POS

ECI provides helper scripts, which simplify the administration of RTH POS images under a console. These scripts are in the /usr/libexec/rth-pos-updates/ directory.

Note: These scripts make generic assumptions about the desired POS configuration. Modification to a script may be necessary to customize the POS. For more information, refer to POS Disks and Partitions.

The following list describes the scripts and their usage.

  • rth-pos-overlayfs-setup.sh <pos-xe|pos-rt>

    Installs a Privileged-Mode OS rootfs. It is best suited for ECI targets.

    Note: pos-rt and pos-xe are dedicated partition namespaces used to mount the POS rootfs as an overlayFS.

    Default image locations: /boot/rth/pos-rt/initrd-mbLinux64.gz, /boot/rth/pos-xe/initrd-mbLinux64.gz

  • rth-grub-setup.sh <pos-xe|pos-rt>

    Launches GRUB updates and RTH configuration. It is best suited for ECI Privileged-OS targets.

    Default locations: /etc/grub.d/42-rth-pos-rt-linux, /etc/grub.d/43-rth-pos-xe-linux

The ECI-built Deb packages rth-pos-xe and rth-pos-rt resolve runtime dependencies with GRUB configuration entries, an RTH-optimized Intel multiboot kernel, and size-optimized Debian rootfs images:

  1. List the runtime dependencies:

    $ sudo apt-cache show rth-pos-xe
    

    The following shows the output:

    Package: rth-pos-xe
    Version: 5.5.00.28607-bullseye-r3
    Architecture: amd64
    Maintainer: ECI Maintainer <eci.maintainer@intel.com>
    Installed-Size: 189374
    Depends: rts-hypervisor, squashfs-tools, cpio, gzip, grub-efi
    Multi-Arch: foreign
    Priority: extra
    Section: lib
    Filename: pool/main/r/rth-pos-xe/rth-pos-xe_5.5.00.28607-bullseye-r3_amd64.deb
    Size: 192856376
    SHA256: 3cf5c3980643a5b34b5be276ec33b66a1824a2cd6ad2287f1ded18292f76a670
    SHA1: 4e52aa6f749937c34b33e8e2bd0299ca929e0d4e
    MD5sum: 5c2a7d51deabbe994eee4610a1415604
    Description: Real-Time System (RTS) Hypervisor rev >= 5.3 Linux RTH Linux IA64 privileged OS runtime
    Description-md5: 65df6357658f87761d466e01244a790e
    
  2. To view the contents of the Deb packages, download the packages:

    $ sudo apt-get download rth-pos-xe=5.5.00.28607-bullseye-r3 && dpkg -c ./rth-pos-xe*.deb
    
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./boot/
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./boot/rth/
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./boot/rth/pos-xe/
    -rw-r--r-- root/root 185577430 2022-12-10 05:21 ./boot/rth/pos-xe/initrd-mbLinux64.gz
    -rw-r--r-- root/root   8320272 2022-12-10 05:21 ./boot/rth/pos-xe/mbLinuz
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./etc/
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./etc/grub.d/
    -rwxr-xr-x root/root      1308 2022-12-10 05:21 ./etc/grub.d/43-rth-pos-xe-linux
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./root/
    -rwxr-xr-x root/root        29 2022-12-10 05:21 ./root/hello-rth-pos-xe.sh
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./usr/
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./usr/share/
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./usr/share/doc/
    drwxr-xr-x root/root         0 2022-12-10 05:21 ./usr/share/doc/rth-pos-xe/
    -rw-r--r-- root/root       186 2022-12-10 05:19 ./usr/share/doc/rth-pos-xe/changelog.Debian.gz
    
  3. Install the Deb packages:

    $ sudo apt install rth-pos-xe=5.5.00.28607-bullseye-r3
    
  4. To provision the rth-pos-xe image onto a dedicated RTH partition as an overlayFS rootfs, check for the partition:

    $ parted /dev/nvme0n1 print
    
    Model: INTEL SSDPEKKW128G8 (nvme)
    Disk /dev/nvme0n1: 128GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start   End     Size    File system  Name     Flags
    1      1049kB  39.8MB  38.8MB  fat16        boot     boot, esp
    2      59.8MB  73.5GB  73.4GB  ext4         root
    3      73.5GB  115GB   41.9GB               windows  msftdata
    4      115GB   120GB   4194MB  ext4         pos-rt
    5      120GB   124GB   4194MB  ext4         pos-xe
    
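    If the dedicated pos-xe partition does not exist yet, you can create and label one before provisioning (a hedged sketch; the device node, partition number, and offsets are illustrative and must match your own disk layout):

    $ sudo parted /dev/nvme0n1 -- mkpart pos-xe ext4 120GB 124GB
    $ sudo mkfs.ext4 -L pos-xe /dev/nvme0n1p5
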

    Run the following command to provision:

    $ sudo /usr/libexec/rth-pos-updates/rth-pos-overlayfs-setup.sh pos-xe
    

    The following is the output of a successful installation:

    Start updating pos-xe RTS Hypervisor configuration files
    /dev/nvme0n1p5: LABEL="pos-xe" UUID="7766dd9c-3891-47b2-bcb9-c712f9722c6d" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="pos-xe" PARTUUID="af06ad62-d64c-426a-8305-0753486d7d8f"
    Found pos-xe ext4 partition /dev/nvme0n1p5...
    Extracting system.sfs from /boot/rth/pos-xe/initrd-mbLinux64.gz...
    578387 blocks
    5843 blocks
    pos-xe/system.sfs to partition /dev/nvme0n1p5. Done.
    
  5. Update the GRUB configuration:

    $ sudo /usr/libexec/rth-pos-updates/rth-grub-setup.sh pos-xe
    

    The following is the output:

    Start updating pos-xe RTS Hypervisor configuration files
    SOURCE
    /dev/nvme0n1p2
    /boot/rth/pos-xe/initrd-mbLinux64-no-sfs.gz not present.  Skipping.
    update-grub config on RTH GPOS. done.
    Generating grub configuration file ...
    Found background image: /usr/share/images/desktop-base/desktop-grub.png
    Found linux image: /boot/vmlinuz-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found initrd image: /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found linux image: /boot/vmlinuz-5.10.115-linux-intel-acrn-sos+
    Found initrd image: /boot/initrd.img-5.10.115-linux-intel-acrn-sos+
    Found linux image: /boot/vmlinuz-5.10.100-intel-ese-standard-lts-dovetail+
    Found initrd image: /boot/initrd.img-5.10.100-intel-ese-standard-lts-dovetail+
    Found linux image: /boot/vmlinuz-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found initrd image: /boot/initrd.img-5.10.115-rt67-intel-ese-standard-lts-rt+
    Found linux image: /boot/vmlinuz-5.10.115-linux-intel-acrn-sos+
    Found initrd image: /boot/initrd.img-5.10.115-linux-intel-acrn-sos+
    Found linux image: /boot/vmlinuz-5.10.100-intel-ese-standard-lts-dovetail+
    Found initrd image: /boot/initrd.img-5.10.100-intel-ese-standard-lts-dovetail+
    Adding boot menu entry for UEFI Firmware Settings ...
    done
    Grub entries updates. Done. (ONLY visible on the next RTH system REBOOT/SHUTDOWN (e.g. sudo rth -sysreboot or sudo rth -sysshutdown)
    

GPOS Initialization and Virtual Ethernet Networking

ECI makes generic assumptions regarding the desired Virtual Ethernet Networking between the Virtualized GPOS and Privileged RTOS pos-rt, pos-xe, or both.

RTH configuration templates /boot/rth/pos-rt/Linux_Linux64_<nvme0n1pY|sdY>.txt and /boot/rth/pos-xe/Linux_Xenomai64_<nvme0n1pY|sdY>.txt set the static IP addresses passed by the hypervisor during the OS runtime boot.
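
A quick way to inspect the static IP a given template passes to the kernel (the value uses the standard Linux ip= boot parameter syntax, as seen in the POS boot log later in this section; file names depend on your disk layout):

$ grep -h "ip=" /boot/rth/pos-rt/Linux_Linux64_*.txt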

  1. Verify whether the RTH systemd service is loaded successfully:

    $ systemctl status rth-virt.service
    

    The following is the output of a successful systemd RTH boot service:

    ● rth-virt.service - Initialize RTH Functionality
        Loaded: loaded (/lib/systemd/system/rth-virt.service; enabled; vendor preset: enabled)
        Active: active (running) since Tue 2022-12-06 15:57:09 EET; 6 days ago
        Process: 401 ExecStartPre=/var/lib/rth/rthInitNetworkVirt start (code=exited, status=0/SUCCESS)
        Process: 445 ExecStart=/usr/bin/rthOsCtrlTask (code=exited, status=0/SUCCESS)
    Main PID: 446 (rthOsCtrlTask)
        Tasks: 2 (limit: 35561)
        Memory: 1.8M
            CPU: 33.473s
        CGroup: /system.slice/rth-virt.service
                └─446 /usr/bin/rthOsCtrlTask
    
    Dec 06 15:57:09 vecow02 systemd[1]: Starting Initialize RTH Functionality...
    Dec 06 15:57:09 vecow02 rthInitNetworkVirt[443]: sed: can't read /etc/network/interfaces.d/vnet0: No such file or directory
    Dec 06 15:57:09 vecow02 systemd[1]: Started Initialize RTH Functionality.
    Dec 06 15:57:09 vecow02 rthOsCtrlTask[446]: osCtrlThread: started
    Dec 06 15:57:09 vecow02 rthOsCtrlTask[446]: start_osctrl_thread: Created osctrl thread with priority (6)
    
  2. Verify whether the Privileged-mode OS is loaded successfully:

    The readtrace tool reports the hypervisor and privileged-mode OS (for example, OS/1) pos-xe kernel boot messages using RTH shm/1:

    $ readtrace -o 1
    

    The following is the output of a successful boot:


    For more details, refer to the Real-Time Hypervisor Manual (Document: 101.224.211129). You need to request access to this document.

     Real-Time Hypervisor                                                                                                                                          Copyright 2006 - 2022 Real-Time Systems GmbH
     Version R5.5.02.29139
    
     Init CPU
     Init heap
     Init registry
     Read configuration
     Init memory
     start          - end            (size          ) type
     0 0x000000000000 - 0x00000000AFFF (0x00000000B000) 0x01
     1 0x00000000B000 - 0x00000009EFFF (0x000000094000) 0x01
     2 0x00000009F000 - 0x00000009FFFF (0x000000001000) 0x02
     3 0x0000000A0000 - 0x0000000FFFFF (0x000000060000) 0x02
     4 0x000000100000 - 0x0000008EFFFF (0x0000007F0000) 0x01
     5 0x0000008F0000 - 0x000000FFFFFF (0x000000710000) 0x01
     6 0x000001000000 - 0x00000129AFFF (0x00000029B000) 0x01
     7 0x00000129B000 - 0x00000132FFFF (0x000000095000) 0x01
     8 0x000001330000 - 0x00000FDBEFFF (0x00000EA8F000) 0x01
     9 0x00000FDBF000 - 0x000053D48FFF (0x000043F8A000) 0x01
     10 0x000053D49000 - 0x000074A0FFFF (0x000020CC7000) 0x01
     11 0x000074A10000 - 0x000074A4FFFF (0x000000040000) 0x01
     12 0x000074A50000 - 0x000082E09FFF (0x00000E3BA000) 0x01
     13 0x000082E0A000 - 0x000082FA5FFF (0x00000019C000) 0x01
     14 0x000082FA6000 - 0x000083141FFF (0x00000019C000) 0x01
     15 0x000083142000 - 0x000083211FFF (0x0000000D0000) 0x01
     16 0x000083212000 - 0x000083513FFF (0x000000302000) 0x01
     17 0x000083514000 - 0x00008352EFFF (0x00000001B000) 0x01
     18 0x00008352F000 - 0x00008355AFFF (0x00000002C000) 0x01
     19 0x00008355B000 - 0x00008355CFFF (0x000000002000) 0x01
     20 0x00008355D000 - 0x000084938FFF (0x0000013DC000) 0x01
     21 0x000084939000 - 0x00008494BFFF (0x000000013000) 0x01
     22 0x00008494C000 - 0x000084970FFF (0x000000025000) 0x01
     23 0x000084971000 - 0x000084987FFF (0x000000017000) 0x01
     24 0x000084988000 - 0x0000849AFFFF (0x000000028000) 0x01
     25 0x0000849B0000 - 0x0000849D9FFF (0x00000002A000) 0x01
     26 0x0000849DA000 - 0x0000849F1FFF (0x000000018000) 0x01
     27 0x0000849F2000 - 0x000084A0FFFF (0x00000001E000) 0x01
     28 0x000084A10000 - 0x000084A11FFF (0x000000002000) 0x01
     29 0x000084A12000 - 0x000084A1EFFF (0x00000000D000) 0x01
     30 0x000084A1F000 - 0x00008A880FFF (0x000005E62000) 0x01
     31 0x00008A881000 - 0x00008AA4CFFF (0x0000001CC000) 0x01
     32 0x00008AA4D000 - 0x00008B141FFF (0x0000006F5000) 0x01
     33 0x00008B142000 - 0x00008C057FFF (0x000000F16000) 0x02
     34 0x00008C058000 - 0x00008C4E4FFF (0x00000048D000) 0x01
     35 0x00008C4E5000 - 0x00008C586FFF (0x0000000A2000) 0x04
     36 0x00008C587000 - 0x00008CB73FFF (0x0000005ED000) 0x02
     37 0x00008CB74000 - 0x00008CC0EFFF (0x00000009B000) 0x02
     38 0x00008CC0F000 - 0x00008CC0FFFF (0x000000001000) 0x01
     39 0x00008CC10000 - 0x00008CEFFFFF (0x0000002F0000) 0x02
     40 0x00008CF00000 - 0x00008CFFFFFF (0x000000100000) 0x02
     41 0x00008D000000 - 0x00008FFFFFFF (0x000003000000) 0x02
     42 0x000100000000 - 0x00086BFFFFFF (0x00076C000000) 0x01
     Available RAM total:      32629 MB
     Available RAM below 4 GB: 2229 MB
     Check license
         company: Intel Deutschland GmbH
         project: 22-28001 Technology Partner valid until 2022-12-31
     Init system
     IOMMU detected (interrupt remapping supported)
     Intel(R) Core(TM) i7-8665UE CPU @ 1.70GHz, ID: 06_8EH, stepping: CH
     * 64-Bit
     * VT-x
     * TPR Shadowing
     * Virtualized APIC Access
     * VMCS Shadowing
     * Extended Page Tables (EPT)
     * Unrestricted Guest
     * Speculation Control
     CPU microcode revision 0xB8
     RaceToHalt CPU feature is enabled. This may affect real-time behavior.
     Consider disabling it in BIOS settings.
     Initial CPU frequency:  1700 MHz
     CPU C1E state disabled
     Detected 4 logical CPUs
     Detected 4 CPU cores
     Detected 1 physical package
     Cache topology:
         |---------------|
     CPU |  1|  2|  3|  4|
         |---------------|
     L1d |   |   |   |   |
         |---------------|
     L1i |   |   |   |   |
         |---------------|
     L2  |   |   |   |   |
         |---------------|
     L3  |               |
         |---------------|
     3C9934C742A2F2474FC5977DC6675E117C391620
     A3E1C67F0C928BA35CEDAB72454EEB2658FEF770
     141CED955F13969B661B697ADB30E5608A1507EA
     8CDBA269253C23B284066CBA061EE0B60DFA1926 mbLinuz64
     4F8B823CC7B0A71142A8CDEB775CB5D2765A220A initrd-mbLinux64.gz
     FFB77F1BAD77F13CCAD744F103C7345C93544358 vmlinuz.lnk
     3F96D4EC004442C78AE2B0F7B8A10971560DE867 initramfs.lnk
     OS ID  0 name: "GPOS"
             CPU: 1 2
     OS ID  1 name: "Privileged RTOS"
             CPU: 3 4
     Init shared memory
     Key "security_level" is set to zero.
     All guest operating systems have unrestricted API permissions.
     Init devices
     Date: 2022-12-06
     No device with Vendor ID 0x8086 and Device ID 0x15F2 present.
     No device with Vendor ID 0x1059 and Device ID 0xA100 present.
     No device with Vendor ID 0x8086 and Device ID 0x4BA0 present.
     No device with Vendor ID 0x8086 and Device ID 0x4B32 present.
     No device with Vendor ID 0x8086 and Device ID 0xA0AC present.
     No device with Vendor ID 0x8086 and Device ID 0x0D9F present.
     Init IOMMU
     PCI devices:
         bus dev func vendor device pin IRQ MSI mode | OS | description
         0   0  0   0x8086 0x3E34  -    -  n   --  |  0 | Host/PCI bridge
         0   2  0   0x8086 0x3EA0  A   16  y  INTx |  0 | Display controller
         0   4  0   0x8086 0x1903  A   16  y  INTx |  0 | Data acquisition controller
         0   8  0   0x8086 0x1911  A   16  y   --  |    | Base system peripheral
         0  18  0   0x8086 0x9DF9  A   16  y  INTx |  0 | Data acquisition controller
         0  20  0   0x8086 0x9DED  A   16  y  INTx |  0 | USB xHCI controller
         0  20  2   0x8086 0x9DEF  -    -  n   --  |  0 | Memory controller
         0  22  0   0x8086 0x9DE0  A   16  y  INTx |  0 | Simple comm. controller
         0  22  3   0x8086 0x9DE3  D   19  y  INTx |  0 | Serial Controller
         0  23  0   0x8086 0x9DD3  A   16  y  INTx |  0 | AHCI SATA controller
         0  25  0   0x8086 0x9DC5  A   32  n  INTx |  0 | Serial bus controller
         0  29  0   0x8086 0x9DB5  B    -  y   --  |    | PCIe Root Port to bus 1
         0  29  6   0x8086 0x9DB6  C    -  y   --  |    | PCIe Root Port to bus 2
         0  31  0   0x8086 0x9D84  -    -  n   --  |  0 | PCI/ISA bridge
         0  31  3   0x8086 0x9DC8  A   16  y  INTx |  0 | Audio device
         0  31  4   0x8086 0x9DA3  A   16  n  INTx |  0 | SMBus controller
         0  31  5   0x8086 0x9DA4  -    -  n   --  |  0 | Serial bus controller
         0  31  6   0x8086 0x15BD  A   16  y  INTx |  0 | Network controller
         1   0  0   0x8086 0x1533  A   17  y  MSI  |  1 | Network controller
         2   0  0   0x10B5 0x8603  A    -  y   --  |    | PCIe Root Port to bus 3
         3   1  0   0x10B5 0x8603  A    -  y   --  |    | PCIe Root Port to bus 4
         3   2  0   0x10B5 0x8603  A    -  y   --  |    | PCIe Root Port to bus 5
         4   0  0   0x8086 0x1533  A   19  y  MSI  |  1 | Network controller
         5   0  0   0x8086 0x1533  A   16  y  MSI  |  1 | Network controller
     PCI memory:
         0   2  0   BAR 0 0x0000A0000000 - 0x0000A0FFFFFF
         0   2  0   BAR 2 0x000090000000 - 0x00009FFFFFFF
         0   4  0   BAR 0 0x0000A1530000 - 0x0000A1537FFF
         0   8  0   BAR 0 0x0000A1548000 - 0x0000A1548FFF
         0  18  0   BAR 0 0x0000A1547000 - 0x0000A1547FFF
         0  20  0   BAR 0 0x0000A1520000 - 0x0000A152FFFF
         0  20  2   BAR 0 0x0000A153E000 - 0x0000A153FFFF
         0  20  2   BAR 2 0x0000A1546000 - 0x0000A1546FFF
         0  22  0   BAR 0 0x0000A1545000 - 0x0000A1545FFF
         0  22  3   BAR 1 0x0000A1544000 - 0x0000A1544FFF
         0  23  0   BAR 0 0x0000A153C000 - 0x0000A153DFFF
         0  23  0   BAR 1 0x0000A1543000 - 0x0000A15430FF
         0  23  0   BAR 5 0x0000A1542000 - 0x0000A15427FF
         0  25  0   BAR 0 0x0000FC800000 - 0x0000FC800FFF
         0  31  3   BAR 0 0x0000A1538000 - 0x0000A153BFFF
         0  31  3   BAR 4 0x0000A1000000 - 0x0000A10FFFFF
         0  31  4   BAR 0 0x0000A1540000 - 0x0000A15400FF
         0  31  5   BAR 0 0x0000FE010000 - 0x0000FE010FFF
         0  31  6   BAR 0 0x0000A1500000 - 0x0000A151FFFF
         1   0  0   BAR 0 0x0000A1400000 - 0x0000A147FFFF
         1   0  0   BAR 3 0x0000A1480000 - 0x0000A1483FFF
         2   0  0   BAR 0 0x0000A1300000 - 0x0000A1303FFF
         4   0  0   BAR 0 0x0000A1200000 - 0x0000A127FFFF
         4   0  0   BAR 3 0x0000A1280000 - 0x0000A1283FFF
         5   0  0   BAR 0 0x0000A1100000 - 0x0000A117FFFF
         5   0  0   BAR 3 0x0000A1180000 - 0x0000A1183FFF
     PCI I/O space:
         0   2  0   BAR 4 0x6000 - 0x603F
         0  22  3   BAR 0 0x60A0 - 0x60A7
         0  23  0   BAR 2 0x6090 - 0x6097
         0  23  0   BAR 3 0x6080 - 0x6083
         0  23  0   BAR 4 0x6060 - 0x607F
         0  31  4   BAR 4 0xEFA0 - 0xEFBF
         1   0  0   BAR 2 0x5000 - 0x501F
         4   0  0   BAR 2 0x4000 - 0x401F
         5   0  0   BAR 2 0x3000 - 0x301F
     HPET found. Possible IRQs: 20 21 22 23
     COM1: I/O port 0x03F8 IRQ  4 OS 0
     COM2: I/O port 0x02F8 IRQ  3 OS 0
     COM3: I/O port 0x03E8 IRQ  5 OS 0
     COM4: I/O port 0x02E8 IRQ  6 OS 0
     ACPI Device Assignment
     AHCI V1.31 controller (0/23/0) (Vendor:0x8086 Device:0x9DD3):
     Port 0: No device
     Port 1: No device
     Port 2: ATA Disk Drive - Link Speed: GEN3
         No Drive assignment for port 2.
     Assigned I/O APIC IRQs:
     IRQ trigger polarity OS ID
         0  edge     high    0
         1  edge     high    0
         2  edge     high    0
         3  edge     high    0
         4  edge     high    0
         5  edge     high    0
         6  edge     high    0
         7  edge     high    0
         8  edge     high    0
         9  edge     high    0
         10  edge     high    0
         11  edge     high    0
         12  edge     high    0
         13  edge     high    0
         14  edge     high    0
         15  edge     high    0
         16  level    low     0
         19  level    low     0
         32  level    low     0
     I/O port configuration:
     start  -  end   ( size ) OS
     0x0060 - 0x0060 (0x0001) 0
     0x0064 - 0x0064 (0x0001) 0
     0x02E8 - 0x02EF (0x0008) 0
     0x02F8 - 0x02FF (0x0008) 0
     0x03E8 - 0x03EF (0x0008) 0
     0x03F8 - 0x03FF (0x0008) 0
     0x0CF8 - 0x0CF8 (0x0001) -
     0x0CFC - 0x0CFF (0x0004) -
     0x3000 - 0x301F (0x0020) 1
     0x4000 - 0x401F (0x0020) 1
     0x5000 - 0x501F (0x0020) 1
     0x6000 - 0x603F (0x0040) 0
     0x6060 - 0x6083 (0x0024) 0
     0x6090 - 0x6097 (0x0008) 0
     0x60A0 - 0x60A7 (0x0008) 0
     0xEFA0 - 0xEFBF (0x0020) 0
     System Information:
         Manufacturer: " "
         Product Name: " "
     Baseboard Information:
         Manufacturer   : "Default string"
         Product Name   : "Default string"
         Product Version: "Default string"
         Serial Number  : "Default string"
     GPU frequency range from 300 MHz to 1150 MHz in increments of 50 MHz
     Init time synchronization
     Found OS "GPOS"
     Found OS "Privileged RTOS"
     Init Registry
     Setup Operating Systems
     Loading "Privileged RTOS"
     /OS/1 Virtual COM-to-log port 0x03F8
     Shared Memory Partitions
     0 0x00008C478000 - 0x00008C486FFF (0x00000000F000) "trace_0"
     1 0x00008C469000 - 0x00008C477FFF (0x00000000F000) "trace_1"
     /OS/1 Booting runtime 0
     /OS/1 Activating Virtual MMU
     /OS/1 COM: Linux version 5.10.140-rts2.3.01.15649-intel-ese-standard-lts-dovetail+ (builder
     /OS/1 COM: @6274bbb3e005) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for
     /OS/1 COM: Debian) 2.35.2) #1 SMP PREEMPT IRQPIPE Tue Dec 6 13:41:17 UTC 2022
     /OS/1 COM: Command line: root=/dev/ram ip=192.168.2.2:::::vnet0  debug=all verbose=all rand
     /OS/1 COM: om.trust_cpu=on console=tty0 console=ttyS0,115200n8 xenomai.allowed_group=1234 x
     /OS/1 COM: enomai.sysheap_size=256 xenomai.state=enabled xenomai.smi=detect xenomai.smi_mas
     /OS/1 COM: k=1 nosmap nosmt nohalt rthCtrl=0x17400000 rthCtrlSize=0xF000 acpi=off rthActive
     /OS/1 COM:  pci=conf1
    ...
    
  3. Verify whether the RTH virtual drivers are built and loaded successfully on the Virtualized-GPOS:

    At package installation on the Virtualized GPOS, the Linux RTH DKMS should compile the vnetDrvVirt.ko Virtual Ethernet kernel module based on the current linux-headers-*.

    $ find /usr/ -name '*DrvVirt.ko'
    

    The output should be as follows when both vnetDrvVirt.ko and rthBaseDrvVirt.ko exist for all alternative kernel images:

    /usr/lib/modules/5.10.145-linux-intel-acrn-sos+/updates/dkms/vnetDrvVirt.ko
    /usr/lib/modules/5.10.145-linux-intel-acrn-sos+/updates/dkms/rthBaseDrvVirt.ko
    /usr/lib/modules/5.10.140-linux-intel-acrn-sos+/updates/dkms/vnetDrvVirt.ko
    /usr/lib/modules/5.10.140-linux-intel-acrn-sos+/updates/dkms/rthBaseDrvVirt.ko
    

    If nothing is found, you can force a DKMS rebuild of both vnetDrvVirt.ko and rthBaseDrvVirt.ko using the following command:

    $ /usr/lib/dkms/dkms_autoinstaller start
    

    Alternatively, install or reinstall the linux-headers and linux-image packages:

    $ apt install linux-headers-5.10.0-18-amd64
    

    The expected output should be similar to the following:


    /etc/kernel/header_postinst.d/dkms:
    dkms: running auto installation service for kernel 5.10.0-18-amd64:
    Kernel preparation unnecessary for this kernel.  Skipping...
    
    Building module:
    cleaning build area...
    make -j2 KERNELRELEASE=5.10.0-18-amd64 KERNEL_DIR=/lib/modules/5.10.0-18-amd64/build.....
    cleaning build area...
    
    DKMS: build completed.
    
    rthBaseDrvVirt.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /lib/modules/5.10.0-18-amd64/updates/dkms/
    
    vnetDrvVirt.ko:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /lib/modules/5.10.0-18-amd64/updates/dkms/
    
    depmod....
    
    Backing up initrd.img-5.10.0-18-amd64 to /boot/initrd.img-5.10.0-18-amd64.old-dkms
    Making new initrd.img-5.10.0-18-amd64
    (If next boot fails, revert to initrd.img-5.10.0-18-amd64.old-dkms image)
    update-initramfs.........
    
    DKMS: install completed.
    

    This results in new vnetDrvVirt.ko and rthBaseDrvVirt.ko modules under /usr/lib/modules/5.10.0-18-amd64/updates/dkms:

    $ find /usr/ -name '*DrvVirt.ko'
    

    The expected output should be similar to the following:

    /usr/lib/modules/5.10.145-linux-intel-acrn-sos+/updates/dkms/vnetDrvVirt.ko
    /usr/lib/modules/5.10.145-linux-intel-acrn-sos+/updates/dkms/rthBaseDrvVirt.ko
    /usr/lib/modules/5.10.0-18-amd64/updates/dkms/vnetDrvVirt.ko
    /usr/lib/modules/5.10.0-18-amd64/updates/dkms/rthBaseDrvVirt.ko
    /usr/lib/modules/5.10.140-linux-intel-acrn-sos+/updates/dkms/vnetDrvVirt.ko
    /usr/lib/modules/5.10.140-linux-intel-acrn-sos+/updates/dkms/rthBaseDrvVirt.ko
    
  4. Establish the Virtual Network IPv4 data link via the Linux network manager user settings:

    Kernel module auto-loading brings up the virtual Ethernet device interface, which is enp0s1f1 in the following example:

    $ lsmod | grep vnet && ethtool -i enp0s1f1
    

    The expected output should be similar to the following:

    vnetDrvVirt            20480  0
    driver: pci_vnet
    version: 5.10.140-linux-intel-acrn-sos+
    firmware-version:
    expansion-rom-version:
    bus-info: 0000:00:01.1
    supports-statistics: no
    supports-test: no
    supports-eeprom-access: no
    supports-register-dump: no
    supports-priv-flags: no
    

    Intel® ECI makes generic assumptions about the desired Virtual Ethernet network link configuration in /usr/lib/systemd/network/80-rth-vm-gpos.network.

    $ systemctl start systemd-networkd && networkctl status enp0s1f1
    

    The expected output should be similar to the following:

    ● 5: enp0s1f1
                        Link File: /usr/lib/systemd/network/99-default.link
                    Network File: /usr/lib/systemd/network/80-rth-vm-gpos.network
                            Type: ether
                            State: routable (configured)
                            Path: pci-0000:00:01.1
                            Driver: pci_vnet
                        HW Address: aa:aa:aa:00:00:00
                            MTU: 1500 (min: 68, max: 1500)
                            QDisc: pfifo_fast
    IPv6 Address Generation Mode: eui64
            Queue Length (Tx/Rx): 1/1
                        Address: 192.168.2.1
                                    fe80::a8aa:aaff:fe00:0
                            DNS: 192.168.1.1
                    Search Domains: ~.
                    Route Domains: .
                DHCP6 Client DUID: DUID-EN/Vendor:0000ab11c9921c6e6694f9dd0000
    
    Dec 06 15:57:09 vecow02 systemd-networkd[456]: enp0s1f1: Interface name change detected, enp0s1f1 has been renamed to vnet0.
    Dec 06 15:57:09 vecow02 systemd-networkd[456]: enp0s1f1: Link UP
    Dec 06 15:57:09 vecow02 systemd-networkd[456]: enp0s1f1: Gained carrier
    Dec 06 15:57:11 vecow02 systemd-networkd[456]: enp0s1f1: Gained IPv6LL
    

    Important

    Run ifconfig or ip addr to verify whether the enp0s1f1 interface has the IP address 192.168.2.1.

    If the enp0s1f1 device does not have an IP address, or DHCP actively changes the Virtual-Ethernet IP address, manually assign a static IPv4 address:

    $ echo "iface enp0s1f1 inet manual" > /etc/network/interfaces.d/vnet-rth-discarded
    $ sudo ip addr add 192.168.2.1/24 dev enp0s1f1
    
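    To make the static assignment persistent across reboots with systemd-networkd instead, a drop-in .network file can be used (a hedged sketch; the file name is illustrative, and ECI already ships a similar default in /usr/lib/systemd/network/80-rth-vm-gpos.network):

    # /etc/systemd/network/80-vnet-static.network
    [Match]
    Name=enp0s1f1

    [Network]
    Address=192.168.2.1/24

    $ sudo systemctl restart systemd-networkd
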
  5. Open a Linux Privileged-OS console over SSH from a Linux Virtualized-OS terminal:

    By default, the RTH Privileged-OS runtime expects its IPv4 address to be set from the RTH configuration file /boot/rth/Linux_Linux64.txt as the boot command-line parameter ip=192.168.2.2/255.255.255.0. If required, you can change this default IPv4 value.

    $ ssh eci-user@192.168.2.2
    

    For a successful connection, the expected output should be similar to the following:

    eci-user@192.168.2.2's password:
    
    The programs included with the Debian GNU/Linux system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
    permitted by applicable law.
    eci-user@eci-bullseye:~$ uname -r
    5.10.140-rts2.3.01.15649-intel-ese-standard-lts-dovetail+
    
  6. Establish port-forwarding rules between the Linux Privileged-OS and Virtualized-OS runtimes.

    This step is optional. However, it will be helpful when you need to debug the Linux Privileged-OS runtime.

    For example, the CODESYS IDE downloads PLC logic to the CODESYS control runtime using port 4840/TCP.

    # +------------------------+        +-----------------------------+
    # |      RTH-POS           |        |         GPOS-VM-proxy       |
    # |                        |        |                             |
    # | codesyscontrol ..      |        | iptables -t nat -p tcp ...  |
    # |                        |        |                             |
    # | vnet0: 192.168.2.2 ----[tcp/4840]--> enp0s1f1: 192.168.2.1    |
    # |                    <---[tcp/4840]---                          |
    # |                        |        |  enp0s31f6:192.168.1.254 ---[udp/4840]--> ext Codesys IDE
    # |                        |        |                             |
    # +------------------------+        +-----------------------------+
    #
    
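    For reference, a single rule of this kind could be added manually as follows (a hedged sketch; the rth-set-iptables.sh helper used below installs the complete rule set, and IPv4 forwarding must be enabled for DNAT to work):

    $ sudo sysctl -w net.ipv4.ip_forward=1
    $ sudo iptables -t nat -A PREROUTING -p tcp --dport 4840 -j DNAT --to-destination 192.168.2.2:4840
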

    Intel® ECI makes generic assumptions regarding the desired setting for the virtual Ethernet port-forwarding configurations using the /usr/libexec/rth-set-iptables.sh helper script and systemd boot service.

    $ systemctl start rth-vm-gpos-iptables@enp0s1f1.service && systemctl status rth-vm-gpos-iptables@enp0s1f1.service
    

    The expected output should be similar to the following:

    ● rth-vm-gpos-iptables@enp0s1f1.service - RTH: GPOS device enp0s1f1 port-fowarding (iptables) to POS over Hypervisor Virtual-Network
        Loaded: loaded (/lib/systemd/system/rth-vm-gpos-iptables@.service;s disabled; vendor preset: enabled)
        Active: active (exited) since Tue 2022-12-13 11:42:59 EET; 5s ago
        Process: 47665 ExecStart=/usr/libexec/rth-set-iptables.sh enp0s1f1 (code=exited, status=0/SUCCESS)
    Main PID: 47665 (code=exited, status=0/SUCCESS)
            CPU: 271ms
    
    Dec 13 11:42:59 vecow02 systemd[1]: Starting RTH: GPOS device enp0s1f1 port-fowarding (iptables) to POS over Hypervisor Virtual-Network...
    Dec 13 11:42:59 vecow02 systemd[1]: Finished RTH: GPOS device enp0s1f1 port-fowarding (iptables) to POS over Hypervisor Virtual-Network.
    

    To check the port-forwarding configuration, run the following command from the Virtualized-OS terminal:

    $ sudo iptables --list -t nat
    
    Chain PREROUTING (policy ACCEPT)
    target     prot opt source               destination
    DNAT       udp  --  anywhere             anywhere             udp dpt:1740 to:192.168.2.2:1740
    DNAT       udp  --  anywhere             anywhere             udp dpt:1741 to:192.168.2.2:1741
    DNAT       udp  --  anywhere             anywhere             udp dpt:1742 to:192.168.2.2:1742
    DNAT       udp  --  anywhere             anywhere             udp dpt:1743 to:192.168.2.2:1743
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:11740 to:192.168.2.2:11740
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:1217 to:192.168.2.2:1217
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:4840 to:192.168.2.2:4840
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:18000 to:192.168.2.2:8000
    DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL
    
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
    
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    MASQUERADE  all  --  172.17.0.0/16        anywhere
    MASQUERADE  all  --  anywhere             anywhere
    MASQUERADE  all  -- !localhost            anywhere
    
    Chain DOCKER (2 references)
    target     prot opt source               destination
    RETURN     all  --  anywhere             anywhere
    

    The remaining rules, such as 1217 and 11740, allow any CODESYS IDE development machine connected to the Virtualized-OS network to gain remote access to the CODESYS control runtime and CODESYS applications (visualization, PLC logic, remote debugging, and so on) on the Privileged-OS.

    Important

    Both iptables backends, iptables-legacy and iptables-nft (nftables), are supported. To manually select a backend:

    $ sudo update-alternatives --config  iptables
    
    There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).
    
    Selection    Path                       Priority   Status
    ------------------------------------------------------------
    * 0            /usr/sbin/iptables-nft      20        auto mode
      1            /usr/sbin/iptables-legacy   10        manual mode
      2            /usr/sbin/iptables-nft      20        manual mode
    
    Press <enter> to keep the current choice[*], or type selection number:
    
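    To select a backend non-interactively (for example, from a provisioning script), update-alternatives also accepts --set with the full alternative path:

    $ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
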
  7. Set up the Linux Privileged-OS APT repository gateway.

    The Linux Privileged-OS runtime may not be assigned a physical networking interface with access to external APT repositories (for example, debian.org).

    Intel® ECI makes generic assumptions regarding the desired setting for Linux Privileged-OS Gateway and DNS configurations using the systemd boot service.

    $ scp /etc/apt/sources.list.d/eci.list root@192.168.2.2:/etc/apt/sources.list.d/.
    $ ssh root@192.168.2.2
    

    For a successful APT repository update, the expected output should be similar to the following:

    root@192.168.2.2's password:
    Last login: Tue Dec  6 13:58:15 2022 from 192.168.2.1
    
    The programs included with the Debian GNU/Linux system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
    permitted by applicable law.
    WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
    
    root@eci-bullseye:~# networkctl status vnet0
    ● 7: vnet0
                        Link File: /usr/lib/systemd/network/99-default.link
                    Network File: n/a
                            Type: ether
                            State: routable (unmanaged)
                        HW Address: aa:aa:aa:00:00:01
                            MTU: 1500 (min: 68, max: 1500)
                            QDisc: pfifo_fast
    IPv6 Address Generation Mode: eui64
            Queue Length (Tx/Rx): 1/1
                        Address: 192.168.2.2
                                    fe80::a8aa:aaff:fe00:1
                        Gateway: 192.168.2.1
    
    Dec 06 13:56:49 eci-bullseye systemd-udevd[1299]: vnet0: Could not set coalesce settings, ignoring: Operation not supported
    Dec 06 13:56:49 eci-bullseye systemd-networkd[1417]: vnet0: Link DOWN
    Dec 06 13:56:49 eci-bullseye systemd-networkd[1417]: vnet0: Lost carrier
    Dec 06 13:56:49 eci-bullseye systemd-networkd[1417]: vnet0: Link UP
    Dec 06 13:56:49 eci-bullseye systemd-networkd[1417]: vnet0: Gained carrier
    Dec 06 13:56:51 eci-bullseye systemd-networkd[1417]: vnet0: Gained IPv6LL
    
    root@eci-bullseye:~# apt update
    Hit:1 http://deb.debian.org/debian bullseye InRelease
    Hit:2 http://deb.debian.org/debian bullseye-updates InRelease
    Ign:3 https://eci.intel.com/repos/eci-bullseye isar InRelease
    Hit:4 https://eci.intel.com/repos/eci-bullseye isar Release
    Ign:5 https://eci.intel.com/repos/eci-bullseye isar Release.gpg
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    10 packages can be upgraded. Run 'apt list --upgradable' to see them.
    

    Important

    systemd-resolved, the Network Name Resolution manager, uses the /run/systemd/resolved.conf.d/vnet_dns.conf configuration file created by the systemd boot services so that the Virtualized-OS is used for DNS resolution:

    $ ssh root@192.168.2.2 "systemctl status systemd-resolved.service"
    $ ssh root@192.168.2.2 "systemctl status rthresolv.service"
    
    ● systemd-resolved.service - Network Name Resolution
        Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
        Drop-In: /usr/lib/systemd/system/service.d
                └─10-override-protect-proc.conf
        Active: active (running) since Tue 2022-12-06 13:56:49 UTC; 6 days ago
        Docs: man:systemd-resolved.service(8)
                man:org.freedesktop.resolve1(5)
                https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
                https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
    Main PID: 1498 (systemd-resolve)
        Status: "Processing requests..."
            CPU: 11.978s
        CGroup: /system.slice/systemd-resolved.service
                └─1498 /lib/systemd/systemd-resolved
    
    ● rthresolv.service - Set systemd resolv DNS and gateway configuration
        Loaded: loaded (/lib/systemd/system/rthresolv.service; enabled; vendor preset: enabled)
        Drop-In: /usr/lib/systemd/system/service.d
                └─10-override-protect-proc.conf
        Active: active (exited) since Tue 2022-12-06 13:56:49 UTC; 6 days ago
    Main PID: 1466 (code=exited, status=0/SUCCESS)
            CPU: 0
        CGroup: /system.slice/rthresolv.service
    

POS Runtime Memory Size

ECI makes generic assumptions regarding the desired POS Memory setting. Modification to the RTH configuration templates /boot/rth/Linux_Linux64.txt or /boot/rth/WindowsEFI_Linux64.txt may be necessary for customizing the POS on [/SYSTEM].

The following code initializes the RTH parameters for POS device memory protection, physical memory size, and data locality:

[/SYSTEM]
     "IOMMU" = uint32: 1 # Activate IOMMU for all OSs
...
[/OS/1]

    "name"          = "Privileged-RTOS"
    "boot_priority" = uint32: 1
    "memory_size"   = uint64: 0x80000000      # 2048 MB
    "CPU"           = bytelist: 3,4           # e.g. two CPUs: bytelist: 3, 4
    "virtual_MMU"             = uint32: 1
    "restricted_IO"           = uint32: 1
...
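
The memory_size key is a byte count; a quick shell check confirms that 0x80000000 bytes is the 2048 MB stated in the comment:

$ echo $(( 0x80000000 / 1024 / 1024 )) MB
2048 MB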

Note: It might be impossible for RTH to activate the IOMMU for all operating systems due to the default BIOS setting VT-d = Disabled. Make sure that VT-d = Enabled is set on the machine.

$ readtrace -o 1
...
Init IOMMU
IOMMU configured but not present or disabled in BIOS.
Support for device memory protection and physical memory relocation disabled.
...

POS Disks and Partitions

ECI makes generic assumptions regarding the desired NVMe or SATA disk drive partition assignment considering the pos-rt and pos-xe namespaces. You might need to modify the RTH configuration templates /boot/rth/pos-rt/Linux_Linux64_<nvme0n1pY|sdY>.txt or /boot/rth/pos-rt/WindowsEFI_Linux64_<nvme0n1pY|sdY>.txt to customize the POS NVMe or SATA disk drive partition assignment.

Important

The NVMe drive assignment feature is not supported if your hardware configures the NVMe controller BAR above 4 GB. Server-grade hardware sometimes does not fulfill this requirement in its default configuration. Typically, this behavior can be configured within the BIOS settings:

  • [On commercial UEFI BIOS vendor] under Advanced > PCI Configuration > Memory Mapped I/O above 4GB [Disabled]

  • [On TGL UP3 RVP Intel UEFI BIOS] under Intel Advanced Menu > System Agent Configuration > Above 4G [Disabled]

RTH does not support partition assignment for the integrated eMMC controller.

In the following configuration steps, it is assumed that a POS installation running on an NVME disk has created distinct, dedicated EXT4 partitions:

  1. Check whether RTH has detected any NVME or SATA disk controller:

    $ readtrace -o 1
    

    The expected output should be similar to the following:

    ...
    NVMe (1/0/0) (Vendor:0x8086 Device:0xF1A5) - no drive sharing configured
    ...
    
  2. Locate GPOS rootfs / GPT partition and POS labels:

    $ blkid -o list
    

    In this example, the pos-xe and pos-rt labels are set on the /dev/nvme0n1p4 and /dev/nvme0n1p5 GPT partitions, respectively:

    device                    fs_type         label            mount point        UUID
    ------------------------------------------------------------------------------------------------------------------
    /dev/nvme0n1p1            vfat                             /boot/efi          D473-9F49
    /dev/nvme0n1p2            ext4            /                /                  ae1a66a5-ee9c-4573-9249-dd207ffa6248
    /dev/nvme0n1p4            ext4            pos-xe           (not mounted)      b3c0531b-2cec-47da-ab21-6a0a928e55c9
    /dev/nvme0n1p5            ext4            pos-rt           (not mounted)      a4b56194-8688-4b73-b8e1-58f6bc88b72b
    /dev/nvme0n1                                               (in use)
    /dev/nvme0n1p3                                             (not mounted)
    
  3. Identify the PCI path corresponding to the GPT partitions that need to be mapped:

    $ ls -l /dev/disk/by-path/
    

    The expected output should be similar to the following:

    total 0
    lrwxrwxrwx 1 root root 13 Sep  7 14:58 pci-0000:01:00.0-nvme-1 -> ../../nvme0n1
    lrwxrwxrwx 1 root root 15 Sep  7 14:58 pci-0000:01:00.0-nvme-1-part1 -> ../../nvme0n1p1
    lrwxrwxrwx 1 root root 15 Sep  7 14:58 pci-0000:01:00.0-nvme-1-part2 -> ../../nvme0n1p2
    lrwxrwxrwx 1 root root 15 Sep  7 14:58 pci-0000:01:00.0-nvme-1-part3 -> ../../nvme0n1p3
    lrwxrwxrwx 1 root root 15 Sep  7 14:58 pci-0000:01:00.0-nvme-1-part4 -> ../../nvme0n1p4
    lrwxrwxrwx 1 root root 15 Sep  7 14:58 pci-0000:01:00.0-nvme-1-part5 -> ../../nvme0n1p5
    
  4. Edit the RTH configuration file to assign NVME disk partitions:

    ...
    [/DRIVE/0]
    
    "default"   = uint32: 0   # default is OS/0
    "bus"       = uint32: 1   # 1 0 0   0x8086 0xF1A5 NVMe controller
    "device"    = uint32: 0   #
    "function"  = uint32: 0   #
    ...
    
  5. Reboot the system after adding the extra keys to the RTH configuration file. Then, check whether the change is applied:

    $ readtrace -o 0
    

    The expected output should be similar to the following:

    NVMe (1/0/0) (Vendor:0x8086 Device:0xF1A5) - Write Cache: enabled
        Drive assignment for namespace 1 (GPT):
        Partition | OS | Start Sector |   End Sector | Type
                0 |  0 |         2048 |        38911 | 0xEF EFI System
                1 |  0 |        59392 |    152413430 | 0x83 Linux filesystem
                2 |  0 |    152414208 |    230539263 | 0x07 Microsoft Basic Data
                3 |  1 |    234444800 |    238350335 | 0x83 Linux filesystem
                4 |  0 |    238350336 |    246163455 | 0x83 Linux filesystem
    ...
    
  6. Decide how to mount and boot the POS rootfs:

This approach uses the overlayfs kernel capability to keep the RTH initramfs minimal: the system.sfs image located in [/DRIVE/0/PARTITION/3] is mounted as the POS [/OS/1] root filesystem (diskroot=/dev/hda4), allowing Linux to initialize with the full system RAM size.

Important

The Privileged-mode kernel is built with CONFIG_OVERLAY_FS=y so that the ECI initramfs can contain the Privileged-mode OS root filesystem (that is, rootfs) as a single tree.
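
A runtime check from the Privileged-OS confirms that overlayfs support is compiled in (overlay is listed in /proc/filesystems when available):

$ ssh root@192.168.2.2 "grep overlay /proc/filesystems"
nodev   overlay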

$ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
...
[/OS/1]

    "name"          = "Privileged RTOS"
    "boot_priority" = uint32: 2             # start second
    "memory_size"   = uint64: 0x80000000    # 2000 MB
    "virtual_MMU"   = uint32: 1             # set to 1 to restrict memory access
    "restricted_IO" = uint32: 1             # set to 1 to restrict I/O access
    "CPU"           = bytelist: 3,4           # e.g. two CPUs: bytelist: 3, 4

    "virtual_COM-to-log_port" = uint32 : 0x3F8
    "trace_partition_number" = uint32: 1    # write log to /SHM/1

    [/OS/1/RUNTIME/0]

        "bootline" = "diskroot=/dev/hda4 ip=192.168.2.2:::::vnet0  debug=all verbose=all random.trust_cpu=on console=tty0 console=ttyS0,115200n8"
        "image_0"  = "mbLinuz64"
        "image_1"  = "initrd-mbLinux64.gz"
...
[/DRIVE/0]

"default"   = uint32: 0   # default is OS/0
"bus"       = uint32: 1   # 1 0 0   0x8086 0xF1A5 NVMe controller
"device"    = uint32: 0   #
"function"  = uint32: 0   #

    [/DRIVE/0/PARTITION/3]
    "OS" = uint32: 1   # assigned to OS/1
...

Note:

The eci-rth meta-package makes generic assumptions regarding the desired NVME or SATA disk drive partition assignment for the pos-rt and pos-xe namespaces. To place the POS rootfs system.sfs on the predefined pos-rt EXT4 partition (for example, /dev/nvme0n1p4) and to switch manually from /boot/rth/pos-rt/initrd-mbLinux64.gz to the slimmer /boot/rth/pos-rt/initrd-mbLinux64-no-sfs.gz (approximately 850 KB in size), proceed with the following steps:

$ cd /boot/rth/pos-rt/
$ gunzip initrd-mbLinux64.gz
$ mkdir initramfs-tmp
$ cd initramfs-tmp/
$ cpio -i -d -H newc --no-absolute-filenames < ../initrd-mbLinux64   # unpack the initramfs
$ cd ..
$ diskroot=nvme0n1p4
$ mkdir -p /mnt/tmp-`basename $diskroot`
$ mount /dev/$diskroot /mnt/tmp-`basename $diskroot`
$ dd if=initramfs-tmp/system.sfs of=/mnt/tmp-`basename $diskroot`/system.sfs  status=none   # copy the rootfs image onto the pos-rt partition
$ rm initramfs-tmp/system.sfs            # drop it from the unpacked initramfs tree
$ gzip initrd-mbLinux64                  # restore the original compressed initramfs
$ cd initramfs-tmp/
$ find . | cpio -H newc -o > ../initrd-mbLinux64-no-sfs   # repack without system.sfs
$ cd ..
$ cat initrd-mbLinux64-no-sfs | gzip > initrd-mbLinux64-no-sfs.gz
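
A couple of sanity checks after repacking are worthwhile: confirm that system.sfs landed on the partition and is no longer carried inside the new initramfs, then release the temporary mount point (paths follow the example above):

$ ls -lh /mnt/tmp-nvme0n1p4/system.sfs
$ zcat initrd-mbLinux64-no-sfs.gz | cpio -t 2>/dev/null | grep system.sfs || echo "system.sfs removed"
$ umount /mnt/tmp-nvme0n1p4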

Modification to the ECI GRUB configuration templates /etc/grub.d/42-rth-pos-rt-linux or /etc/grub.d/43-rth-pos-xe-linux may be necessary for customizing the GRUB multiboot entries (for example, module2). If you change the names of the POS images, make sure that you follow the naming convention, for example, initrd-mbLinux64-no-sfs.gz, Linux_Linux64_<diskroot>.txt, and mbLinuz.

$ vi /etc/grub.d/42-rth-pos-rt-linux
...
menuentry "ECI-R (RTS Hypervisor) GPOS GNU/Debian Desktop and POS Preempt-RT Linux" {
    search --no-floppy --fs-uuid --set=root ae1a66a5-ee9c-4573-9249-dd207ffa6248
    multiboot2 /boot/rth/rthx86
    module2 /boot/rth/license.txt
    module2 /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
    echo "Loading POS multi-boot Linux Preempt-RT bootable [OVERLAYFS] images..."
    module2 /boot/rth/pos-rt/mbLinuz mbLinuz64
    module2 /boot/rth/pos-rt/initrd-mbLinux64-no-sfs.gz initrd-mbLinux64.gz
    echo "Loading [ROOTFS] GPOS GNU/Debian Desktop bootable images ..."
    module2 /vmlinuz vmlinuz.lnk
    module2 /initrd.img initramfs.lnk
    ...
$ update-grub
$ rth -sysreboot && reboot
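
Before rebooting, a quick grep can confirm that update-grub emitted the expected multiboot entry into the generated configuration:

$ grep -n -A2 "multiboot2 /boot/rth/rthx86" /boot/grub/grub.cfg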

POS Configured with Intel® Time Coordinated Computing (TCC)

RTH provides built-in Intel® Time Coordinated Computing (Intel® TCC) and Intel® Resource Director Technology (Intel® RDT) support across Intel® Xeon® processors, Intel® Core™ processors, and Intel Atom® processors.

Attention

The Intel® Time Coordinated Computing Mode BIOS option will not function with the RTS Hypervisor.

You can configure all features from the RTH configuration file (for example, /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt).

../_images/TCC_RTS-overview.png

GPU and CPU Cores RTH Cache L2/L3 Partitioning

L2 or L3 Last Level Cache (LLC) memory is shared between multiple CPU cores by default. Intel® Cache Allocation Technology (Intel® CAT), available on selected Intel® processors, makes it possible to divide the cache and assign segments to individual CPUs.

../_images/TCC_CAT_L3_partitioned_seq-diag.png

Enable CPU resource partitioning in the [/SYSTEM] section of the RTH configuration file:

[/SYSTEM]
"CPU_resource_partitioning" = uint32: 1

  1. Detect whether the Intel® processor supports Intel® CAT by viewing the hypervisor log messages:

    $ readtrace -o 1
    

    For example, the Intel® Core™ i5-6500TE processor features:

    • Four logical CPUs with a total of four Classes of Service (COS).

    • 12 cache segments that can be assigned to individual COS.

    Cache topology:
        |---------------|
    CPU |  1|  2|  3|  4|
        |---------------|
    L1d |   |   |   |   |
        |---------------|
    L1i |   |   |   |   |
        |---------------|
    L2  |   |   |   |   |
        |---------------|
    L3  |               |  inclusive
        |---------------|
    ...
    CAT supported on L3 Cache 1 (CPUs 1 - 4):
        Max. Class of Service (COS): 4
        Cache segments             : 12
    ...
    
  2. When RTH detects either Intel® TCC or Intel® CAT, add the extra [/OS/1] key L3_cache_X_segments or L2_cache_X_segments to the RTH configuration file to change the CPU cache segment partitioning:

    $ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
    [/OS/1]
    
        "name"          = "Privileged RTOS"
        "boot_priority" = uint32: 1
        "memory_size"   = uint64: 0x20000000      # 512 MB
        "CPU"           = bytelist: 3,4           # e.g. two CPUs: bytelist: 3, 4
        "virtual_MMU"             = uint32: 1
        "restricted_IO"           = uint32: 1
        "L3_cache_1_segments" = bytelist: 1,2,3,4,5,6
    
        "virtual_COM-to-log_port" = uint32: 0x3F8
        "trace_partition_number" = uint32: 1    # write log to /SHM/1
    
        [/OS/1/RUNTIME/0]
    
            "bootline" = "diskroot=/dev/ram ip=192.168.2.2:::::vnet0  debug=all verbose=all random.trust_cpu=on console=tty0 console=ttyS0,115200n8"
            "image_0"  = "mbLinuz64"
            "image_1"  = "initrd-mbLinux64.gz"
    
  3. Reboot the system after adding the extra keys to the RTH configuration file. Then, check whether the changes are applied:

    $ readtrace -o 1
    

    The expected output should be similar to the following:

    ...
    Cache allocation:
    Package 1
        L3 #1  (12 segments)  6 MB
        OS  0 |      ******|  6 segments = 3 MB
        OS  1 |******      |  6 segments = 3 MB
    
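Segment granularity follows from the cache geometry: a 6 MB L3 cache divided into 12 segments yields 512 KB per segment, so the six segments assigned to each OS above amount to 3 MB. A quick sanity computation:

$ echo "$(( 6 * 1024 / 12 )) KB per segment, $(( 6 * 1024 / 12 * 6 / 1024 )) MB for 6 segments"
512 KB per segment, 3 MB for 6 segments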

Certain Intel® processors provide a sub-feature called GT_CLOS that restricts the GPU to a subset of the LLC segments (available only with RTH development_mode on those processors).

$ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
...
[/SYSTEM]
     "IOMMU" = uint32: 1 # Activate IOMMU for all OSs
     "development_mode" = uint32: 1
     "GPU_cache_restriction" = uint32: 1

$ readtrace -o 1
...
Supported GPU cache restriction settings:
  0: Cache segments: 1 2 3 4 5 6 7 8 9 10 11 12
  1: Cache segments: 7 8 9 10 11 12
  2: Cache segments: 11 12
  3: Cache segments: 12

Cache allocation:
  Package 1
    L3 #1  (12 segments)  6 MB
      OS  0 |      ******|  6 segments = 3 MB
      OS  1 |******      |  6 segments = 3 MB
      GPU   |      ******|  6 segments


../_images/TCC_CAT_L3_partitioned.png

RTH Software SRAM /dev/shm

RTH provides the built-in Software SRAM Intel® TCC feature, available on 11th and 12th Gen Intel® Core™ processors and Intel Atom® x6000E Series processors, for achieving low-latency L2 or L3 cache data locality through the RTH Application Programming Interface (API) and the Shared-Memory device /dev/shm.

../_images/TCC_L3_shared_seq-diag.png


$ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
...
[/SHM/0]

    "name" = "trace_0"
    "size" = uint64: 0xF000

[/SHM/1]

    "name" = "trace_1"
    "size" = uint64: 0xF000

[/SHM/2]

    "name" = "low-latency"
    "size" = uint64: 0x100000   # one megabyte
    "cache_locked" = uint32: 3 # lock data in the fastest shared cache
...

$ readtrace -o 1
...
    L2 #1  (20 segments)  1.25 MB
      OS  0 |********************|  20 segments = 1.25 MB
    L2 #2  (20 segments)  1.25 MB
      OS  0 |********************|  20 segments = 1.25 MB
    L2 #3  (20 segments)  1.25 MB
      OS  0 |********************|  20 segments = 1.25 MB
    L2 #4  (20 segments)  1.25 MB
      OS  0 |********************|  20 segments = 1.25 MB
    L2 #8  (20 segments)  1.25 MB
      OS  1 |********************|  20 segments = 1.25 MB
    L3 #1  (12 segments)  24 MB
      OS  0   |      ***** | 5 segments = 10 MB
      OS  1   |******      | 6 segments = 12 MB
      SW SRAM |           *| 1 segment = 2 MB

Init time synchronization
Found OS "GPOS"
Found OS "Privileged RTOS"
Init Registry
Setup Operating Systems
Loading "Privileged RTOS"
Shared Memory Partitions
  0 0x00004086D000 - 0x00004087BFFF (0x00000000F000) "trace_0"
  1 0x00004085E000 - 0x00004086CFFF (0x00000000F000) "trace_1"
  2 0x000040080000 - 0x0000400FFFFF (0x000000080000) "low-latency"
/OS/1 Booting runtime 0
/OS/1 Activating Virtual MMU
/OS/1 COM: 000: Linux version 5.4.115-rt57-rts2.2.02.15407-intel-pk-standard+ (builder@f228
../_images/TCC_L3_shared.png

Note: For more information on RTS Shared-Memory (SHM) and the RTH Application Programming Interface (API) for Linux, Microsoft Windows, and QNX* Neutrino, refer to the RTH Resources or contact info@real-time-systems.com.

RTH Set #AC Split Lock

RTH provides the built-in Alignment Check fault (#AC) on Split Lock Intel® TCC feature, available on recent 12th and 11th Gen Intel® Core™ processors and Intel Atom® x6000E Series processors. It helps detect CPU instructions that apply the LOCK prefix to an unaligned memory location spanning two cache lines; such instructions create more timing overhead than CPU instructions operating on aligned memory locations.

Executing an instruction with the LOCK prefix on an unaligned location results in an #AC fault. You can turn the feature ON or OFF per hypervised OS from the RTH configuration file, enabling fine-grained control.

$ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
...
[/OS/0]
...
"AC_on_split_lock" = uint32: 0
...
[/OS/1]
...
"AC_on_split_lock" = uint32: 1
...
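
With "AC_on_split_lock" enabled for [/OS/1], a LOCK-prefixed access spanning two cache lines raises an alignment-check fault in the POS. On recent kernels, the split-lock detection facility also logs offenders, so an illustrative way to spot them is (message wording varies by kernel version):

$ ssh root@192.168.2.2 "dmesg | grep -i 'split lock'"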

POS Configured with Intel® Ethernet TSN

RTH enables the latest Intel® Ethernet integrated or discrete controllers featuring IEEE 802.1Q-2018 Time Sensitive Networking (TSN).

../_images/TSN_RTS-overview.png

RTH-Mapped GbE PCI Device

ECI makes generic assumptions about the desired PCI assignment of Ethernet devices to the pos-rt and pos-xe runtimes. Modification to the ECI RTH configuration templates /boot/rth/<pos-rt|pos-xe>/Linux_Linux64_<nvme0n1pY|sdY>.txt, /boot/rth/<pos-rt|pos-xe>/WindowsEFI_Linux64_<nvme0n1pY|sdY>.txt, /boot/rth/WindowsEFI_Linux64.txt, or /boot/rth/Linux_Linux64.txt may be necessary for customizing the TSN Ethernet device assignment.

../_images/TSN_assigned.png

The RTH configuration file provides entries that assign exclusive access to PCI devices to a POS, based on the PCI bus topology position (bus and device number) or on the vendor_ID:device_ID key pair.

  1. Identify the RTH default PCI device assignments:

    $ readtrace -o 1
    

    In this example, the default GPOS OS/0 gets all Ethernet PCI devices, including the Intel® Ethernet TSN i225-LM Ethernet Controller (Device ID=0x15F2, Vendor ID=0x8086) and the i210 Ethernet Controller (Device ID=0x1533, Vendor ID=0x8086).

    PCI devices:
    bus dev func vendor device pin IRQ MSI mode | OS | description
    0   0  0   0x8086 0x191F  -    -  n   --  |  0 | Host/PCI bridge
    0   1  0   0x8086 0x1901  A    -  y   --  |    | PCIe Root Port to bus 1
    0   2  0   0x8086 0x1912  A   16  y  INTx |  0 | Display controller
    0   8  0   0x8086 0x1911  A   16  y   --  |    | Base system peripheral
    0  20  0   0x8086 0xA12F  A   16  y  INTx |  0 | USB xHCI controller
    0  20  2   0x8086 0xA131  C   18  y  INTx |  0 | Data acquisition controller
    0  22  0   0x8086 0xA13A  A   16  y  INTx |  0 | Simple comm. controller
    0  22  3   0x8086 0xA13D  D   19  y  INTx |  0 | Serial Controller
    0  23  0   0x8086 0xA102  A   16  y  INTx |  0 | AHCI SATA controller
    0  28  0   0x8086 0xA114  A    -  y   --  |    | PCIe Root Port to bus 2
    0  31  0   0x8086 0xA146  -    -  n   --  |  0 | PCI/ISA bridge
    0  31  2   0x8086 0xA121  -    -  n   --  |  0 | Memory controller
    0  31  3   0x8086 0xA170  A   16  y  INTx |  0 | Audio device
    0  31  4   0x8086 0xA123  A   16  n  INTx |  0 | SMBus controller
    0  31  6   0x8086 0x15B7  A   16  y  INTx |  0 | Network controller
    1   0  0   0x8086 0x15F2  A   16  y  MSI  |  0 | Network controller
    2   0  0   0x8086 0x1533  A   16  y  MSI  |  0 | Network controller
    ...
    
  2. Edit the RTH configuration template to set the desired PCI devices affinity to the POS:

    $ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
    

    Add the following entries to reassign [/PCI/*] devices to [/OS/1], the Privileged-Mode OS:

    [/PCI]
        "default" = uint32: 0   # default is GPOS
    
    # Device assignment with vendor ID / device ID and MSI Mode
    [/PCI/0]
        "OS"         = uint32: 1
        "vendor_ID"  = uint32: 0x8086
        "device_ID"  = uint32: 0x15f2 # i225-LM Ethernet TSN endpoint
        "interrupt_mode" = uint32: 2
    
    [/PCI/1]
        "OS"         = uint32: 1
        "vendor_ID"  = uint32: 0x8086
        "device_ID"  = uint32: 0x1533 # i210-T1 Ethernet TSN endpoint
        "interrupt_mode" = uint32: 2
    ...
    [/PCI/6]
        "OS" = uint32: 1
        "vendor_ID" = uint32: 0x8086
        "device_ID" = uint32: 0x0D9F # i225-IT Ethernet TSN endpoint
        "interrupt_mode" = uint32: 2
    ...
    
  3. Reboot your system, then check whether the PCI device affinity is effectively set to [OS/1], the Privileged-Mode OS:

    $ readtrace -o 1
    

    The expected output should be similar to the following:

    PCI devices:
    bus dev func vendor device pin IRQ MSI mode | OS | description
      0   0  0   0x8086 0x191F  -    -  n   --  |  0 | Host/PCI bridge
      0   1  0   0x8086 0x1901  A    -  y   --  |    | PCIe Root Port to bus 1
      0   2  0   0x8086 0x1912  A   16  y  INTx |  0 | Display controller
      0   8  0   0x8086 0x1911  A   16  y   --  |    | Base system peripheral
      0  20  0   0x8086 0xA12F  A   16  y  INTx |  0 | USB xHCI controller
      0  20  2   0x8086 0xA131  C   18  y  INTx |  0 | Data acquisition controller
      0  22  0   0x8086 0xA13A  A   16  y  INTx |  0 | Simple comm. controller
      0  22  3   0x8086 0xA13D  D   19  y  INTx |  0 | Serial Controller
      0  23  0   0x8086 0xA102  A   16  y  INTx |  0 | AHCI SATA controller
      0  28  0   0x8086 0xA114  A    -  y   --  |    | PCIe Root Port to bus 2
      0  31  0   0x8086 0xA146  -    -  n   --  |  0 | PCI/ISA bridge
      0  31  2   0x8086 0xA121  -    -  n   --  |  0 | Memory controller
      0  31  3   0x8086 0xA170  A   16  y  INTx |  0 | Audio device
      0  31  4   0x8086 0xA123  A   16  n  INTx |  0 | SMBus controller
      0  31  6   0x8086 0x15B7  A   16  y  INTx |  0 | Network controller
      1   0  0   0x8086 0x15F2  A   16  y  MSI  |  1 | Network controller
      2   0  0   0x8086 0x1533  A   16  y  MSI  |  1 | Network controller
    PCI memory:
      0   2  0   BAR 0 0x0000DE000000 - 0x0000DEFFFFFF
      0   2  0   BAR 2 0x0000C0000000 - 0x0000CFFFFFFF
      0   8  0   BAR 0 0x0000DF450000 - 0x0000DF450FFF
      0  20  0   BAR 0 0x0000DF430000 - 0x0000DF43FFFF
      0  20  2   BAR 0 0x0000DF44F000 - 0x0000DF44FFFF
      0  22  0   BAR 0 0x0000DF44E000 - 0x0000DF44EFFF
      0  22  3   BAR 1 0x0000DF44D000 - 0x0000DF44DFFF
      0  23  0   BAR 0 0x0000DF448000 - 0x0000DF449FFF
      0  23  0   BAR 1 0x0000DF44C000 - 0x0000DF44C0FF
      0  23  0   BAR 5 0x0000DF44B000 - 0x0000DF44B7FF
      0  31  2   BAR 0 0x0000DF444000 - 0x0000DF447FFF
      0  31  3   BAR 0 0x0000DF440000 - 0x0000DF443FFF
      0  31  3   BAR 4 0x0000DF420000 - 0x0000DF42FFFF
      0  31  4   BAR 0 0x0000DF44A000 - 0x0000DF44A0FF
      0  31  6   BAR 0 0x0000DF400000 - 0x0000DF41FFFF
      1   0  0   BAR 0 0x0000DF200000 - 0x0000DF2FFFFF
      1   0  0   BAR 3 0x0000DF300000 - 0x0000DF303FFF
      2   0  0   BAR 0 0x0000DF000000 - 0x0000DF0FFFFF
      2   0  0   BAR 3 0x0000DF100000 - 0x0000DF103FFF
    PCI I/O space:
      0   2  0   BAR 4 0xF000 - 0xF03F
      0  22  3   BAR 0 0xF0A0 - 0xF0A7
      0  23  0   BAR 2 0xF090 - 0xF097
      0  23  0   BAR 3 0xF080 - 0xF083
      0  23  0   BAR 4 0xF060 - 0xF07F
      0  31  4   BAR 4 0xF040 - 0xF05F
      2   0  0   BAR 2 0xE000 - 0xE01F
    
  4. Confirm that the Privileged-Mode OS kernel modules, igc.ko and igb.ko, have successfully loaded onto the following devices, respectively:

    • Intel® i225-LM Ethernet Controller (PCI memory BAR region 0 0x0000DF200000 - 0x0000DF2FFFFF and region 3 0x0000DF300000 - 0x0000DF303FFF)

    • Intel® i210 Ethernet PCIe Controller (PCI memory BAR region 0 0x0000DF000000 - 0x0000DF0FFFFF, BAR region 3 0x0000DF100000 - 0x0000DF103FFF, and PCI I/O space BAR region 2 0xE000 - 0xE01F)

    $ ssh root@192.168.2.2 "lspci --vv"
    

    The expected output should be similar to the following:

    01:00.0 Ethernet controller [0200]: Intel Corporation Device [8086:15f2] (rev 01)
        Subsystem: Intel Corporation Device [8086:0000]
        Flags: bus master, fast devsel, latency 0
        Memory at df200000 (32-bit, non-prefetchable) [size=1M]
        Memory at df300000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Kernel driver in use: igc
        Kernel modules: igc
    
    02:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
        Flags: bus master, fast devsel, latency 0
        Memory at df000000 (32-bit, non-prefetchable) [size=1M]
        I/O ports at e000 [size=32]
        Memory at df100000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Kernel driver in use: igb
        Kernel modules: igb
    
  5. Confirm that the Linux network interfaces are successfully brought up on the Privileged-Mode OS.

    $ ssh root@192.168.2.2 "ip link show"
    

    The expected output should be similar to the following:

    4: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc taprio state DOWN mode DEFAULT group default qlen 1000
        link/ether 00:18:7d:be:11:e7 brd ff:ff:ff:ff:ff:ff
    5: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
        link/ether 00:a0:c9:00:00:00 brd ff:ff:ff:ff:ff:ff
    
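The taprio qdisc visible on enp2s0 above implements IEEE 802.1Qbv scheduled traffic; its gate configuration can be inspected from the Privileged-Mode OS (illustrative):

$ ssh root@192.168.2.2 "tc qdisc show dev enp2s0"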

RTH Emulated Intel® GbE MSI-X Interrupt

The RTS Hypervisor introduced a new interrupt mode, "interrupt_mode" = uint32: 3, which enables MSI-X interrupt emulation for devices that expose only a single MSI interrupt for multiple Ethernet TX and RX hardware queues.

The setting is valid when:

  • The device has an MSI capability and no MSI-X capability (check with lspci -vvv)

  • The IOMMU is enabled from UEFI menu (that is, VT-d=enabled)

  • The PCI device is assigned to an OS running in the Privileged Mode with virtual_MMU and restricted_IO active.

Important

Without this setting, the stmmac-pci.ko kernel module fails to load on the following Ethernet controllers with the error stmmaceth 0000:00:1e.4 enp0s30f4: stmmac_request_irq: alloc mac MSI -22 (error: -22).

To avoid this, "interrupt_mode" = uint32: 3 is mandatory for enabling multi-queue device IRQ and memory mapping.

  1. Edit the RTH configuration template to set PCI devices affinity to the POS:

    $ vi /boot/rth/pos-rt/Linux_Linux64_nvme0n1p4.txt
    ...
    # Device assignment with vendor ID / device ID and MSI-X emulated mode
    [/PCI/3]
        "OS" = uint32: 1
        "vendor_ID" = uint32: 0x8086
        "device_ID" = uint32: 0x4BA0  # EHL mGbE Ethernet TSN endpoint #1
        "interrupt_mode" = uint32: 3
    
    [/PCI/4]
        "OS" = uint32: 1
        "vendor_ID" = uint32: 0x8086
        "device_ID" = uint32: 0x4B32 # EHL mGbE Ethernet TSN endpoint #2
        "interrupt_mode" = uint32: 3
    
    [/PCI/5]
        "OS" = uint32: 1
        "vendor_ID" = uint32: 0x8086
        "device_ID" = uint32: 0xA0AC # TGL-U mGbE Ethernet TSN endpoint
        "interrupt_mode" = uint32: 3
    
  2. Reboot your system, then check whether the PCI device affinity is effectively set to [OS/1], the Privileged-Mode OS:

    $ readtrace -o 1
    

    The following is the expected output:

    PCI devices:
    bus dev func vendor device pin IRQ MSI mode | OS | description
      0   0  0   0x8086 0x452E  -    -  n   --  |  0 | Host/PCI bridge
      0   2  0   0x8086 0x4571  A   16  y  INTx |  0 | Display controller
      0   8  0   0x8086 0x4511  A   16  y   --  |    | Base system peripheral
      0  16  0   0x8086 0x4B44  A   16  n  INTx |  0 | Serial bus controller
      0  16  1   0x8086 0x4B45  B   17  n  INTx |  0 | Serial bus controller
      0  17  0   0x8086 0x4B96  A   16  y  INTx |  0 | Simple comm. controller
      0  17  1   0x8086 0x4B97  B   17  y  INTx |  0 | Simple comm. controller
      0  19  0   0x8086 0x4B84  A   16  y  INTx |  0 | Serial bus controller
      0  20  0   0x8086 0x4B7D  A   16  y  INTx |  0 | USB xHCI controller
      0  20  2   0x8086 0x4B7F  -    -  n   --  |  0 | Memory controller
      0  21  0   0x8086 0x4B78  A   27  n  INTx |  0 | Serial bus controller
      0  21  2   0x8086 0x4B7A  C   29  n  INTx |  0 | Serial bus controller
      0  21  3   0x8086 0x4B7B  D   30  n  INTx |  0 | Serial bus controller
      0  22  0   0x8086 0x4B70  A   16  y  INTx |  0 | Simple comm. controller
      0  23  0   0x8086 0x4B63  A   16  y  MSI  |    | AHCI SATA controller
      0  25  0   0x8086 0x4B4B  A   31  n  INTx |  0 | Serial bus controller
      0  25  2   0x8086 0x4B4D  C   33  n  INTx |  0 | Simple comm. controller
      0  26  0   0x8086 0x4B47  A   16  n  INTx |  0 | SD Host Controller
      0  26  1   0x8086 0x4B48  B   17  n  INTx |  0 | SD Host Controller
      0  27  0   0x8086 0x4BB9  A   16  y  INTx |  0 | Serial bus controller
      0  27  1   0x8086 0x4BBA  B   17  y  INTx |  0 | Serial bus controller
      0  27  6   0x8086 0x4BBF  C   18  y  INTx |  0 | Serial bus controller
      0  28  0   0x8086 0x4B38  A    -  y   --  |    | PCIe Root Port to bus 1
      0  29  0   0x8086 0x4BB3  A   16  y   --  |    | Base system peripheral
      0  29  1   0x8086 0x4BA0  A   16  y  emul |  1 | Network controller
      0  30  0   0x8086 0x4B28  A   16  n  INTx |  0 | Simple comm. controller
      0  30  1   0x8086 0x4B29  B   17  n  INTx |  0 | Simple comm. controller
      0  30  4   0x8086 0x4B32  A   16  y  emul |  1 | Network controller
      0  31  0   0x8086 0x4B00  -    -  n   --  |  0 | PCI/ISA bridge
      0  31  3   0x8086 0x4B58  A   16  y  INTx |  0 | Audio multimedia device
      0  31  4   0x8086 0x4B23  A   16  n  INTx |  0 | SMBus controller
      0  31  5   0x8086 0x4B24  -    -  n   --  |  0 | Serial bus controller
      1   0  0   0x8086 0x15F2  A   16  y  MSI  |  1 | Network controller
    
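To confirm from the Privileged-Mode OS that the emulated MSI-X vectors were allocated by the stmmac driver, inspect the interrupt table (the interface name enp0s30f4 follows the error message quoted above and may differ on your platform):

$ ssh root@192.168.2.2 "grep enp0s30f4 /proc/interrupts"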

RTH Sanity Check

Sanity Check #1: Manage Privileged-ECI OS Runtime Lifecycle

The following section is applicable to:

../_images/target_generic.png

RTH comes with several ways to manage each operating system lifecycle independently at runtime (OS reboot, kernel command-line edit):

  • RTH Event API

  • HW CPU warm reset, CPU halt

  • OS images reload

  • OS boot command line edit

This section explains how to use the RTH tools, which are built into the ECI POS runtime and also installed in your virtualized GPOS.

Step 1: Verify the rthbase driver status

Do the following in a console on the ECI privileged-OS:

  1. Run the lsmod command and verify whether the rthbasedrv module is loaded in kernel-space.

  2. Verify whether the sudo rth -h command returns the help menu, ensuring that the x86_64 binaries /bin/rth and /lib/libRth.so are dynamically linked to the user-space process.

Do the following in a console on Microsoft Windows (or Canonical® Ubuntu®) Virtualized-OS:

  1. Run the lsmod command and verify whether the virtrthbasedrv module is loaded in kernel-space.

  2. Verify whether the sudo rth -h command returns the help menu, ensuring that the x86_64 binaries /bin/rth and /lib/libRth.so are dynamically linked to the user-space process.
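
The following is a minimal sketch of these checks from a POS console (on a Linux GPOS, substitute virtrthbasedrv for rthbasedrv):

$ lsmod | grep rthbasedrv       # kernel module loaded?
$ sudo rth -h                   # help menu confirms the tool is available
$ ldd /bin/rth | grep libRth    # dynamically linked against libRth.so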

Step 2: Verify CPU core warm-reset

Do the following in a console on Microsoft Windows (or Canonical® Ubuntu®) Virtualized-OS:

  1. Verify whether the CPU warm-reset command, sudo rth -o 1 -reset, results in a successful OS/1 reboot of the privileged RTOS:

    $ sudo rth -o 1 -reset
    

Step 3: Reboot ECI privileged-OS individually

  1. Verify whether the OS/1 halt command, sudo rth -o 1 -halt, results in a graceful shutdown of the privileged RTOS:

    $ sudo rth -o 1 -halt
    
  2. Verify whether the OS/1 boot command, sudo rth -o 1 -boot, results in a successful boot of the privileged RTOS:

    $ sudo rth -o 1 -boot
    

Step 4: System reset of all CPUs: Hypervisor, virtualized OS, and privileged-OS

  1. Verify whether the system shutdown command, sudo rth -sysshutdown, results in a graceful shutdown of the x86_64 target machine (that is, a PMIC cold-reset event):

    $ sudo rth -sysshutdown
    
  2. Verify whether the system reboot command, sudo rth -sysreboot && reboot, results in a successful reboot of the Hypervisor, privileged RT-OS, and virtualized OS:

    $ sudo rth -sysreboot && reboot
    

Sanity Check #2: Troubleshooting Privileged-ECI OS Runtime

RTH comes with several ways to debug each operating system runtime state:

  • Redirect Hypervisor and Kernel boot (printk) log messages to a dedicated Ethernet interface

  • Redirect Kernel boot (printk) log messages and a console to a dedicated COMx/ttySx port

  • Redirect Kernel debugger (KGDB) and a console to a dedicated COMx/ttySx port

This section explains how to use the RTH debug tools, which are built into the ECI privileged-RT OS runtime and also installed in your Virtualized-OS, either Canonical® Ubuntu® or Microsoft Windows.

Step 1: Find the Ethernet controllers PCI ID

$ lspci | grep Ethernet
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2) I218-LM (rev 05)
06:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

Step 2: Dedicate an Ethernet Port for RTH logging

Edit the RTH configuration file and append the following [/LOG] section. Note that lspci reports the device number in hexadecimal while the readtrace PCI table uses decimal, so make sure the bus/device/function values given here match the numbering reported for your controller:

[/LOG]
"bus" = uint32: 0
"device" = uint32: 19
"function" = uint32: 0
"host_IP" = "192.168.100.48"
"target_IP" = "192.168.100.250"

Step 3: Read the log message remotely

From another Linux host machine, read the RTH boot message:

$ sudo apt-get install netcat
$ sudo ip addr add 192.168.100.48/24 dev eth0
$ sudo netcat -l -u 514