IIC TSN Testbed 802.1Q-2018 Interoperability testing¶
The interopapp is a TSN 802.1Q-2018 application and benchmarking methodology intended for administrators to collect data and build end-to-end metrics of ECI node clusters interconnected across a Time-Sensitive Networking (TSN) domain. The interopapp was developed as a result of Intel’s participation in Industrial Internet Consortium (IIC) plugfest events and the TSN Testbed (https://hub.iiconsortium.org/time-sensitive-networks).
The interopapp supports up to 23 Intel® Industrial Ethernet Controllers as TSN endpoints and provides test and measurement of their ability to interoperate under the IEEE 802.1Q-2018 Enhancements for Scheduled Traffic (EST) policy (formerly known as 802.1Qbv) through TSN switches such as the Kontron PCIE-0400-TSN NIC/TSN switch.
The current ECI release supports the following compatible hardware:
[Endpoint] Intel® Tiger Lake UP3 (TGL) Ethernet MAC Controller Time-Sensitive Networking (TSN) Reference Software
[Endpoint] Intel® Ethernet Controller I225 Time-Sensitive Networking (TSN) Linux Reference upstream
[Endpoint] Intel® Ethernet Controller I210 Time-Sensitive Networking (TSN) Linux Reference Software
interopapp Usage Examples¶
The following section is applicable to the supported TSN endpoints listed above.
/opt/intel/interopapp/talker
TSN endpoint that transmits VLAN-tagged, time-bounded synthetic UADP packets following the IEEE 802.1Q-2018 EST policy (a combined usage example for these options follows this list).

    # cd /opt/intel/interopapp
    # ./talker --h
    Usage: talker [OPTION...]
    Talker -- talker side of the interop app

      -c, --config=FILE_PATH     Path to Configuration File
      -l, --log-file=FILE_PATH   Path to the Log-File
      -v, --verbose=LEVEL        Set verbosity level
      -?, --help                 Give this help list
          --usage                Give a short usage message
      -V, --version              Print program version

    Mandatory or optional arguments to long options are also mandatory or
    optional for any corresponding short options.
/opt/intel/interopapp/listener
TSN endpoint that receives VLAN-tagged, time-bounded synthetic UADP packets. It generates statistics for each inbound packet of the traffic class, from the scheduled LaunchTime to the effective arrival of the payload data, then multicasts the statistics to another station for data analytics.

    # cd /opt/intel/interopapp
    # ./listener --h
    Usage: listener [OPTION...]
    Listener -- listener side of the interop app

      -c, --config=FILE_PATH     Path to Configuration File
      -l, --log-file=FILE_PATH   Path to the Log-File
      -v, --verbose=LEVEL        Set verbosity level
      -?, --help                 Give this help list
          --usage                Give a short usage message
      -V, --version              Print program version

    Mandatory or optional arguments to long options are also mandatory or
    optional for any corresponding short options.
/opt/intel/interopapp/collector.py
Multicast collector of the statistics coming from each /opt/intel/interopapp/listener runtime, providing TSN domain data analytics (i.e. time-precision monitoring …), data storage, and data visualization.
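As a quick illustration of the common command-line options shown above, the talker and listener can each be pointed at a configuration file, a log file, and a verbosity level in a single invocation. The log-file paths and verbosity level below are arbitrary examples, not defaults of this release:

    # cd /opt/intel/interopapp
    # ./talker   -c ./talker.ini   -l /tmp/talker.log   -v 3
    # ./listener -c ./listener.ini -l /tmp/listener.log -v 3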
The default configuration uses OPC UA UADP frames to encode the information. Virtual LAN (VLAN) traffic-class tagging (i.e. VLAN PCP = 3 or 5) is applied directly to each OSI Layer 2 Ethernet synthetic frame, whereas the multicast statistics use OSI Layer 4 UDP IPv4 synthetic packets.
Note
The interopapp methodology does not depend on any specific IEC 62541 OPC UA PubSub application framework or stack; it implements purpose-built synthetic frames and a PDU encoder and decoder.
The statistics collected and multicast to a centralized recording station allow analysis of the following (a pmc spot-check sketch for the time-synchronization metrics follows this list):
Adherence to schedule for scheduled traffic, e.g. Tx timestamps and reference TxOffset values
End-to-end network behavior, e.g. PHY-to-PHY latencies for all the paths covered by talker and listener streams
Time synchronization, e.g. changes in the selected grandmaster, grandmaster-to-subordinate offset…
Reliability, e.g. missed frames, missed application cycles…
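The time-synchronization metrics can also be spot-checked independently of the collector using the linuxptp pmc management client. This is a minimal sketch, assuming ptp4l is running with its default UDS management socket; it is not part of the interopapp itself:

    # pmc -u -b 0 'GET PARENT_DATA_SET'     # grandmasterIdentity: detect grandmaster changes
    # pmc -u -b 0 'GET CURRENT_DATA_SET'    # offsetFromMaster: grandmaster-to-subordinate offset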
The admin user can achieve parametric testing by editing or scripting the following configuration files (a parameter-sweep sketch follows the file listings and helper description below):
/opt/intel/interopapp/listener.ini
    # Listener configuration file for ETF test
    [global]
    test = true
    version = "0.4.1"
    debug = 4

    [listener]
    id = 1                            # <1-23> decimal id value
    vendorName = "<OEM>"              # vendor-specific description
    deviceName = "<ECI node #1>"      # device-specific description
    interfaceName = "eth3.3"          # "<VID>" = Ethernet interface VLAN ID TSN Endpoint name for the test-traffic
    timestampMode = "hardware"        # "hardware" = Intel® Industrial Ethernet Controller L2/PTP hw-timestamp, or "" = Linux clock_monotonic timestamp
    interval = 20000000               # network cycletime period in nanoseconds, e.g. 20 ms = 20000000
    ptp4lUdsPath = "/var/run/ptp4l"   # ptp4l runtime GET requests
    ptp4lLocalPath = "/var/run/test"  # ptp4l runtime GET requests
    debug = 6                         # runtime test verbosity level
    mcastInterfaceName = "eth3.3"     # "<eth?>" = alternative Ethernet interface on which to multicast the statistics, if it differs from the ingress TSN Endpoint
    mcastPort = 4840                  # UADP UDP port for the multicast statistics, if it differs from the default
    mcastGrpIp = 224.0.0.1            # UADP UDP multicast IP for the statistics, if it differs from the default
/opt/intel/interopapp/talker.ini
    # Talker configuration for ETF Test
    [global]
    test = true
    version = "0.4.1"
    debug = 4                         # init. verbosity level

    # talker configuration section
    [talker]
    id = 1                            # <1-23> decimal id value
    vendorName = "<OEM>"              # vendor-specific label
    deviceName = "<ECI node #1>"      # device-specific label
    interfaceName = "eth3.3"          # "<VID>" = Ethernet interface VLAN ID TSN Endpoint name for egress test-traffic
    txBackend = "etf"                 # "none" or "etf" = tc qdisc etf
    etfDelta = 150000                 # corresponds to the tc qdisc etf delta
    timestampMode = "hardware"        # "hardware" = i210 L2/PTP hw-timestamp, or "" = Linux clock_monotonic timestamp
    ptp4lUdsPath = "/var/run/ptp4l"   # ptp4l runtime GET requests
    ptp4lLocalPath = "/var/run/test"  # ptp4l runtime GET requests
    interval = 20000000               # network cycletime period in nanoseconds, e.g. 20 ms = 20000000
    delta = 600000                    # SO_TXTIME L2 or L4 frame delta-time in nanoseconds, e.g. 600 us = 600000
    debug = 6                         # runtime test verbosity level
    priority = 3                      # VLAN PCP value
    txoffset = 500                    # Intel® Industrial Ethernet Controller TX queue worst-case sw-latency from the etf qdisc to the MAC/PHY, e.g. worst-case 500 ns = 500
/opt/intel/interopapp/vlan_helpers.sh
Alternative utility helper script provided to speed up the setup of the talker and listener pre-conditions.
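For scripted parametric testing, the configuration keys above can be rewritten in place between runs. This is a minimal sketch, assuming the talker.ini layout shown above; the sweep values, run duration, and log-file names are arbitrary examples:

    #!/bin/sh
    # Sweep the network cycletime (interval) of the talker and keep one log per run.
    cd /opt/intel/interopapp
    for interval in 10000000 20000000 50000000; do
        # Rewrite only the numeric value, keeping the trailing comment intact.
        sed -i "s/^interval = [0-9]*/interval = ${interval}/" talker.ini
        ./talker -c ./talker.ini -l "/tmp/talker_${interval}.log" &
        TALKER_PID=$!
        sleep 60        # let the talker transmit for one minute per setting
        kill ${TALKER_PID}
    done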
interopapp Sanity-Check Testing¶
The following interopapp pre-conditions are applicable to the supported TSN endpoints listed above.
The admin user may configure [DUT Node #1] manually by editing /opt/intel/interopapp/gPTP.cfg and /opt/intel/interopapp/vlan_helpers.sh to apply the vlan3000@enp5s0 VLAN ID and PCP, and to add the stream DMAC to the multicast filter, using the following scripts:
    # cd /opt/intel/interopapp
    # source vlan_helpers.sh
    # create_vlan
    # source ./start_ptp.sh
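For reference, the kind of commands a create_vlan helper typically wraps can be issued directly with iproute2. This is a sketch only, not the exact content of vlan_helpers.sh; the egress-qos-map value follows the "egress mapping 15:5" output shown later in this section, and the multicast DMAC is a placeholder example:

    # Create the VLAN interface with VLAN ID 3000 and map skb priority 15 to PCP 5.
    ip link add link enp5s0 name vlan3000 type vlan id 3000 egress-qos-map 15:5
    ip link set dev vlan3000 up
    # Add the stream destination MAC (placeholder value) to the multicast filter.
    ip maddress add 01:00:5e:00:00:01 dev vlan3000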
Note

The ECI node ptp4l and phc2sys time-synchronization daemons might already be preset as Linux boot services; check with systemctl status:

    # systemctl status ptp4l@enp1s0.service
    ptp4l@enp1s0.service - Precision Time Protocol (PTP) service on Interface enp1s0
       Loaded: loaded (/lib/systemd/system/ptp4l@.service; enabled; vendor preset: enabled)
       Active: active (running) since Sun 2019-10-27 19:34:45 UTC; 8s ago
     Main PID: 10123 (ptp4l)
        Tasks: 1 (limit: 4915)
       Memory: 368.0K
       CGroup: /system.slice/system-ptp4l.slice/ptp4l@enp1s0.service
               └─10123 /usr/bin/ptp4l -i enp1s0 -f /usr/lib/systemd/user/ptp4l-i210.cfg

To stop these services, see systemd Intel® Ethernet TSN Endpoint boot and runtime services for further details.
Similarly, on [DUT Node #2], edit /opt/intel/interopapp/vlan_helpers.sh and /opt/intel/interopapp/gPTP.cfg to apply to enp5s0, and /opt/intel/interopapp/etf_helpers.sh to set up the vlan3000@enp5s0 VLAN ID / PCP and replace the tc qdisc etf configuration, using the following scripts:
    # cd /opt/intel/interopapp
    # source ./vlan_helpers.sh
    # create_vlan
    # source ./start_ptp.sh
    Create VLAN Interface (vlan3000) with egress mapping 15:5...
    Configured VLAN interface vlan3000:
    9: vlan3000@enp5s0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether 00:07:32:6b:a7:84 brd ff:ff:ff:ff:ff:ff
    # source ./etf_helpers.sh
    # configure_mqprio_etf
    Configuring mqprio plus etf on enp5s0...
    Cleaning up root qdisc on enp5s0...
    Configuring mqprio on enp5s0...
    Configuring etf on enp5s0...
    # print_etf
    Printing qdisc on enp5s0...
    qdisc mqprio 100: root tc 2 map 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 queues:(0:0) (1:3)
     Sent 11204 bytes 46 pkt (dropped 0, overlimits 0 requeues 0)
     backlog 0b 0p requeues 0
    qdisc fq_codel 0: parent 100:4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
     Sent 5530 bytes 15 pkt (dropped 0, overlimits 0 requeues 0)
     backlog 0b 0p requeues 0
      maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
      new_flows_len 0 old_flows_len 0
    qdisc fq_codel 0: parent 100:3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
     Sent 4298 bytes 15 pkt (dropped 0, overlimits 0 requeues 0)
     backlog 0b 0p requeues 0
      maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
      new_flows_len 0 old_flows_len 0
    qdisc fq_codel 0: parent 100:2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
     Sent 1376 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
     backlog 0b 0p requeues 0
      maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
      new_flows_len 0 old_flows_len 0
    qdisc etf 8002: parent 100:1 clockid TAI delta 150000 offload on deadline_mode off skip_sock_check off
     Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
     backlog 0b 0p requeues 0
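The qdisc layout printed above can also be reproduced directly with tc. This is a sketch of what a configure_mqprio_etf step typically issues, not the exact content of etf_helpers.sh; the queue layout is inferred from the print_etf output, and the etf delta matches the etfDelta value in talker.ini:

    # Two traffic classes: TC0 on hardware queue 0 (time-aware), TC1 on queues 1-3.
    tc qdisc replace dev enp5s0 parent root handle 100 mqprio \
        num_tc 2 \
        map 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 \
        queues 1@0 3@1 \
        hw 0
    # Attach the Earliest TxTime First (etf) qdisc to queue 0 with hardware offload.
    tc qdisc replace dev enp5s0 parent 100:1 etf \
        clockid CLOCK_TAI delta 150000 offload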
Sanity-Check #1: TSN Endpoint smoketest egress and ingress traffic¶
This is a smoke test that runs the interopapp test methodology between a separate remote [MONITORING HOST] and a device-under-test localhost [DUT localhost] in order to verify communication basics.
Note
Time synchronization and setting the socket-level SO_TXTIME packet LaunchTime are disabled:
# systemctl stop ptp4l@enp5s0.service
# systemctl stop phc2sys@enp5s0.service
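To confirm that both services are actually stopped before running the smoke test, a quick check can be made with standard systemd tooling (this check is not part of the interopapp):

    # systemctl is-active ptp4l@enp5s0.service phc2sys@enp5s0.service    # both should report "inactive"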
On [MONITORING HOST], start the collector application:
    # cd /opt/intel/interopapp
    # python3 collector.py
On [DUT localhost], edit /opt/intel/interopapp/listener.ini to match the vlan3000@enp5s0 TSN Endpoint configuration before starting the listener application runtime from the console terminal:

    # cd /opt/intel/interopapp
    # ./listener -c ./listener.ini
On [DUT localhost], edit /opt/intel/interopapp/talker.ini to match the vlan3000@enp5s0 TSN Endpoint configuration before starting the talker application runtime from the console terminal:

    # cd /opt/intel/interopapp
    # ./talker -c ./talker.ini
Step 5: On [DUT localhost], check all ingress traffic.

Open a new console:
    cd /opt/intel/iotg_tsn_ref_sw/sample-app-taprio/
    ./tsn_perf.sh -i vlan3000@enp5s0 -a 1 -f "port 4840"

The expected output is as follows:
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on enp1s0, link-type EN10MB (Ethernet), capture size 64 bytes

The output indicates that the real-time packets are being captured.
Press Ctrl-C. Check the tcpdump output.
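If the iotg_tsn_ref_sw helper is not installed, a plain tcpdump capture gives similar visibility. This is a sketch using the interface, VLAN ID, and UDP port from the configuration above; capturing on the parent interface with a vlan filter and the -e flag prints the link-level header so the 802.1Q tag and PCP of each frame can be inspected:

    # tcpdump -i enp5s0 -e -vv "vlan 3000 and port 4840"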
Sanity-Check #2: TSN Endpoint P2P benchmark test egress and ingress traffic¶
This is a peer-to-peer test that runs the interopapp test methodology in a two-node configuration, with the nodes running the talker and the listener, respectively. In addition, the listener node runs the collector application.
Note
Time synchronization and setting the socket-level SO_TXTIME packet LaunchTime are disabled:
# systemctl stop ptp4l@enp5s0.service
# systemctl stop phc2sys@enp5s0.service
On the [DUT Node #1] console terminal, start the node statistics collector application:
    # cd /opt/intel/interopapp
    # python3 collector.py
Edit /opt/intel/interopapp/listener.ini to match the vlan3000@enp5s0 TSN Endpoint configuration before starting the listener application runtime from the [DUT Node #1] console terminal:

    # cd /opt/intel/interopapp
    # ./listener -c listener.ini
Edit /opt/intel/interopapp/talker.ini to match the vlan3000@enp5s0 TSN Endpoint configuration before starting the talker application runtime from the [DUT Node #2] console terminal:

    # cd /opt/intel/interopapp
    # ./talker -c ./talker.ini
    Configuration File:
    Global:
        test: true
        version: 0.4.1
        debug: 4
    Talker:
        id: 1
        device: APL-I
        vendor: Intel
        interface: vlan3000
        txBackend: etf
        etfDelta: 150000
        timestampMode: hardware
        ptp4lUdsPath: /var/run/ptp4l
        ptp4lLocalPath: /var/run/test
        debug: 6
        priority: 5
        interval: 20000000
        txOffset: 500
        delta: 600000
    ...
    PTPUdsRecv: Failed to receive message from UDS socket : Resource temporarily unavailable
    ReceiveResponses: Failed to receive GET request response : Resource temporarily unavailable
    PtpUpdateStatus: Failed to receive responses to GET requests : Resource temporarily unavailable
    Successfully sent 174 bytes
    TX Timestamp : 1572201806840129011
    Successfully sent 174 bytes
    TX Timestamp : 1572201806859525859
    Successfully sent 174 bytes
    TX Timestamp : 1572201806860026427

The ptp4l GET request failures are expected here because the time-synchronization services were stopped above.
On [DUT Node #1], press Ctrl-C and check the collector output.