Manage Diverse Data Flows in a High-Performance, Safety-Critical Environment

Introduction

RTI Connext® provides core connectivity for autonomous applications in many industries, including autonomous vehicles. Connext has its roots in autonomous robotics and is widely adopted in the transportation, aviation and medical industries for mission-critical and safety-critical systems. It has an innate ability to address the fundamental requirements of real-time systems, such as reliability, safety and performance.

Autonomous vehicles are among the most demanding applications of autonomy. They rely on many data sources with diverse Quality of Service (QoS) requirements, and they incorporate many applications with different levels of safety criticality. This is where the Connext Databus can dramatically simplify system architecture, integration and software development: it provides comprehensive control over real-time QoS, content filters, and resilience that can be tuned for each data flow and application.

This use case applies to the development of applications for autonomous vehicles and Advanced Driver Assistance Systems (ADAS), including:

  • Light Detection and Ranging (LiDAR) point cloud
  • Vision processing
  • Sensor fusion
  • Fault tolerance
  • Path planning
  • Vehicle control
  • Warning systems
  • Logging systems

The following coding examples show how autonomous vehicle developers can use Connext to leverage RTI’s extensive experience with autonomous robotics, safety-critical systems and state-of-the-art architectures to simplify development, design and integration. The following figure shows an example of an autonomous car system architecture.

[Figure: RTI Connext Databus architecture in automotive applications]

What This Example Does

This example involves a simple collision avoidance system with input from different sensors. Warnings are displayed in an HMI (Human Machine Interface) application and sent to the vehicle control platform. The example shows sensors publishing data at varying rates and sizes. The different applications are connected through the RTI Connext Databus, as shown in the following figure.

[Figure: Sensors and applications connected through the RTI Connext Databus]

The figure shows six modules that comprise a Level 3 autonomous system: sensor inputs feeding a sensor fusion module, a collision avoidance and detection system, an HMI, and an interface to a simulated CAN bus vehicle platform. The data sources include vision, LiDAR and the CAN bus.

LiDAR (Lidar)

  • Publishes LiDAR data
  • An example of what we refer to as “large data”; data volumes and rates are moderately high
  • Generates periodic random data for simulation purposes, or will render the Circle shapes published by RTI Shapes Demo as spheres in the 3D LiDAR space
  • The sample size is user-configurable and defaults to 180 KiB

Vision Sensor (VisionSensor)

  • Publishes vision sensor data
  • Sends a smaller amount of data (a few hundred bytes) at a lower rate than LiDAR
  • Sensor data is read from a CSV file and can be modified to publish different data; the filename is specified in the properties configuration for the vision sensor
  • An example of a generic sensor; multiple instances of the vision sensor can be run and each sensor has its own unique identifier

Vehicle Platform (vehiclePlatform)

  • Publishes information about the vehicle platform to the collision avoidance system
  • Subscribes to control commands
  • Example implementation copies control data into the status data
  • Additional elements that can’t be derived from the control topic are read from a CSV file; the filename is specified in the properties configuration for the vehicle platform

Sensor Fusion Module (sensorFusion)

  • Subscribes to LiDAR and vision sensor data
  • Publishes aggregated vision sensor information to the collision avoidance system for recognizing potential collisions
  • Displays the timestamp of received LiDAR data but does not otherwise process it
  • Can receive data from one or more vision sensors; the more sensors that are connected, the larger the sensor object list will be (up to 128 objects, over 5 KB in size)

Collision Avoidance System (collisionAvoidance)

  • Subscribes to platform status and sensor data
  • Publishes an alert if a collision is detected
  • Minimalistic to demonstrate the data flow; does not implement an actual collision avoidance algorithm
  • Copies some elements of the first object into the platform status so that the platform status has changing values
  • Generates an alert if the size field of the first sensor object has a value greater than 2; the pre-canned data triggers the alert about once every 200 samples
  • Ignores platform status information

HMI (hmi)

  • Subscribes to alerts from the collision avoidance system
  • Pops up an alert box if one of the alert flags is set

Rear-view Camera (CameraImageData)

  • Provides large data samples for full-frame camera images
  • Data rate and sample size are adjustable; the transfer performance of each sample is measured
  • Includes options for the new “Flat Data” and “Zero Copy” features of Connext 6, which greatly improve performance when transferring large data samples.

Running the Example

Download the Example

Download the example files [ Linux | Windows ] and extract them into a location of your choice. We'll refer to this location in this document as EXAMPLE_HOME.

To view and download the example source code without pre-built executables, visit the RTI Community DDS Use Cases repository in GitHub.

This download includes:

  • Pre-built example applications you can configure and run without rebuilding
  • Example source and project files for Windows and Linux

Download RTI Connext Professional

If you do not already have RTI Connext Professional installed, download and install it now. You can use a 30-day trial license to try out the product. Your download will include the libraries that are required to run the example, and tools you can use to visualize and debug your distributed system. For video tutorials that will walk you through these processes, visit http://www.rti.com/gettingstarted.

Run the Example

The example requires SDL2 to be installed on Linux systems. You can install the required library as follows:

sudo apt-get install libsdl2-dev

On Windows systems, navigate to the "EXAMPLE_HOME/ExampleCode/scripts" directory. In this directory there is a batch file, "launch.bat", which will start 8 windows, one for each application.

On Linux systems, navigate to the "EXAMPLE_HOME/ExampleCode/scripts" directory. In this directory there is a script file, launch.sh, which will launch 8 terminals, one for each application.

You can modify the script to run just a subset of the applications, for example if you want to split the example across multiple machines. To run on multiple machines, copy this example to each machine and modify the script accordingly. If you run the applications on the same machine, they will communicate over the shared memory transport. If you run them on multiple machines, they will communicate over UDP.

Notice that the vehicle platform application displays the received platform commands, and the sensor fusion application prints a message for each received LiDAR sample.

If you have access to multiple machines on the same network, start running these applications on separate machines. Note: The applications will use multicast for discovery. See below if your system does not have multicast.

3-D Visualization

If you have an available instance of the ‘RViz2’ 3D visualizer (included with a ROS2 distribution), it can visualize the LiDAR point cloud data of this example. Furthermore, if an instance of RTI Shapes Demo is launched in the same DDS domain as this example and set to publish “Circle” data, these shapes will be rendered in the LiDAR point cloud as spheres, viewed from a central point of view, or from the position of a Yellow Circle if one is published.

Run Multiple Vision Sensors

The vision sensor application acts as a unique sensor that publishes information. As we will discuss later in the Data Model Considerations section, each vision sensor application has a unique ID. So, if you run more than one vision sensor application, run each one from a different directory with a different vision.properties file. The vision.properties file has an entry for the sensor ID. Look for the following in the properties file:

config.sensorId=1

Run the Example with No Multicast

If your network doesn't support multicast, you can run this example using only unicast data. Edit the user_qos_profile.xml file to add the addresses of the machines that you want to contact. These addresses can be valid UDPv4 or UDPv6 addresses.

<discovery>
    <initial_peers>
        <!-- !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -->
        <!-- Insert addresses here of machines you want    -->
        <!-- to contact                                    -->
        <!-- !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -->
        <element>127.0.0.1</element>
        <!-- <element>192.168.1.2</element> -->
    </initial_peers>
</discovery>

Building the Example

Directory Overview

EXAMPLE_HOME
|-- Docs
`-- ExampleCode
   |-- make
   |-- win32
   |-- scripts
   |-- resources
   `-- src
       |-- Common
       |-- idl
       |-- Collision_Avoidance
       |-- HMI
       |-- Lidar
       |-- Sensor_Fusion
       |-- Vehicle_Platform
       |-- Vision
        `-- Generated

The source code is divided into:

  • make - Linux makefiles
  • win32 - Windows project and solution files
  • scripts - Scripts to run applications
  • resources - QoS configuration and properties files
  • src - Source code
    • Common - Common code to process data and config files
    • Generated - Type-support code generated from the IDL file
    • idl - Describes the data types that are sent over the network
    • Other directories - Source code for the individual applications

Build the Example

On all platforms, the first thing you must do is set an environment variable called NDDSHOME. This environment variable must point to the RTI Connext installation directory. For more information on how to set an environment variable, please see the RTI Core Libraries and Utilities getting started guide.

Windows Systems

On a Windows system, start by opening up the vs2015.sln file using Visual Studio.

This code is made up of a combination of libraries, source, and IDL files that represent the interface to the application. The Visual Studio solution files are set up to automatically generate the necessary code and link against the required libraries.

Linux Systems

To build the applications on a Linux system run:

make -f make/Makefile.x64Linux3gcc5.4.0

The platform in the Makefile name is the combination of your processor, OS, and compiler version. Please review the selection of Makefiles in the make directory to find the closest match to your system.

Under the Hood

Data Model Considerations

When modeling data in DDS, one of the biggest considerations is, "what represents a single element or real-world object within my data streams?" In the case of sensor data, you can generally assume that each sensor should be represented as a unique object. In this example, we model our data types separately for each topic. The idl directory has an "automotive.idl" file, which contains definitions for all topics used in this example. RTI Connext uses IDL, the Interface Definition Language defined by the OMG, to define language-independent data types. More information about IDL can be found in the RTI Connext Core Libraries User’s Manual.

In DDS, these unique real-world objects are modeled as instances. Instances are described by a set of unique identifiers called keys, which are denoted by the //@key symbol.

So, in our example, we use a sensorId as a key field:

long sensorId; //@key

The data types published by the sensor fusion and collision avoidance applications do not have a key field, since there is only one instance of each application.
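
For illustration, a keyed vision sensor type might look like the following sketch. This is not the exact definition from automotive.idl; the field names are simplified, but the content matches the description of the vision sensor data later in this document.

    // Illustrative sketch only; see automotive.idl for the actual types.
    // Each vision sensor application publishes with its own sensorId,
    // so DDS treats each sensor's samples as a separate instance.
    struct VisionSensorData {
        long sensorId; //@key
        long long timestamp;
        sequence<VisionObject, 10> objects;
    };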

QoS Considerations

The LiDAR samples are large and therefore the publisher has been configured as an asynchronous publisher. More about asynchronous publishers can be found in the RTI Connext Core Libraries User Manual.
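
A minimal sketch of the QoS XML that enables asynchronous publishing (the example's actual settings live in its QoS profile files):

<datawriter_qos>
    <publish_mode>
        <kind>ASYNCHRONOUS_PUBLISH_MODE_QOS</kind>
    </publish_mode>
</datawriter_qos>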

The CameraImageData samples are larger than the LiDAR samples, with a default size of 4 MiB (user-configurable to be much larger). CameraImageData also uses an asynchronous publisher, but includes build-time options for the new “Flat Data” and/or “Zero Copy” features of RTI Connext 6. Flat Data improves performance by avoiding some data-copy operations during the transit from publisher to subscriber, enabled by the newer XCDR2 encoding of the data samples. Zero Copy is available for the shared memory transport only, but offers a dramatic performance increase by passing pointers to the large data samples in memory, instead of passing the data samples themselves. More information about Flat Data and Zero Copy can be found in the RTI Connext Core Libraries User’s Manual.
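
In Connext 6, these features are enabled through IDL annotations. The following is an illustrative sketch; the example's actual camera type and build options are defined in its source and IDL files:

    // Illustrative only; not the example's actual type definition.
    @language_binding(FLAT_DATA)  // Flat Data: in-memory layout matches the XCDR2 wire format
    @transfer_mode(SHMEM_REF)     // Zero Copy: samples are passed by shared-memory reference
    struct CameraImage {
        long long timestamp;
        octet data[4194304];      // fixed-size 4 MiB payload (a build-time choice)
    };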

Sensor data is expected to arrive periodically, so the QoS settings specify a deadline. The only exception is the driver alerts, which have no deadline set, since alerts do not happen periodically. For a writer and reader to match, the writer's offered deadline period must be less than or equal to the reader's requested deadline period. In the QoS file the deadline for the writers is set to 10 seconds and the deadline for the readers is set to 11 seconds. This is much longer than what you would expect in real life, but you may want to run the example at a slower speed to see the output, and the long deadline avoids a burst of deadline-missed messages when running slowly. You can learn more about deadlines in the RTI Connext Core Libraries User’s Manual.
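
A sketch of such a deadline configuration in XML (values as described above):

<datawriter_qos>
    <deadline>
        <period><sec>10</sec><nanosec>0</nanosec></period>
    </deadline>
</datawriter_qos>
<datareader_qos>
    <deadline>
        <period><sec>11</sec><nanosec>0</nanosec></period>
    </deadline>
</datareader_qos>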

The alert topic has transient local durability set. This means that when an HMI or other subscriber starts, it will get the last published alert status. The history depth is set to one, which means that the subscriber will get a single alert status when starting. The liveliness lease duration has been configured to 1 minute, so that the HMI or any other subscriber gets a notification if the writer goes away. Otherwise, without a periodic ping, the reader would not know whether the writer is still alive. You can learn more about liveliness in the RTI Connext Core Libraries User’s Manual.
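
In a QoS profile, these settings might look like the following sketch (illustrative; see the example's QoS file for the actual profile):

<datawriter_qos>
    <durability>
        <kind>TRANSIENT_LOCAL_DURABILITY_QOS</kind>
    </durability>
    <history>
        <kind>KEEP_LAST_HISTORY_QOS</kind>
        <depth>1</depth>
    </history>
    <liveliness>
        <lease_duration><sec>60</sec><nanosec>0</nanosec></lease_duration>
    </liveliness>
</datawriter_qos>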

Configuration Details

Each application has a properties file with some basic configuration. Here is an example of a properties file:

dataFile=vision.csv
topic.Sensor=VisionTopic
qos.Library=BuiltinQosLibExp
qos.Profile=Generic.BestEffort
config.sensorId=1
config.domainId=0
config.pubInterval=2000

Not all applications have all properties. The dataFile property specifies the file with the input data for a sensor. The application reads the values from this file and uses them to populate the sensor information. At the end of the file it wraps around and restarts with the first data line.

The topic configuration defines the topic name to be used. The subscriber and publisher must have a matching topic name.

The QoS settings define the QoS library and profile to be used. You can create your own USER_QOS_PROFILES.xml file in the directory and list the profile to be used in the properties file.

The domainId property is the DDS domain on which the application runs.

The pubInterval property is the time between samples, in milliseconds; it is the delay between sensor updates.

Lidar (C++)

The LiDAR application simulates data coming from a LiDAR sensor. The data type has a sequence of points and is compatible with the ROS2 “PointCloud2” data type. Each point consists of 4 float32 values (X, Y, Z, Color), and the sequence of points is from a circular-scanned perspective in polar and azimuth steps. The number and range of steps are configurable in the “lidar.properties” file, with a default of (180 * 64 steps = 11520 points) * (4 float32 values) = 184 kB per sample. The data is pseudorandom within an [XYZ] range of [+/-5, +/-5, 0-10], unless an instance of RTI Shapes Demo is running in the same DDS domain as the LiDAR application. In that case, the LiDAR application subscribes to the “Circle” topic data published by Shapes Demo and renders the circles as spheres in the 3D LiDAR point space. The LiDAR data can be visualized using the RViz2 visualizer included with ROS2.

struct PointCloud2_ {
    std_msgs::msg::dds_::Header_ header_;
    unsigned long height_;
    unsigned long width_;
    sequence<PointField_, 4> fields_;
    boolean is_bigendian_;
    unsigned long point_step_;
    unsigned long row_step_;
    sequence<octet, 368640> data_;
    boolean is_dense_;
};

Vision Sensor (C++)

The vision sensor sends a sequence of objects, a sensor ID and a timestamp. The object is defined as:

    struct VisionObject {
        classificationEnum classification;
        float position[3];
        float velocity[3];
        float size[3];
    };

The data for the objects is read from a comma-separated values (CSV) file. The file wraps around: once the last line of data is read, it starts from the beginning again. The sensor ID is set in the properties file. It is possible to run multiple instances of the sensor, each with a different ID. Since the ID is the key field, samples from different sensors are treated as different instances. The sensor can send a maximum of 10 objects; the actual number depends on how many objects are listed in the CSV file. The last column holds the number of objects to be sent. The first line of the file is a column heading and is ignored.

Vehicle Platform (C++)

The vehicle platform receives platform commands and responds with a platform status. Elements that appear in both the status and command types are copied from the command into the status. Additional status information is read from a CSV file. The platform application displays the received command.

Sensor Fusion (C++)

The sensor fusion application subscribes to the LiDAR and vision sensor topics. Each time a LiDAR sample is received, its timestamp is displayed; otherwise the LiDAR data is ignored. The vision sensor data is aggregated: each time the sensor fusion application polls the vision sensor queue, it reads values from all sensors and copies the elements from each sensor into the outgoing sensor topic, as sketched below. Aggregation can be achieved by running multiple vision sensors or by running one vision sensor at a shorter publication interval than the sensor fusion poll period. The outgoing topic can have a maximum of 128 objects.
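
The following sketch illustrates this polling pattern using the classic C++ API. The type and variable names (VisionSensorDataSeq, vision_reader, fused_sample, fused_writer) are hypothetical, not the example's actual code:

    // Drain everything currently in the vision sensor queue, across all instances
    VisionSensorDataSeq samples;
    DDS_SampleInfoSeq infos;
    DDS_ReturnCode_t retcode = vision_reader->take(
        samples, infos, DDS_LENGTH_UNLIMITED,
        DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);

    if (retcode == DDS_RETCODE_OK) {
        int count = 0;
        for (int i = 0; i < samples.length(); ++i) {
            if (!infos[i].valid_data) continue;
            // Copy this sensor's objects into the outgoing sample (max 128)
            for (int j = 0; j < samples[i].objects.length() && count < 128; ++j) {
                fused_sample.objects.ensure_length(count + 1, 128);
                fused_sample.objects[count++] = samples[i].objects[j];
            }
        }
        vision_reader->return_loan(samples, infos);
        fused_writer->write(fused_sample, DDS_HANDLE_NIL);
    }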

Collision Avoidance (C++)

The collision avoidance application takes the input from the sensor fusion and platform applications, and generates the platform command. If a warning is detected, it sends an alert to the HMI. The application does not have a real collision avoidance algorithm implemented; it is minimal in order to demonstrate the data flow, and any improvements on the handling are welcome. If the received sensor information has at least one object and the size[1] field is greater than 2.0, a driver alert is sent to the HMI. With the default data, this happens once a cycle, about every 200 samples. The alert message has the following fields:

    struct DriverAlerts {
        boolean BlindSpotDriver;
        boolean BlindSpotPassenger;
        boolean FrontCollision;
        boolean BackCollision;
        boolean ParkingCollision;
        boolean DriverAttention;
    };

The only flag the collision avoidance application sets is the driver attention flag; the other flags are always set to zero. If present, the first object is also used to set fields in the platform control topic. The assignment is arbitrary and exists only so that the values change:

control_instance->blinker_status = IndicatorStatus(sensor_data_seq[i].objects[0].classification % 4);
control_instance->speed = sensor_data_seq[i].objects[0].velocity[0];
control_instance->vehicle_steer_angle = sensor_data_seq[i].objects[0].position[0];

HMI (C++)

The HMI pops up an alert box with text that depends on which alert flag is set. The HMI uses a WaitSet to receive samples, because it can take a while until the user acknowledges the message box; using a listener would block the middleware's receive thread while the message box is displayed. A sketch of this pattern follows.
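
A minimal sketch of this WaitSet pattern using the classic C++ API (the reader name and loop structure are illustrative, not the example's actual code):

    // Wake up when alert data arrives, instead of processing it in a listener
    DDSWaitSet waitset;
    DDSStatusCondition *cond = alert_reader->get_statuscondition();
    cond->set_enabled_statuses(DDS_DATA_AVAILABLE_STATUS);
    waitset.attach_condition(cond);

    DDSConditionSeq active;
    DDS_Duration_t timeout = {1, 0};  // 1 second, so the loop can check for shutdown
    while (running) {
        DDS_ReturnCode_t retcode = waitset.wait(active, timeout);
        if (retcode == DDS_RETCODE_TIMEOUT) continue;
        // take() the alert samples here and pop up the message box; blocking
        // in this application thread does not stall the middleware's receive
        // threads, unlike blocking inside a listener callback
    }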

Rear-view Camera (C++)

The rear-view camera is implemented as a pair of applications (publisher and subscriber). The publisher fills a configurable-size array with pseudorandom data, which is then timestamped and published. On reception at the subscriber, another timestamp is taken and the elapsed time is calculated to measure performance, and the array contents are verified. Note that this timing measurement will be accurate only on systems where the publisher and subscriber both use the same clock, or highly synchronized separate clocks. The publication rate is settable at program startup by modifying the “config.pubInterval” value in the “camera_image.properties” file. The size of the array, and the use of the Flat Data and/or Zero Copy enhancements, are build-time options that are set in the source and IDL files of the application. The applications must be rebuilt to use these enhancements.
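
A hedged sketch of this measurement (the timestamp field and helper function are illustrative, not the example's actual code):

    // Meaningful only when the publisher and subscriber clocks are synchronized.
    #include <chrono>

    long long now_nanosec() {
        return std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
    }

    // Publisher, just before writing:  image_sample.timestamp = now_nanosec();
    // Subscriber, on reception:
    //     double latency_ms = (now_nanosec() - image_sample.timestamp) / 1.0e6;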

Next Steps

Join the Community

Post questions on the RTI Community Forum.

Contribute to our Case + Code examples on RTI Community GitHub using these instructions.