DDS and TSN: The Future for Real-Time Data Exchange?
Written by Gerardo Pardo-Castellote
January 7, 2020
Recently, one of my colleagues took part in a symposium at Johannes Kepler University in Linz, Austria, and an interesting thing happened. Even though Time-Sensitive Networking (TSN) is a relatively new standard, he was surprised to be inundated with questions about how TSN and the Data Distribution Service™ (DDS) can work together. This tells us something: many system architects already seem convinced that TSN is going to play an increasingly significant role in future system designs.
TSN is a set of IEEE standards that aim to provide time-sensitive data transmission over Ethernet networks. TSN fits well with the communication requirements of real-time systems that need to send periodic information in a deterministic manner.
However, TSN can potentially do even more. The reason is simple: TSN can provide the basis for real-time communication for an entirely new range of applications that could not run on Ethernet before. And so, a new universe of efficiency, bandwidth and performance improvements becomes available to those applications.
But of course, nothing operates in a vacuum: TSN is a low-level network technology (Layers 1 and 2) that needs to be combined with higher-level network and connectivity technologies (Layers 3-6) to unlock its full potential. And in my opinion, DDS is the best way to make that happen. For example, with DDS sitting above TSN in the stack, applications can suddenly get real-time Quality of Service (QoS) simply by configuring application-level QoS policies and using DDS for publish-subscribe communications. This brings determinism directly to Edge Autonomy applications, without the need to configure network switches or do custom programming.
Best for Real-Time
So why are DDS and TSN a natural fit? The key reason is that when combined, they enable integration of real-time application components using commercially-available and standard technologies. This is a distinct advantage, as custom-built or dedicated technologies tend to be expensive to build and hard to maintain. As technology evolves, custom technologies are not able to keep up and end up becoming silos, unable to integrate with new systems and technologies. The evolution of networking technology provides an example: In the early days, there were hundreds of networking technologies that did not interoperate with each other. Over time, all of those technologies became consolidated into various Ethernet standards (for Layers 1 and 2) and TCP/IP (for Layers 3 and 4). Other technologies became isolated islands that could not benefit from the technical advances of the Internet.
Today, Internet applications are integrated using higher-level (Layer 5 and 6) middleware and “connectivity framework” technologies such as MQTT, DDS, OPC UA and HTTP/REST. These middleware technologies isolate the applications from the details of networking and make it possible to build distributed systems that are robust and can evolve over time. They facilitate the creation and sharing of data models, and provide support for communication patterns such as publish-subscribe and remote service invocation that ease application development.
However, many hard real-time systems have not been able to take advantage of these advances, because the middleware and networking technologies could not provide the level of performance and/or determinism (e.g., bounded latency and jitter) that the application needed. Therefore, they have been forced to use dedicated solutions, including specialized “Industrial Protocols” and custom networking hardware.
The combination of DDS and TSN could change this picture. To understand this, let’s dissect what each technology provides and how they can work together.
TSN provides a great technology to allow real-time traffic to be delivered over Ethernet. It allows the timing requirements of each flow to be defined and configures the network paths (including switches) to ensure those requirements are met. It also provides isolation for different flows, so real-time traffic is not perturbed by other communications occurring on the same network. However, because the technology sits at a low level in the configuration stack, applications must configure flows, packet sizes, frequencies, priorities, network endpoints and so on. While this can be done for simple applications with a few nodes and flows, it becomes intractable for more complex systems.
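To make the configuration burden concrete, here is a minimal sketch in plain Python (a toy model with made-up flow parameters; real TSN deployments configure per-switch gate control lists under IEEE 802.1Qbv, not anything like this) of the kind of bookkeeping involved: checking whether a set of periodic flows fits within a shared transmission cycle.

```python
# Toy model: each periodic flow reserves wire time inside a repeating
# cycle. Illustration of the scheduling bookkeeping only -- real TSN
# uses gate control lists and stream reservation per switch.
from dataclasses import dataclass

LINK_BITS_PER_US = 1000  # assume a 1 Gbit/s link: 1000 bits per microsecond

@dataclass
class Flow:
    name: str
    frame_bits: int    # frame size in bits
    period_us: int     # transmission period in microseconds

def transmission_us(flow: Flow) -> float:
    """Wire time needed to send one frame of this flow."""
    return flow.frame_bits / LINK_BITS_PER_US

def fits_in_cycle(flows: list[Flow], cycle_us: int) -> bool:
    """Every flow's period must divide the cycle, and the total reserved
    time per cycle must not exceed the cycle length."""
    busy_us = 0.0
    for f in flows:
        if cycle_us % f.period_us != 0:
            return False  # schedule would drift across cycles
        frames_per_cycle = cycle_us // f.period_us
        busy_us += frames_per_cycle * transmission_us(f)
    return busy_us <= cycle_us

flows = [
    Flow("motor_control", frame_bits=1_000, period_us=250),
    Flow("sensor_fusion", frame_bits=12_000, period_us=1_000),
]
print(fits_in_cycle(flows, cycle_us=1_000))  # True: ample slack at 1 Gbit/s
```

Even this toy version shows why per-flow, per-link configuration does not scale by hand: every new flow forces the whole schedule to be re-validated.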
DDS provides a great technology to integrate applications built from separate components. It sits closer to the application. It provides a higher-level interface in terms of Topics, application data-types, application-relevant QoS (e.g., reliability, durability, priority, deadlines) and takes care of the lower-level details like discovering the endpoints and setting up the communication paths. However, while DDS tries to do its best by using efficient binary protocols, it cannot guarantee deterministic behavior because it does not control the lower-level network layers: It has to live with what the underlying network (e.g., UDP/IP) can provide.
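The division of labor can be illustrated with a toy in-process “databus” in plain Python (the names below are illustrative only and do not follow the real DDS API): publishers and subscribers rendezvous on a named Topic, and the middleware, not the application, routes the samples.

```python
# Toy publish-subscribe "databus": applications interact only through
# named Topics; the bus handles delivery. Illustration only -- this is
# not the DDS API, and real DDS adds discovery, typed data and QoS.
from collections import defaultdict
from typing import Any, Callable

class DataBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register interest in a Topic; matching is by Topic name."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: Any) -> None:
        """Deliver a sample to every subscriber of the Topic."""
        for callback in self._subscribers[topic]:
            callback(sample)

bus = DataBus()
received = []
bus.subscribe("VehiclePose", received.append)
bus.publish("VehiclePose", {"x": 1.0, "y": 2.0})
print(received)  # [{'x': 1.0, 'y': 2.0}]
```

Note that the publisher never names its subscribers: that decoupling is what lets components be added or replaced without re-wiring the rest of the system.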
Combining DDS and TSN provides the best of both worlds.
Enforcing Quality of Service
QoS is a crucial concept for real-time systems. Combining DDS and TSN makes it possible to both specify and enforce application-relevant QoS in distributed systems. This enables system administrators to specify things like which flows need to be reliable, latency budgets, deadlines, durability requirements, data lifespan, etc. DDS can use this information to automatically configure the TSN network so that the QoS can be guaranteed.
Combining DDS with TSN also saves time through the QoS capabilities in DDS. The application doesn't have to do much beyond configuring its application-relevant DDS QoS. The tedious details of configuring the lower-level TSN concepts are automated, so application integration and deployment are greatly simplified.
The original driver for TSN was the audio and video community, as those systems send high-bandwidth signals that need to be precisely coordinated. For example, the signals sent to multiple speakers need to be precisely synchronized to achieve proper sound perception for the listener. As TSN evolved, many other industries realized they faced similar challenges. Multi-axis control machines, robots, 3D printing and automotive are good examples, as they involve coordinated action occurring at multiple endpoints.
So, rather than using multiple networks and multiple wires, all traffic can share a single real-time TSN. This provides significant savings in terms of complexity and hardware. It also future-proofs the design: new components can be added by simply plugging them into the common TSN network.
DDS solves that issue as well at the application layer. Application processes and components can also be “plugged into” the common DDS databus. In addition to improved interoperability, bandwidth and performance for distributed systems, you won’t be stuck with a system architecture that looks like a 1950s telephone switchboard.
There is a clear need for what DDS and TSN can accomplish together. Currently, there is even a proposal for the Object Management Group (OMG) to develop a new standard for DDS with TSN. This is work that RTI is involved in, and it is expected to become an OMG standard in 2020.
The ability to use standard technology and middleware to integrate hard real-time applications could be revolutionary. It would enable common tool chains to be developed across an ecosystem of commercial technology providers. This would lower the cost and significantly simplify the responsibilities of application developers and system integrators. Moreover, it would ensure that the resulting systems do not become siloed and retain the ability to easily incorporate future technology benefits. To learn more about this topic, I encourage you to read this blog by RTI’s Rajive Joshi.
About the author:
Gerardo Pardo-Castellote is the Chief Technology Officer at Real-Time Innovations. He is responsible for RTI's technical direction, standardization efforts and product architecture. He received a Master's degree in Computer Science and a Ph.D. in Electrical Engineering, both from Stanford University.