Connext 6 includes new features such as Flat Data and Zero Copy designed to make your distributed systems perform better than ever. Learn what these features are, how they work and why you should use them to optimize your Industrial IoT system.

In Episode 36 of The Connext Podcast:

  • [0:50] What is Flat Data
  • [1:53] How does Flat Data benefit our customers 
  • [4:18] How to use Flat Data in an application
  • [6:35] What is Zero Copy over Shared Memory
  • [8:33] How Flat Data & Zero Copy complement each other
  • [9:48] Example of how developers will utilize Zero Copy

Podcast Transcription:

Steven Onzo: Hello everybody and welcome to another episode of The Connext Podcast. In our previous episode, we announced the release of Connext 6 and its highlights. However, we had only scratched the surface. So in this episode and the ones to follow, we'll take the time to break down the newest features that users have requested for RTI Connext DDS.

In this episode we'll go over two features: Flat Data and Zero Copy over Shared Memory. My first guest is a principal software engineer. He's part of the core engineering team and had a big role in developing features like Flat Data, which is what we'll be talking about first. I'd like to welcome to The Connext Podcast, Alex Campos.

[0:50] What is Flat Data?

Well Alex, thanks for joining the podcast. So I thought we'd jump right in. First of all, what is Flat Data?

Alex Campos: Hi Steven, thank you for having me. So Flat Data is a brand new feature we developed over the last few months and released with Connext DDS 6. It's a feature we designed to reduce the latency of your DDS applications when you send very large data. Flat Data is a new language binding for your IDL types, and when you use Flat Data for a type in your application, the data for that type is always going to be in serialized format. That makes operations that write and read the data much faster when the data is very large, because it doesn't have to be serialized and deserialized.

By reducing the number of times that data is copied through the path from the time you write it in your application, to the time you receive it, we can reduce the latency quite a bit.

[1:53] How does Flat Data benefit our customers?

Steven Onzo: So why do our customers care about a feature like this? What can it help them do that they couldn't do before?

Alex Campos: Right, so Flat Data enables the DDS applications that really need to achieve very low latency and were not able to do it before. You see, in a distributed system there are three main sources of latency. First there is network latency, the time the data takes to be sent over the network. Then there is middleware latency, the overhead of the bookkeeping that your middleware has to do to keep its internal state; this one isn't really relevant for scenarios with very large data because it's comparatively very small. And then there is what we could call copy latency. This is the time it takes to copy the data from user space to serialize it and put it on the wire, then to receive it from the wire, deserialize it and put it back in user space. Those copies can be really heavy when the data is very large.

So now you can go and really get the best possible network and minimize the network latency as much as you possibly can, but for many applications that wasn't enough. They got stuck at a certain point and weren't able to shave off any more milliseconds; there was nothing they could do.

Steven Onzo: Right, can you give us an example of that?

Alex Campos: In autonomous driving, for example, many sensors send raw images or LiDAR readings to the car component that makes decisions. In many cases those applications couldn't use DDS across the whole application, so they had to resort to point-to-point communication directly over sockets. That wasn't really convenient: they lost all the benefits of DDS just because they needed that super fast way to transmit large data.

So it just wasn't possible to send that data over DDS and achieve the latency they wanted. By using Flat Data and removing those unnecessary copies of the images, it's now possible for them to use DDS across their whole system.

[4:18] How to use Flat Data in an application

Steven Onzo: Can you tell us how you use Flat Data in an application?

Alex Campos: Yeah, as I said, it's a new language binding. So you start when you define your IDL types: you add a new annotation to your types. This annotation tells the code generator that your type is now going to be mapped to the Flat Data language binding. The code generator will generate a C++ class that, instead of giving you direct access to the fields, gives you an API to set and get each member. Let's say you have an image and you have an array. Before, you would set the fields by just using a typical struct in C++; now you have an API to set all those members, and every time you set them they automatically get serialized, so the sample stays in serialized format at all times. So basically, if you want to start using Flat Data in an application that already exists, you will have to change your IDL definition to add this new annotation, and then the parts of your application where you manipulate your type, your buffers, your image, are going to change a bit.

Like I said, instead of accessing the fields of your type like a regular C++ type, you'll have to use a new API to access that data because it's always going to be in a serialized buffer.
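To make that concrete, here's a rough sketch of what the publishing side can look like with the Modern C++ API. The IDL type, its members, and the generated builder calls (build_data, add_..., build_..., finish_sample) follow the pattern Alex describes, but the exact names come from rtiddsgen and your own IDL, so treat them as illustrative assumptions rather than copy-paste code.

    // Assumed IDL (illustrative): a mutable type opted into the Flat Data binding.
    //
    //   @language_binding(FLAT_DATA)
    //   @mutable
    //   struct CameraImage {
    //       int64 timestamp;
    //       sequence<octet, 4194304> pixels;   // up to 4 MB of pixel data
    //   };
    //
    // With Flat Data, the sample is assembled through a generated Builder
    // instead of a plain struct, so every member is written straight into
    // the serialized buffer.

    #include <cstdint>
    #include <dds/dds.hpp>
    #include "CameraImage.hpp"   // rtiddsgen output for the IDL above (assumed name)

    void publish_image(dds::pub::DataWriter<CameraImage>& writer,
                       const uint8_t* raw_pixels, unsigned int pixel_count)
    {
        // The builder writes into a buffer managed by the DataWriter.
        CameraImageBuilder builder = rti::flat::build_data(writer);

        builder.add_timestamp(1234);            // fixed-size member
        auto pixels = builder.build_pixels();   // variable-size member
        pixels.add_n(raw_pixels, pixel_count);
        pixels.finish();

        // The finished sample is already in serialized form, so write()
        // does not need to serialize it again.
        CameraImage* sample = builder.finish_sample();
        writer.write(*sample);
    }

The same idea applies on the subscribing side: instead of receiving a deserialized struct, the application reads members directly out of the received buffer through generated accessors.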

Steven Onzo: I think that wraps up the conversation. Those were the main highlights we wanted to ask you about this new feature. I want to thank you for coming on the podcast and shedding some light on Flat Data.

Alex Campos: Of course, thank you.

Steven Onzo: Thank you.

Hello everybody, and welcome back to the second segment of this episode. We just spoke with Alex Campos about Flat Data and how it's used. Now we're going to dive into another important feature, and this is Zero Copy over Shared Memory. My next guest is also part of the core engineering team and played a big role in developing features for Connext 6, including Zero Copy. I'd like to welcome to The Connext Podcast senior software engineer Harish Umayi.

[6:35] What is Zero Copy over Shared Memory?

Alright well welcome to the podcast, Harish. I want to just jump right in and start with the basics. What is Zero Copy over Shared Memory?

Harish Umayi: So the Zero Copy over Shared Memory feature is an enhancement to the shared memory transport, which we did in the Connext 6 release. It basically helps in reducing the latency in the send/receive process of a sample. This reduction in latency is because we eliminated the need to copy samples in the process.

Steven Onzo: How will Zero Copy help RTI customers? What can they do that wasn't possible before?

Harish Umayi: Previously, if they wanted to send samples over, there were copies involved in multiple steps. Initially the sample is serialized, which is one copy; then the serialized sample is copied into the transport, which contributes one more copy. Then it's received on the other end and deserialized, so there are two more copies there. So when the sample size is large, the latency due to these copies is large too.

With Zero Copy, what we send is not the sample but a small reference to it. This reference is only about 16 bytes. So all the copies I described before now apply to the reference and not the sample, which results in a reduction in latency because the reference is so small. Another point to remember is that an application built with the Zero Copy feature can communicate with an application that doesn't have it. If that second application is running on the same node, then we'll use the regular shared memory transport without the Zero Copy feature. If the second application is running on a different node, then we'll use the existing UDP transport. So this allows customers to integrate the Zero Copy feature into an existing system where they already have applications deployed that don't use it.

[8:33] How Flat Data & Zero Copy complement each other

Steven Onzo: A few minutes ago we spoke with Alex about Flat Data, which reduces latency in your DDS applications when you send very large data. Zero Copy seems to have a similar objective: reducing latency for sub-millisecond communications. Can you talk about how they complement each other?

Harish Umayi: Yes, both of these features help in reducing the latency. The benefit of the Zero Copy feature is visible when the two applications are running on the same node, because that's when you use the shared memory transport. But if the two applications are on two different nodes, then we cannot take advantage of the Zero Copy feature; we'll use the regular UDP transport.

But if the user uses both the Flat Data and the Zero Copy features in conjunction, then they also get the benefit of Flat Data when the two applications are running on two different nodes. That way they can use Zero Copy when the communication is within a node, and fall back to Flat Data when the communication is between two different nodes.

Further, if the Zero Copy types have strings or sequences, then there's also a need to use the Flat Data language binding, even for applications running on the same node.
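For readers curious what that combination looks like, here's a minimal sketch with assumed type and accessor names: the type carries both annotations, and the subscribing code is the same whether a sample arrived as a shared-memory reference (same node) or as a Flat Data sample over UDP (different nodes).

    // Illustrative IDL with both features enabled on one type:
    //
    //   @language_binding(FLAT_DATA)
    //   @transfer_mode(SHMEM_REF)
    //   @mutable
    //   struct CameraImage { ... };   // same members as in the earlier sketch
    //
    // Subscribing side: samples are taken as usual, and the generated root()
    // accessor gives a lightweight view into the (possibly shared) buffer
    // without deserializing it.

    #include <iostream>
    #include <dds/dds.hpp>
    #include "CameraImage.hpp"   // assumed rtiddsgen output

    void process_images(dds::sub::DataReader<CameraImage>& reader)
    {
        dds::sub::LoanedSamples<CameraImage> samples = reader.take();
        for (const auto& sample : samples) {
            if (!sample.info().valid()) {
                continue;
            }
            // root() reads members directly from the serialized sample.
            auto image = sample.data().root();
            std::cout << "timestamp: " << image.timestamp() << std::endl;
        }
    }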

[9:48] Example of how developers will utilize Zero Copy

Steven Onzo: For the last question, I wanted to ask you if you can give us an example of how developers will utilize Zero Copy and in what applications we'll see this used?

Harish Umayi: The basic steps to use the Zero Copy feature are, first, to identify the types that require it. Those types will need to be annotated with the @transfer_mode(SHMEM_REF) annotation in the IDL file; one or more types can be annotated. Once these types are annotated, the code generation step generates some additional type plugin code. The next step is to link your application with the metp library. Once these two steps are done, the user can write their application with the new get_loan API on the DataWriter, which gets them a sample to write that can then be used for publishing.
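As a rough sketch of those steps with the Modern C++ API (the type, member names, and file names below are assumptions for illustration, and I'm assuming the loan call is exposed through the writer's extensions(); the exact generated code and the name of the additional library to link come from rtiddsgen and the Zero Copy documentation for your platform):

    // Illustrative IDL: a fixed-size type annotated for Zero Copy transfer.
    //
    //   @transfer_mode(SHMEM_REF)
    //   @final
    //   struct Frame {
    //       int64 timestamp;
    //       octet pixels[1048576];   // 1 MB payload (illustrative)
    //   };
    //
    // After code generation, link the additional "metp" library Harish
    // mentions, then loan samples from the DataWriter instead of copying them.

    #include <algorithm>
    #include <cstdint>
    #include <dds/dds.hpp>
    #include "Frame.hpp"   // assumed rtiddsgen output

    constexpr int FRAME_PIXELS = 1048576;

    void publish_frame(dds::pub::DataWriter<Frame>& writer, const uint8_t* pixels)
    {
        // get_loan() hands back a sample that lives in shared memory managed
        // by the DataWriter, so filling it in writes data in its final place.
        Frame* frame = writer.extensions().get_loan();

        frame->timestamp(1234);
        std::copy(pixels, pixels + FRAME_PIXELS, frame->pixels().begin());

        // write() publishes only a small reference to this shared-memory
        // sample; a reader on the same node accesses the data without a copy.
        writer.write(*frame);
    }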

The feature was developed with a focus on the autonomous vehicle industry. There's a strong need we've observed to send large samples across, and these samples are usually image data. These images can come from LiDAR or other sources, and they're usually sent at high rates, like 30 frames per second or more. So the Zero Copy feature helps address that need: the latencies are reduced, and the size of the sample doesn't matter because we are just sending a reference.

Steven Onzo: Excellent, well that wraps up the conversation. I want to thank you, Harish, for coming on the podcast and giving us some insight on Zero Copy over Shared Memory.

Harish Umayi: Thanks, Steven.
