Reconsidering Your Priorities

A lot of people who have never used RTI's infrastructure ask, "How do I prioritize my messages so that more important messages arrive before less important ones?" Yet almost no one worries about that once they've actually used our software. Why is that?

The reason is that people coming from competing solutions are used to having many intermediate hops between data producers and data consumers. When the producing application sends data, it may go to a per-node messaging daemon or to a central server; the process is then reversed on the way from the daemon or server to the final subscribing application. All of these extra message queues have to be ordered somehow, and people want to know that the ordering is under their control. That makes perfect sense.
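To see why a broker-based design forces this question, here's a toy model (not any real broker's code) of an intermediate queue: once messages sit in a queue somewhere between producer and consumer, the sender needs a way to say which ones should come out first.

```python
import heapq

# Toy model of a broker's outbound queue. Without an explicit priority the
# broker decides dequeue order on its own; with one, the sender controls it.
queue = []
counter = 0  # tie-breaker so equal-priority messages stay in FIFO order

def enqueue(priority, payload):
    global counter
    # heapq is a min-heap, so a lower number means "dequeue sooner"
    heapq.heappush(queue, (priority, counter, payload))
    counter += 1

enqueue(5, "telemetry sample")
enqueue(1, "emergency stop")      # more important = lower number here
enqueue(5, "telemetry sample 2")

drained = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(drained)  # → ['emergency stop', 'telemetry sample', 'telemetry sample 2']
```

The urgent message jumps ahead of the telemetry that was enqueued before it, which is exactly the behavior users of these systems expect to configure.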

What sometimes takes more time to get used to is that, when you're deploying your application with RTI inside, there are no daemons or servers, so there are no queues to prioritize*. You don't need the solution, because the problem doesn't exist. Messages go on the network in the order in which you call write() (in the DDS API, or send() / publish() in the JMS API), because we send them in the calling thread. And on the receiving side, if you take delivery of your data in a listener callback, you'll get it immediately after it comes out of the operating system's socket buffer. There's no blocking or context switching required.
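The no-queue path above can be sketched in a few lines. This is a toy model, not the Connext implementation: `write` and `on_data_available` here are just stand-ins for the DDS write call and listener callback, showing that when delivery happens in the calling thread, arrival order is simply call order.

```python
# Toy model of the no-queue path: write() hands the sample straight to the
# receiver's callback; there is no broker queue to reorder or prioritize.
received = []

def on_data_available(sample):
    # Stand-in for a DDS listener callback: fires as data arrives
    received.append(sample)

def write(sample):
    # Sent in the calling thread: no daemon, no server, no intermediate queue
    on_data_available(sample)

for i in range(3):
    write(f"sample {i}")

print(received)  # → ['sample 0', 'sample 1', 'sample 2']
```

Because there is nothing between `write` and delivery, there is no ordering decision left to make, which is the article's point: the prioritization problem never arises.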

What's better than a problem solved? A problem you never had to begin with.

* There are just a couple of wrinkles in the simplified picture I painted that may or may not be relevant to you.

  1. In some cases, you may ask us to send your data on the network asynchronously relative to your publishing thread. (The most common reason is that your data is really big, and you need us to fragment it before sending it. If we sent it all at once, it could put an unacceptably large burst of data on the network and/or could block your application for longer than you'd like. By sending it in our own thread, we can perform more sophisticated flow control on it.) In this case, there is a publishing-side queue of outgoing data waiting to be sent, and we do prioritize it. We use the DDS-standard Latency Budget QoS policy, which is fully customizable.
  2. If your network is heavily loaded, you might worry that your network hardware is building up deep queues of packets. If something has to be dropped, you want it to be something relatively less important. In that case, you need IP-level packet priorities; there's little your middleware can do. What we can do, and do, is set the IP TOS field on the data packets we send, based on the DDS-standard Transport Priority QoS policy.

(The QoS parameters mentioned above are available to JMS users too.)
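For readers who want to see what configuring the two policies mentioned above might look like, here is a minimal sketch of an XML QoS profile in the style RTI Connext uses. The library and profile names are made up, and the values are illustrative only; consult the Connext documentation for the exact schema your version expects.

```xml
<dds>
  <qos_library name="ExampleLibrary">
    <qos_profile name="ExampleProfile">
      <datawriter_qos>
        <!-- Latency Budget: how long a sample may wait in the (asynchronous)
             send queue; used to order outgoing data in wrinkle #1 -->
        <latency_budget>
          <duration>
            <sec>0</sec>
            <nanosec>10000000</nanosec>
          </duration>
        </latency_budget>
        <!-- Transport Priority: mapped to the IP TOS field in wrinkle #2 -->
        <transport_priority>
          <value>10</value>
        </transport_priority>
      </datawriter_qos>
    </qos_profile>
  </qos_library>
</dds>
```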