
Taming the Beast: 5 Proven Strategies to Simplify Data Sharing with RTI Connext

Data Distribution Service (DDS®) is the gold standard for real-time, scalable communications, but its learning curve can be steep. Here is a practical roadmap for cutting through the complexity.


If you are building complex, distributed systems, success depends on timely, reliable data sharing. And if you’re in autonomous robotics, aerospace, defense, industrial automation, or numerous other industries, you’ve probably already heard that DDS is the leading infrastructure choice for a reason.

Today, that reason is clear: DDS offers a unique combination of reliability, speed, and modular design through decoupling and fine-grained control over real-time data.

With all that capability, however, comes a perception of complexity. Shifting from traditional socket-based networking to a data-centric paradigm does require a change in how you architect the system, and the task of creating your application's data model while defining its data-delivery behavior can get complicated if you let it.

How do you harness the power of DDS without getting bogged down trying to master every capability of the DDS standard at once? The secret lies not in learning every single feature, but in adopting a structured approach that simplifies the design phase.

Here are five strategies to streamline the design of your distributed network application using RTI Connext, which is the leading implementation of the DDS standard.

1. Start with the Data Model (Don't Reinvent the Wheel)

In DDS, the data model (defined in the Interface Definition Language, or IDL) is the contract between your applications. A poorly designed data model leads to a brittle system that is hard to scale or evolve.

The biggest mistake teams make is starting with a blank slate. Trying to define every data type from scratch often leads to "message-centric" designs disguised as DDS, defeating the data-centric principle of the infrastructure.

How to simplify:

  • Use AI to get started with data modeling: RTI's System Designer tool uses AI to assist users in creating an initial data model, giving architects a structured starting point that they can refine and extend as the system evolves. It can even analyze a system diagram and derive a starting data model from which a proof-of-concept application can be generated.
  • Leverage existing, proven data models as your foundation: Look at industry standards relevant to your domain, such as FACE® in avionics; GVA, NGVA, OARIS, and UMAA for autonomous systems; or common ROS 2 message definitions in robotics.
    By importing established IDL structures for common things such as vectors, timestamps, or sensor readings, you can ensure that you are adopting best practices for data alignment and serialization. You can then modify these proven base models to fit your specific application needs. This approach ensures a robust foundation and forces your team to think "data-first" right from day one.

Plan for extensibility:

Recent updates to the DDS standard let you define your application data types in an extensible, changeable way, so your data model can evolve along with your application's requirements and functionality.
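As a sketch, a data type for a hypothetical IMU sensor might reuse common building blocks and be annotated for extensibility (the type and field names here are illustrative, not from any particular standard):

```idl
// Reusable building blocks, modeled on types common to industry standards
struct Vector3 {
    double x;
    double y;
    double z;
};

struct Timestamp {
    int64 sec;
    uint32 nanosec;
};

// Application type: marked appendable so new fields can be added in later
// versions without breaking already-deployed subscribers
@appendable
struct ImuReading {
    @key uint32 sensor_id;       // instance key: one data instance per sensor
    Timestamp stamp;
    Vector3 linear_acceleration;
    Vector3 angular_velocity;
};
```

The `@key` annotation tells DDS to track each sensor as a separate instance of the Topic, which is the kind of data-first decision that pays off as the system grows.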

2. Leverage AI and Code Generation to Create Boilerplate Code

Connecting your data model to your application logic requires a certain amount of "plumbing" code – creating type-support code, DomainParticipants, Topics, Publishers, Subscribers, DataWriters, and DataReaders. Writing and maintaining this manually can be time-consuming.

How to simplify:

Never write DDS initialization code by hand.

  • Code Generation: Use the tools provided by RTI (rtiddsgen). These tools ingest your data model (provided in IDL and/or XML) and create type-safe code in C++, C#, Java, or Python that handles serialization, deserialization, and network interfacing. The tool also generates example publication and subscription applications for the data type you provide, so your team focuses only on the business logic connected to these generated interfaces.
  • Familiar Code Development Environment: RTI’s Connext Copilot is your AI-powered coding assistant within Visual Studio Code. This integration makes you even more efficient when developing applications.
  • AI Assistance: Leverage AI coding assistants to help generate the complex algorithmic logic within your application. While codegen handles the middleware connection, AI can help speed up the creation of the decision paths or control algorithms that process the incoming data.

Once your initial code is built, you can also use DDS to run simulations and get a sense of the initial data flow across different parts of the system. For example, simply adding some preliminary application logic to create data sources gives you a basic Digital Twin, which you can then use to test out algorithms and real-time control procedures. This allows you to start your application testing without any hardware yet in place.
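As an illustrative invocation (assuming an IDL file named `ImuReading.idl`; exact flags and target architecture strings vary by Connext version and platform), rtiddsgen can emit both the type-support code and example publisher/subscriber applications:

```shell
# Generate C++11 type-support code plus example pub/sub applications
# and build files for the given target architecture
rtiddsgen -language C++11 -example x64Linux4gcc7.3.0 ImuReading.idl

# Regenerate after the IDL evolves, overwriting previously generated files
rtiddsgen -replace -language C++11 ImuReading.idl
```

Because the generated files are disposable, teams typically keep only the IDL under version control and regenerate the plumbing as part of the build.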

3. Standardize on 5 to 6 "Golden" QoS Profiles

Quality of Service (QoS) is DDS's superpower. It allows you to control exactly how data moves – reliably or best effort, durable data for late-joining subscribers, data-delivery deadlines, application liveliness, and more.

It is also DDS’s biggest source of confusion. With dozens of policies and permutations, asking an application developer to configure QoS for every topic can be daunting.

How to simplify:

Stop tweaking individual knobs. Instead, leverage the Connext built-in standardized profiles (usually stored in an XML file) that are based on functional use cases. Some of the available profiles are:

  • Sensors (Best Effort): High frequency; okay to drop samples in favor of the latest data.
  • Alarms/Commands (Strict Reliable): Must arrive, must be acknowledged.
  • Status (Liveliness): Used to detect whether a component is alive.
  • Streaming Data (High Throughput): Optimized for sustained, high-volume data flow.
  • Large Payload (Video/Imagery): Tuned for asynchronous, large-data transfer.

Developers then simply choose the profile that matches their intent. This drastically reduces cognitive load and enforces system-wide consistency. These profiles typically cover 90% of the data flows in a distributed application; RTI provides additional assistance for anything needing a more complex set of QoS definitions.
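A "golden" profile file might look like the following sketch (the library and profile names are illustrative; the built-in profiles that ship with Connext follow the same XML pattern):

```xml
<dds xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <qos_library name="MyGoldenQosLibrary">

    <!-- Sensors: best effort, keep only the latest sample -->
    <qos_profile name="Sensors">
      <datawriter_qos>
        <reliability><kind>BEST_EFFORT_RELIABILITY_QOS</kind></reliability>
        <history>
          <kind>KEEP_LAST_HISTORY_QOS</kind>
          <depth>1</depth>
        </history>
      </datawriter_qos>
      <datareader_qos>
        <reliability><kind>BEST_EFFORT_RELIABILITY_QOS</kind></reliability>
      </datareader_qos>
    </qos_profile>

    <!-- Alarms/Commands: strict reliability, every sample must arrive -->
    <qos_profile name="AlarmsCommands">
      <datawriter_qos>
        <reliability><kind>RELIABLE_RELIABILITY_QOS</kind></reliability>
        <history><kind>KEEP_ALL_HISTORY_QOS</kind></history>
      </datawriter_qos>
      <datareader_qos>
        <reliability><kind>RELIABLE_RELIABILITY_QOS</kind></reliability>
        <history><kind>KEEP_ALL_HISTORY_QOS</kind></history>
      </datareader_qos>
    </qos_profile>

  </qos_library>
</dds>
```

An application then selects a profile by name at startup, so the choice of delivery behavior lives in one reviewed file rather than being scattered through application code.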

4. Take Advantage of Advanced Tooling for "X-Ray Vision"

Distributed systems are notoriously difficult to debug because they are inherently "black boxes." When Component A sends data that Component B doesn't receive, where is the break? The network? A QoS mismatch? A firewall?

Trying to debug this via printf statements across different machines is agonizing.

How to simplify:

Embrace the Connext ecosystem tooling immediately. Do not wait until integration testing.

  • Admin Console / Monitor: These tools provide a graphical view of your live system. They immediately display who is talking to whom and, crucially, flag errors such as type inconsistencies or QoS mismatches in bright red. Admin Console lets you subscribe directly to topics on the network and view them in graph or table form for quick data visualization, and it supports simple Python scripting for injecting test data patterns into your network application. These capabilities dramatically reduce integration and testing time.
  • Record / Replay Services: These act as a data recorder for the data you’re sharing. You can record a real-world scenario (such as a drone flight test) and replay that data into your lab network at a later time. This allows you to test new software against real-world data repeatedly without needing the actual hardware.
  • Routing Service: This simplifies integrating disparate systems, acting as a bridge between different network domains or data models without requiring custom application code.

5. Validate Early with Perftest

A common trap is designing a system on paper, building the applications, and only discovering during full-scale integration that the network bandwidth cannot handle the required video streams or sensor loads.

How to simplify:

Shift performance validation to the very beginning of the project using standard performance testing tools such as RTI Perftest. Before writing a single line of application code, use Perftest to simulate your expected network load. You can tell it: "Simulate 50 publishers sending 1KB samples at 100Hz with Reliable QoS."
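Even before running Perftest, a quick back-of-the-envelope check of that example load (numbers taken from the scenario above) shows why early validation matters:

```python
# Rough aggregate-load estimate for the scenario above:
# 50 publishers, 1 KB samples, 100 Hz each. This counts application
# payload only -- RTPS/transport overhead and reliability (ACK/NACK)
# traffic come on top of this figure.
publishers = 50
sample_size_bytes = 1024
rate_hz = 100

bytes_per_sec = publishers * sample_size_bytes * rate_hz
mbits_per_sec = bytes_per_sec * 8 / 1_000_000

print(f"{bytes_per_sec} B/s ≈ {mbits_per_sec:.1f} Mbit/s")
# → 5120000 B/s ≈ 41.0 Mbit/s
```

Roughly 41 Mbit/s of payload is trivial for a wired gigabit network but could saturate a constrained wireless or embedded link, which is exactly the kind of finding you want before integration, not during it.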

Perftest will confirm whether your hardware and network infrastructure can sustain your most stringent requirements. It lets you experiment with different QoS combinations in isolation to find the optimal settings before unvetted application logic can muddy the waters.
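As an illustrative pair of commands (flag names follow recent Perftest releases and may differ in your version), a reliable 1 KB, 100 Hz test might look like:

```shell
# Subscriber side: run first on the receiving host
./perftest_cpp -sub -dataLen 1024

# Publisher side: 1 KB samples at 100 Hz, reliable delivery (the default),
# running for 60 seconds before printing throughput/latency statistics
./perftest_cpp -pub -dataLen 1024 -pubRate 100 -executionTime 60
```

Scaling the sample size, rate, or number of publishers/subscribers from the command line lets you sweep the design space in minutes rather than rebuilding applications.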

Conclusion: The Synergy of Simplicity

Designing a distributed application doesn't have to be overwhelming. By combining these five strategies, you entirely change the dynamic of the architecture, development, testing, and integration phases.

To sum up, you start with a solid Data Model foundation. You remove guesswork with standardized QoS Profiles. You eliminate repetitive coding with Code Generation. You gain visibility during development with Advanced Tooling, and prove your architecture works before you build it with Performance Testing.

The result is not just a simpler design phase; it leads to a faster build, easier integration, and a more robust final product. You move from fighting the middleware to leveraging it as the powerful backbone it was designed to be.

About the authors:

Bert Farabaugh is the Director of Field Application Engineering at RTI. During his 21-year tenure with RTI, Bert has developed extensive expertise in real-time applications and distributed systems architecture design. Prior to his current role, he also spent 12 years developing robotic systems with various key defense contractors. Bert holds a B.S. in Electrical Engineering from The University of Pittsburgh.

Martin Brignall is a Senior Field Application Engineer at RTI.