Software costs a lot to develop, maintain, market and sell, but the incremental cost of providing that software to an additional customer is very low. Vendors need to cover the entire cost; users want to pay only the low incremental cost. This gap must somehow be bridged. Charging based on licenses, support or services results in flawed justifications, and charging based on development seats, runtime royalties or product value creates equally weak rationales. The best solution is to charge a fair, open and simple price based on the investment the customer is making in their project.
For program managers of mission-critical systems, interoperability through a common, shared infrastructure has become a goal of paramount importance. This should come as no surprise. Interoperability and shared architectures deliver real business benefits across industries. But achieving interoperability is not just a technical challenge; it also requires a new way of thinking about licensing and pricing policies for key system software. This whitepaper explores the business challenges of achieving interoperability and proposes a new Infrastructure Community model that facilitates the development, deployment, and maintenance of fully interoperable systems.
The economic imperative is driving procurement and system-integration innovation in much the same way that wars tend to drive technology innovation. However, technology has now advanced sufficiently to create the foundations of a fundamentally new procurement strategy: Interoperable Open Architecture (IOA). IOA allows government agencies to build high-performance, reliable systems that scale, while reducing cost and increasing interoperability.
Smart, connected systems are changing many industries. The need for high-performance real-time intelligent systems abounds in defense, industrial automation, medical devices, automotive, and more. The fundamental value driver is easy integration of applications into subsystems, of subsystems into systems, and of systems into larger systems of systems.
RTI's Connext product line provides the infrastructure to connect all of these systems. It bridges the gap between technologies: it can connect the tiniest devices, through mission-critical real-time computers, to enterprise information systems. It makes many different systems work together as one application.
With the Generic Vehicle Architecture (GVA), the UK MOD has initiated a fundamental shift in perspective regarding collaboration between defense procurement agencies and systems integrators (SIs). Read about the innovative aspects of GVA, IOA and the acquisition approach, as well as the confluence of thinking and events that led to this new engagement model.
Enterprises increasingly need to develop distributed systems in an agile manner, with minimal perturbation to end users and at lower costs. This paper discusses architectural design options and principles to address integration challenges. It includes a comprehensive table for evaluating relevant technologies.
This paper addresses the system design and integration challenges involved in meeting the requirements for coordinated deployment of multiple re-configurable Unmanned Aircraft Systems (UAS).
RTI has created a number of additional capabilities for its OMG Data Distribution Service (DDS) compliant middleware, specifically to address the challenges of constrained network communication. This paper discusses some sample network types and presents solutions using RTI Data Distribution Service.
This whitepaper discusses why integrating modern systems requires a new modular, network-centric approach that relies only on standard APIs and protocols, provides stronger information-management services, and avoids historical problems of integrating complex, heterogeneous systems. The paper focuses on "real-world" systems, that is, systems that interact with the external physical world and must live within the constraints imposed by real-world physics. Examples include air-traffic control systems, real-time stock trading, command and control (C2) systems, unmanned vehicles, robotics and vetronics, and supervisory control and data acquisition (SCADA) systems.
In the modern world, two powerful forces are at absolute odds: system complexity is increasing while budgets are tightening. It seems clear that, in order to manage this state of affairs, we must look for new and different ways to do things rather than just making incremental improvements on the old ways. Data-centric middleware provides that opportunity by enabling a fundamental step forward in efficiency for designing, developing, and deploying next generation, distributed mission-critical systems. Having now been successfully employed on many projects, a data-centric architecture must be seriously considered for any new mission-critical undertaking.
Integration effort and risk grow dramatically with system and team size. Large systems developed by independent teams at different times must nevertheless be connected, creating a system of systems. Enterprise systems use a design called Service Oriented Architecture (SOA) to effect this integration. However, the technologies used in the enterprise do not apply well to real-time systems; they cannot handle the strict delivery and timing requirements. So what IS Real-Time SOA?
Embedded Market Forecasters, the premier market intelligence and advisory firm for the embedded technology industry, released a report along with survey results that provide a cost-based evaluation framework for embedded developers weighing the relative merits of developing 'Roll Your Own' (RYO) middleware in-house versus using a commercial alternative. The study marks the first time that ROI metrics have been developed from comprehensive data derived from an extensive survey of embedded developers. The findings show that middleware selection can have a very significant impact on a project's cost, timeliness, risk and performance.
Reliable one-to-many communication is prone to two serious problems in particular: (1) how to prevent a slow consumer from holding up the rest of the system, and (2) how to prevent massive amounts of negative acknowledgement (NACK) traffic from swamping the network. These problems are related to one another: both concern the way a communications stack (network protocols combined with a middleware on top of them) maintains reliability across a logical network topology with broad fan-out.
This paper discusses how these problems can be lessened or avoided altogether by leveraging the unique capabilities of RTI Data Distribution Service middleware.
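As a rough illustration of the standard QoS machinery involved, the sketch below (using the OMG DDS ISO C++ API; the function name configure_writer_qos is invented, and the specific values are placeholders) shows how a reliable writer can bound its queue so that one slow reader cannot stall everyone else. The NACK-related side of the problem is addressed by protocol-level tuning in RTI's implementation, which the paper covers in detail.

```cpp
#include <dds/dds.hpp>  // OMG DDS ISO C++ PSM, implemented by RTI Connext among others

// Sketch only: bound a reliable DataWriter so one slow reader cannot stall the rest.
void configure_writer_qos(dds::pub::qos::DataWriterQos& qos) {
    // Reliable delivery, with a cap on how long write() may block when the
    // reliability queue fills because a reader is not acknowledging samples.
    qos << dds::core::policy::Reliability::Reliable(
               dds::core::Duration::from_millisecs(10));

    // KEEP_LAST bounds the queue itself: a reader that falls too far behind may
    // miss overwritten samples, but it can no longer hold up the writer and,
    // with it, every other consumer in the fan-out.
    qos << dds::core::policy::History::KeepLast(32);
}
```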
Securing RTI Data Distribution Service with Security Enhanced Linux (SELinux) provides a new level of security for applications and solutions created with the DDS standard. This white paper, written by Tresys Technology, a recognized leader in SELinux and Linux security services, provides technical details concerning leveraging SELinux to secure DDS applications. It explores how SELinux and Linux can be used to create solutions with RTI Data Distribution Service that meet the most stringent government security requirements.
Latency, or rumors of it, is a destructive force that can incapacitate a trading firm at any time. Markets are increasingly fragmented and reliant on multi-tasking applications and connectivity to unite them. The analysis of these disparate sources of liquidity is one of several challenges to traders of all stripes. Among those market participants aiming for first place, the ability to quickly process, analyze and react to the onslaught of data is a critical component of their competitive advantage. Whether those market participants are exchanges trying to attract liquidity, hedge funds seeking alpha or algorithms chasing best execution, speed is a necessary trait for survival.
(Updated: June 2011) Designing and building an unmanned autonomous vehicle (UAV) is one of the most difficult problems in engineering, and it is particularly challenging from a software systems perspective. By optimizing software performance, scalability, availability and reliability, security, interoperability, and affordability, system designers can create a UAV that is adaptable to new mission parameters while remaining robust across the product lifecycle.
This paper presents a comprehensive overview of the Data Distribution Service (DDS) standard and describes its benefits for developing robust precision assembly applications. DDS is a platform-independent standard released by the Object Management Group (OMG) for data-centric publish-subscribe systems. It allows decoupled applications to transfer information regardless of the architecture, programming language or operating system they use. The standard is particularly designed for real-time systems that need to control timing and memory resources and that have low-latency and high-robustness requirements. As such, DDS has the potential to provide the communication infrastructure for next-generation precision assembly systems in which a large number of independently controlled components need to communicate. To illustrate the benefits of DDS for precision assembly, an example application is presented.
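For a flavor of what that decoupling looks like in code, here is a minimal, hypothetical publisher written against the OMG DDS ISO C++ API. SensorReading stands in for an IDL-generated data type and "StationTelemetry" is an invented topic name; any subscriber on the same domain and topic receives the data, whatever its language or platform.

```cpp
#include <dds/dds.hpp>        // OMG DDS ISO C++ PSM (RTI Connext and others)
#include "SensorReading.hpp"  // hypothetical IDL-generated type: { string station_id; double position; }

int main() {
    // Join DDS domain 0 and declare the data we intend to publish.
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorReading> topic(participant, "StationTelemetry");
    dds::pub::DataWriter<SensorReading> writer(
        dds::pub::Publisher(participant), topic);

    SensorReading sample;
    sample.station_id("assembly-cell-7");
    sample.position(12.5);
    writer.write(sample);  // delivered to every matched subscriber, wherever it runs
    return 0;
}
```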
Are all real-time distributed applications supposed to be designed the same way? Is the design for a UAV-based application the same as that of a command-and-control application?
In the case of a UAV-based application, a high volume of data gets created, only some of which is of interest to the base station. To preserve the radio link's bandwidth, only the relevant information is transmitted. The application will use the data for post-mission analysis, so it also has persistence and data-mining needs. In contrast, a real-time command-and-control application needs low latency and high reliability, but has little need to persist or cleanse data in real time. No, all real-time distributed applications are not designed the same way. While we categorize both of these applications as real-time with similar data-transmission characteristics, their architectures and designs vary significantly because the information that they manage and process varies significantly.
This paper characterizes the lifecycle of data in real-time applications — from creation to consumption. The paper covers questions that architects should ask about data management — creation, transmission, validation, enrichment, and consumption; questions that will determine the foundation of their project.
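That contrast surfaces directly in per-topic quality-of-service settings. The following hedged sketch uses standard DDS policy names with invented function names and placeholder values to show how differently the two applications might configure their writers.

```cpp
#include <dds/dds.hpp>  // OMG DDS ISO C++ PSM

// UAV telemetry: a bandwidth-constrained radio link, plus post-mission analysis needs.
void uav_telemetry_qos(dds::pub::qos::DataWriterQos& qos) {
    qos << dds::core::policy::Reliability::BestEffort()     // don't spend the radio link on repairs
        << dds::core::policy::Durability::TransientLocal()  // late joiners still see recent data
        << dds::core::policy::History::KeepLast(100);       // retain a window for analysis
}

// Command and control: every command matters and must arrive with minimal delay.
void c2_command_qos(dds::pub::qos::DataWriterQos& qos) {
    qos << dds::core::policy::Reliability::Reliable()  // no lost commands
        << dds::core::policy::History::KeepAll()       // never overwrite an undelivered command
        << dds::core::policy::Durability::Volatile();  // nothing needs to outlive delivery
}
```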
Increases in data volumes over the past few years have stretched current financial information backbones, resetting the technology challenges in a way that demands a new approach.
For example, Automated Trading Desk (atdesk.com), now part of Citigroup Inc., chose RTI to distribute real-time data from direct-exchange and ECN feeds to price-prediction engines and automated trading applications. PIMCO (pimco.com) has selected RTI as part of a new initiative to enforce and monitor regulatory and client-imposed pre-trade investment restrictions.
RTI's solution is a software-only messaging backbone based on a true peer-to-peer architecture that fundamentally challenges both earlier generations of daemon-based architectures and more recent peer-to-peer approaches. Built on 16+ years of research and development, RTI's key technical value is intelligent messaging that performs predictably as message sizes increase and consistently as overall message throughput grows under extreme market conditions. This deterministic behavior is unmatched.
With no intermediate daemons, brokers or other components to add latency, market data and market order/execution updates have been benchmarked at over 3 million messages per second. RTI customers have reported mean latencies as low as 43 microseconds in Gigabit Ethernet environments. Current tests support symbol spaces of over 1,000,000 "subjects." Overall throughput scales essentially linearly as publishers are added, with maximum throughput limited only by available bandwidth.
Modern military operations rely on integration between many disparate systems for functions including command and control, weapons and self-defense. Developing and integrating these applications is particularly challenging because of their strict real-time performance constraints. Often, their latency and throughput requirements exceed the capabilities of traditional enterprise messaging and integration middleware.
This paper examines defense systems' real-time requirements and shows how they can be met with commercial middleware specifically designed for such stringent performance demands.
The tactical battlefield has long been characterized by the use of many different data collection and analysis systems that present information on small and discrete areas of the conflict to separate command and control stations. The operators of these stations attempt to use that data to estimate enemy intentions and actions, and counter with manual direction of the equipment and personnel in a simulacrum of coordinated response.
The result is a disjointed and often extremely dynamic environment of forces operating across the battlefield: a variety of aircraft with different weaponry, performance and flight characteristics; fixed and mobile artillery; shipboard combat and weapon systems; and dismounted soldiers, each holding unique pieces of data that, when aggregated, represent the complete strategic picture of battlefield operations. When these disparate systems are integrated, it is often with a particular mix and mission in mind.
A much discussed way to dramatically improve the speed-of-command on the battlefield is to create a net-centric battlefield operation. Each individual element of a tactical system performs its narrow mission, but shares data as needed with others in a way that provides a more complete and accurate representation of the battlefield environment and the role of that system in the environment.
The purpose of net-centric warfare is to translate an information advantage into a battlefield advantage through the comprehensive networking and dynamic data-sharing between geographically dispersed forces. The shared situational awareness enables better strategic coordination of forces and enhances speed-of-command, which dramatically increases mission effectiveness.
Truly profound technologies become part of everyday life. Motors, plastics, computers, and now networking have made this transition in the last 100 years. These technologies are embedded in billions of devices; they have melted into the assumed background of the modern world.
Another stage is emerging in this progression: pervasive, real-time data. It is like the Internet, except that this pervasive information infrastructure will connect devices, not people. Just as the "web" connected people and fundamentally changed how we all interact, pervasive data will connect devices and change how they interact.
Today's network technology makes it easy to connect nodes, but not so easy to find and access the information resident in networks of connected nodes. This is changing; we will soon gain the ability to pool information from many distributed sources and access it at rates meaningful to physical processes. Many label this new capability the "network centric" architecture. However, a more appropriate term is "data centric," because the change is fundamentally driven by the dynamic availability of information, not the static connectivity of the network itself. Whatever the name, this concept will drive the development of vast, distributed, information-critical applications.
The Rise of Data-Centric Programming
The network is profoundly changing the nature of system design. The "web" is just a first step; the Internet today focuses on connecting people at human interaction speeds. Future networks will connect vast arrays of cooperating machines at rates meaningful to physical processes. These connections make truly distributed applications possible. Distributed applications will drive the future in many areas, from military information systems to financial trading to transportation.
The growing popularity of cheap and widespread data-collection “edge” devices and easy access to communication networks (both wired and wireless) are weaving more devices and systems into the fabric of our daily lives. As computation and storage costs continue to drop faster than network costs, the trend is to keep data and computation local, using data distribution technology to move data between nodes as and when needed. As a result, the quantity of data, the scale of its distribution and the complexity of integration are growing at a rapid pace.
The demands on the next generation of distributed systems and systems of systems include supporting dynamically changing environments and configurations, being constantly available, and being instantly responsive, all while integrating data across many platforms and disparate systems.
How does one systematically approach the design of such systems and systems of systems? What are the common unifying traits that architects can exploit to build systems that integrate with other independently developed systems, yet preserve the flexibility to evolve incrementally? How does one build systems that are self-aware and self-healing, and that dynamically adapt to changes in their environment? Can this be done at the scale of the Internet, yet be optimized for the best performance the underlying hardware and platform infrastructure can support? Can this be done without magnifying ongoing operational and administrative costs?
These and related topics are the subject of this paper.
In this paper we provide a comparative overview of the Data Distribution Service (DDS) with respect to the High Level Architecture (HLA). We describe the equivalent terminology and concepts, and highlight the key similarities and differences in the areas of declaration management, object management, data distribution management, ownership management, federation management, and time management.
We explore the architectural mapping between HLA and DDS. We develop an outline for translating from one model to the other, and examine the needed supporting transformations and assumptions. We conclude with remarks and observations on building applications that can utilize both HLA and DDS technologies.
Today's embedded software applications are increasingly distributed; they communicate data between many computing nodes in a networked system. Several network middleware designs have arisen to meet the resulting communications need, including client-server, message passing, and publish-subscribe architectures.
The new Object Management Group (OMG) Data Distribution Service (DDS) standard is the first comprehensive specification available for "publish-subscribe" data-centric designs. This paper helps system architects understand when DDS is the best fit to an application. It first provides an overview of DDS's functionality and compares it to the other available technologies and standards. Then, it examines driving factors, potential use cases, and existing applications. It concludes with general design guidance to help answer the question of when DDS is the best networking solution.
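To give a concrete feel for the publish-subscribe model the paper describes, here is a minimal subscriber matching the hypothetical publisher sketched earlier (same invented SensorReading type and "StationTelemetry" topic). It polls for simplicity; production code would typically block on a WaitSet or attach a listener instead.

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include <dds/dds.hpp>        // OMG DDS ISO C++ PSM
#include "SensorReading.hpp"  // same hypothetical IDL-generated type as the publisher

int main() {
    dds::domain::DomainParticipant participant(0);  // must match the publisher's domain
    dds::topic::Topic<SensorReading> topic(participant, "StationTelemetry");
    dds::sub::DataReader<SensorReading> reader(
        dds::sub::Subscriber(participant), topic);

    for (;;) {
        // take() removes available samples from the reader's cache as a loan.
        dds::sub::LoanedSamples<SensorReading> samples = reader.take();
        for (const auto& sample : samples) {
            if (sample.info().valid()) {  // skip metadata-only samples
                std::cout << sample.data().station_id() << ": "
                          << sample.data().position() << "\n";
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}
```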
© 2007-2013 Real-Time Innovations. All rights reserved.