Hi. My name is Lacey Trebaol and welcome to the Connext Podcast. Today I'm going to be sharing with you an audio version of our most popular eBook, Leading Applications & Architecture for the Industrial Internet of Things. The author of this eBook is our CEO, Stan Schneider. In order to keep our podcast length down to about 30 minutes, we'll be breaking up the reading of this eBook into three separate episodes. Let's get started with part two. We hope you enjoy episode seven.
Section two: Toward a taxonomy of the IIOT. There is, today, no organized system science for the IIOT. We have no clear way to classify systems, evaluate architectural alternatives, or select core technologies. To address the space more systematically, we need to develop a taxonomy of IIOT applications based on their system requirements.
This taxonomy will reduce the space of requirements to a manageable set by focusing only on those that drive significant architectural decisions. Based on extensive experience with real applications, we suggest a few divisions and explain why they impact architecture. Each of these divisions defines an important dimension of the IIOT taxonomic model. We thus envision the IIOT space as a multidimensional requirement space. This space provides a framework for analyzing the fit of architectures and technologies to IIOT applications.
A taxonomy logically divides types of systems by their characteristics. The first problem is to choose top level divisions. In the animal kingdom, you could label animals as land, sea, or air animals. However, these environmental descriptors don't help much in understanding the animal. For instance, the architecture of a whale is not much like an octopus, but it is very like a bear. To be understood, animals must be divided up by their characteristics and architecture, such as heart type, reproductive strategy, and skeletal structure.
It is similarly not useful to divide IIOT applications by their industries, like medical, transportation and power. While these environments are important, the requirements simply do not split along industry lines. For instance, each of these industries has some applications that must process huge data sets, some that require real time response, and others that need life critical reliability. Conversely, systems with vastly different requirements exist in each industry. The bottom line is that fundamental system requirements vary by application, and not by industry. These different types of systems need very different approaches.
Thus, as in biology, the IIOT needs an environment independent system science. This science starts by understanding the key system challenges and resulting requirements. If we can identify common cross industry requirements, we can then logically specify common cross industry architectures that meet those requirements. That architecture will lead to technologies and standards that can span industries.
There is both immense power and challenge in this statement. Technologies that span industries face many challenges, both political and practical. Nonetheless, a clear fact of systems in the field is the similarity of requirements and architecture across industries. Leveraging this fact promises a much better understood, better connected future. It also has immense economic benefit. Over time, generic technologies offer huge advantage over special purpose approaches. Thus, to grow our understanding and realize the promise of the IIOT, we must abandon our old industry specific thinking.
So, what can we use for divisions? What defining characteristics can we use to separate the mammals from the reptiles from the insects of the IIOT? There are far too many requirements, both functional and non-functional, to consider in developing a comprehensive set to use as criteria. As with animals, we need to find those few requirements that divide the space into useful, major categories.
The task is simplified by the realization that the goal is to divide the space so that we can determine system architecture. Thus, good division criteria are, one, unambiguous, and two, impactful on the architecture. That makes the task easier, but still non-trivial. The only way to do it is through experience. We are still early in our quest. However, significant progress is within our collective grasp.
This work draws on extensive experience with nearly 1,000 real world IIOT applications. Our conclusion is that an IIOT taxonomy is not only possible, but also critical to both individual system building and the inception of a true cross industry IIOT. While the classification of IIOT systems is very early, we do suggest a few divisions. To be as crisp as possible, we also chose numeric metrics for each division. The lines, of course, are not that stark. Those lines evolve with technology over time, at a much faster pace than biological evolution. Nonetheless, the numbers are critical to force clarity. Without numerical metrics, meaning is often too fuzzy.
Metric, continuous availability must exceed 99.999% to avoid severe consequences. Architectural impact, redundancy. Many systems describe their requirements as being "highly reliable, mission critical, or minimal downtime." However, those labels are more often platitudes than actionable system requirements. For these requirements to be meaningful, we must be more specific about the reasons we must achieve that reliability. That requires understanding how quickly a failure causes problems, and how bad those problems are.
Thus, we define continuous availability as the probability of a temporary interruption in service over a defined, system relevant time period. The five nines golden specification for enterprise class servers translates to about five minutes of downtime per year. Of course, many industrial systems cannot tolerate even a few milliseconds of unexpected downtime. For a power system, the relevant time period could span years. For a medical imaging machine, it could be only a few seconds.
The consequences of violating the requirement are also meaningful. A traffic control system that goes down for a few seconds could result in fatalities. A website that goes down for those same few seconds would only frustrate users. These are fundamentally different requirements. Reliability thus defined is an important characteristic, because it greatly impacts the system architecture. A system that cannot fail, even for a short time, must support redundant computing, sensors, networking, storage, software and more. Servers become troublesome single points of failure. When reliability is truly critical, redundancy quickly becomes a, or perhaps the, key architectural driver.
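The five nines arithmetic mentioned above is easy to check. The following is a minimal sketch, not from the eBook, showing how an availability target translates into allowed downtime per year:

```python
# Downtime implied by a continuous availability target.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(availability: float) -> float:
    """Return the allowed downtime in seconds per year."""
    return (1.0 - availability) * SECONDS_PER_YEAR

# "Five nines" (99.999%) allows roughly five minutes per year.
five_nines = downtime_per_year(0.99999)
print(f"{five_nines / 60:.2f} minutes/year")  # about 5.26 minutes
```

The same function makes it obvious why industrial targets are so demanding: adding one more nine cuts the budget to about half a minute per year.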
Metric, response less than 100 milliseconds. Architectural impact, peer-to-peer data path. There are many ways to characterize real time. All systems should be fast. However, for these requirements to be useful, we must specifically understand which timing requirements drive success. Thus, real time is much more about guaranteed response than it is about fast.
Many systems require low average latency, or delivery delay. However, true real time systems succeed only if they always respond on time. This is the maximum latency, often expressed as the average delay plus the variation, or jitter. Even a fast server with low average latency can experience large jitter under load. In a distributed system, the most important architectural impact is the potential jitter imposed by a server or broker in the data path.
An architecture that can satisfy a human user, annoyed by a wait of longer than eight seconds for a website, will never satisfy an industrial control that must respond in two milliseconds. We find that the knee in the curve that greatly impacts design occurs when the speed of response is measured in a few tens of milliseconds, or even microseconds. We choose 100 milliseconds simply because that is about the unpredictable delay for today's servers. Systems that respond faster than this usually must be peer-to-peer. That is a huge architectural impact.
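The point that worst-case latency, not average latency, decides real-time success can be made concrete. This is an illustrative sketch with made-up numbers, not from the eBook: two services with identical average delay, one with low jitter and one with a broker-style load spike.

```python
# Two latency traces (milliseconds) with the SAME average delay.
steady = [9, 10, 10, 11, 10, 10]   # low jitter: worst case near average
bursty = [2, 3, 2, 3, 2, 48]       # large jitter: one 48 ms spike

def stats(samples):
    """Return (average, worst case) latency for a trace."""
    avg = sum(samples) / len(samples)
    worst = max(samples)
    return avg, worst

for name, samples in [("steady", steady), ("bursty", bursty)]:
    avg, worst = stats(samples)
    print(f"{name}: average {avg:.0f} ms, worst case {worst} ms")

# Against a 20 ms deadline, only the low-jitter service succeeds,
# even though both average 10 ms.
```

A benchmark that reports only averages would rate these two services identically; a real-time requirement distinguishes them immediately.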
Metric, more than 10,000 addressable data items. Architectural impact, selective delivery filtering. Scale is a fundamental challenge for the IIOT. It is also complex. There are many dimensions of scale, including number of nodes, number of applications, number of developers on the project, number of data items, data item size, total data volume and more. We cannot divide the space by all these parameters.
In practice, however, they are related. For instance, a system with many data items probably has many nodes. Despite the broad space, we have found that two simple metrics correlate well with architectural requirements. The first scale metric is addressable data item scale, defined as the number of different data instances that could be of interest to different parts of the system.
Note that this is not the same as the size of a single large data set, such as a stream of data from a single fast sensor. The key scale parameter is the existence of many different data items that could potentially be of interest to different consumers. So, a few fast sensors create only a few addressable data items. Many sensors or sources create many data items. A large number of addressable data items implies difficulty in sending the right data to the right place.
When systems get big in this way, it is no longer practical to send every data update to every possible receiver. We find that the challenge is significant for as few as 100 data items. It is extreme for systems with more than 10,000 addressable data items. Above this limit, managing the data itself becomes a key architectural need. These systems need an architectural design that explicitly understands the data, thereby allowing selective filtering and delivery. There are two approaches in common use: runtime introspection, which allows consumers to choose data items themselves, and data centric designs, which empower the infrastructure itself to understand and actively filter the data system wide.
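The idea of selective delivery, where the infrastructure itself filters updates so each consumer sees only the data it asked for, can be sketched in a few lines. This is a toy illustration of the concept, assuming a hypothetical `FilteringBus` class; real data centric middleware such as DDS implements content filtering far more efficiently and declaratively.

```python
# Toy sketch of selective delivery: subscribers register a predicate,
# and the bus delivers only matching updates.
from collections import defaultdict

class FilteringBus:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> [(predicate, callback)]

    def subscribe(self, topic, predicate, callback):
        self._subs[topic].append((predicate, callback))

    def publish(self, topic, item):
        # The infrastructure, not the consumer, discards non-matching data.
        for predicate, callback in self._subs[topic]:
            if predicate(item):
                callback(item)

bus = FilteringBus()
received = []
# This consumer only wants temperature readings above 100 degrees.
bus.subscribe("sensors", lambda r: r["temp"] > 100, received.append)
bus.publish("sensors", {"id": 1, "temp": 72})
bus.publish("sensors", {"id": 2, "temp": 140})
print(received)  # only the hot reading is delivered
```

The architectural point is where the filter runs: in a data centric design the infrastructure understands the data well enough to drop the 72 degree reading before it ever reaches the consumer.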
Metric, more than 10 teams or interacting applications. Architectural impact, interface control and evolution. The second scale parameter we choose is the number of modules in the system, where a module is defined as a reasonably independent piece of software. Each module is typically an independently developed application, built by an independent team of developers on the project. Module scale quickly becomes a key architectural driver.
The reason is that system integration is inherently an N-squared problem. Each new team presents another interface into the system. Smaller projects, built by a cohesive team, can easily share interface specifications without formality. Larger projects, built by many independent groups of developers, face a daunting challenge. System integration can occupy half of the delivery schedule and most of its risk.
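The N-squared growth described above is simple combinatorics. As a minimal sketch, with N modules there are up to N*(N-1)/2 pairwise interfaces that may need coordination:

```python
# Potential pairwise interfaces among N independently developed modules.
def pairwise_interfaces(n_modules: int) -> int:
    return n_modules * (n_modules - 1) // 2

for n in (3, 10, 30):
    print(n, "modules ->", pairwise_interfaces(n), "potential interfaces")
# 3 modules ->  3 potential interfaces
# 10 modules -> 45 potential interfaces
# 30 modules -> 435 potential interfaces
```

Tripling the number of teams from 10 to 30 roughly multiplies the potential interface count by ten, which is why informal interface sharing stops working on larger projects.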
In these large systems, interface control dominates the interoperability challenge. It is not practical to expect interfaces to be static. Modules, or groups of modules, that depend on an evolving interface schema must somehow continue to interoperate with older versions of the schema. Communicating changes to all the interfaces becomes hard. Forcing all modules to update to a new schema on a coordinated time frame becomes impossible. Thus, interacting teams quickly find that they need tools, processes, and eventually architectural support to solve the system integration problem.
Of course, this is a well studied problem in enterprise software systems. In the storage world, databases ease system integration by explicitly modeling and controlling data tables, thus allowing multiple applications to access information in a controlled manner. Communication technologies like Enterprise Service Buses, ESBs, web services, enterprise queuing middleware, and textual schemas like XML and JSON all provide evolvable interface flexibility. However, these are often not appropriate for industrial systems, usually for performance or resource reasons.
Data centric systems expose and control interfaces directly, thus easing system integration. Databases, for instance, provide data centric storage, and are thus important in systems with many modules. However, databases provide storage for data at rest. Most IIOT systems require data in motion, not, or in addition to, data at rest. Data centric middleware is a relatively new concept for distributed systems. Similar to a database data table, data centric middleware allows applications to interact through explicit data models. Advanced technologies can even detect and manage differences in interfaces between modules, and then adapt to deliver to each endpoint the schema that endpoint expects. These systems thus decouple application interface dependencies, allowing large projects to evolve interfaces and make parallel progress on multiple fronts.
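The adaptation idea, delivering to each endpoint the schema it expects, can be illustrated with a toy function. This is only a conceptual sketch: unknown fields are dropped and missing fields are filled with defaults, so an old consumer and a new producer keep interoperating. Real data centric middleware, such as DDS with extensible types, handles this far more rigorously.

```python
# Sketch: adapt a published sample to an endpoint's expected schema.
# The schema maps field name -> default value for that endpoint.
def adapt(sample: dict, schema: dict) -> dict:
    """Drop fields the endpoint does not know; default missing ones."""
    return {field: sample.get(field, default)
            for field, default in schema.items()}

v1_schema = {"id": 0, "temp": 0.0}                   # old consumer's view
v2_sample = {"id": 7, "temp": 21.5, "humidity": 40}  # newer producer
print(adapt(v2_sample, v1_schema))  # {'id': 7, 'temp': 21.5}
```

The producer evolved its schema, yet the old consumer still receives exactly the fields it was built against. That decoupling is what lets large projects evolve interfaces in parallel.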
Metric, more than 20 devices, each with many parameters and data sources or sinks, that cannot be configured at development time. Architectural impact, must provide a discoverable integration model. Some IIOT systems can, or even must, be configured and understood before runtime. This does not mean that every data source and sink is known, but rather that this configuration is relatively static. Others, despite a potentially large size, have applications that implement specific functions that depend on knowing what data will be available. These systems can or must implement an endpoint discovery model that finds all the data in the system directly.
However, other systems cannot easily know what devices or data will be available until runtime. For instance, when IIOT systems integrate racks of field replaceable machines or devices, they must often be configured and understood during operation. A plant controller HMI, for example, may need to discover the device characteristics of an installed device or rack, so a user can choose data to monitor. The key factor here is not additions or changes in which devices are used; it is more a function of not knowing which types of devices may be involved. These systems must implement a different way to discover information.
Instead of searching for data, it is more efficient to automate the process by building runtime maps of devices and their data relationships. The choice of 20 different devices is arbitrary. The point is that when there are many different configurations for many devices, mapping them at runtime becomes an important architectural need. Each device requires some sort of server or manager that locally configures attached sub devices, and then presents that catalog to the rest of the system. This avoids manual gymnastics.
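The manager-and-catalog pattern just described can be sketched as follows. The `DeviceManager` and `discover` names are hypothetical, invented for this illustration: each manager configures its attached sub devices locally and publishes a catalog, and the system assembles a runtime map from those catalogs.

```python
# Sketch of a discoverable integration model: local managers publish
# catalogs, and the system builds a runtime map instead of relying on
# a compile time configuration.
class DeviceManager:
    def __init__(self, rack_id, devices):
        self.rack_id = rack_id
        self.devices = devices  # device name -> list of data item names

    def catalog(self):
        """Present this rack's devices to the rest of the system."""
        return {f"{self.rack_id}/{name}": items
                for name, items in self.devices.items()}

def discover(managers):
    """Merge all local catalogs into one runtime map."""
    system_map = {}
    for m in managers:
        system_map.update(m.catalog())
    return system_map

racks = [DeviceManager("rack1", {"pump": ["rpm", "pressure"]}),
         DeviceManager("rack2", {"valve": ["position"]})]
print(discover(racks))
```

An HMI can now browse this map and let a user choose data to monitor, without the developers ever having known which device types would be installed.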
Metric, fan out greater than 10. Architectural impact, must use one-to-many connection technology. We define fan out as the number of data recipients that must be informed upon a change of a single data item. Thus, a data item that must go to 10 different destinations each time it changes has a fan out of 10. Fan out impacts architecture because many protocols work through single one-to-one connections. Most of the enterprise world works this way, often with TCP, a one-to-one session protocol.
Examples include connecting a browser to a web server, a phone app to a back end, or a bank to a credit card company. While these systems can achieve significant scale, they must manage a separate connection to each endpoint. When many data updates must go to many endpoints, the system is not only managing many connections, it is also sending the same data over and over through each of those connections.
IIOT systems often need to distribute information to many more destinations than enterprise systems. They also often need higher performance on slower machines. Complex systems even face a fan out mesh problem, where many producers of information must send it to many recipients. When fan out exceeds 10 or so, it becomes impractical to do this branching by managing a set of one-to-one connections. An architecture that supports efficient one-to-many updates greatly simplifies these systems.
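The cost difference is easy to quantify. As a minimal sketch, assuming each recipient of a one-to-one design needs its own copy of every update, while a one-to-many transport (multicast, for example) hands the branching to the network:

```python
# Transmissions needed to deliver a stream of updates to fan_out recipients.
def sends_one_to_one(updates: int, fan_out: int) -> int:
    # Each update is re-sent once per connection.
    return updates * fan_out

def sends_one_to_many(updates: int, fan_out: int) -> int:
    # Each update is sent once; the transport handles the branching.
    return updates

updates, fan_out = 1000, 10
print("one-to-one: ", sends_one_to_one(updates, fan_out))   # 10000 sends
print("one-to-many:", sends_one_to_many(updates, fan_out))  # 1000 sends
```

At a fan out of 10 the one-to-one design already does ten times the work, and in a mesh of many producers the factor multiplies again.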
Metric, one way data flow from more than 100 sources. Architectural impact, local concentrator or gateway design. Data collection from field systems is a key driver of the IIOT. Many systems transmit copious amounts of information to be stored or analyzed in higher level servers or in the cloud. Systems that are essentially restricted to the collection problem do not share significant data between devices. These systems must efficiently move information to a common destination, but not between devices in the field. This has huge architectural impact. Collection systems can often benefit from a hub-and-spoke concentrator or gateway. Widely distributed systems can use a cloud based server design, thus moving the concentrator to the cloud.
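The hub-and-spoke idea can be sketched with a hypothetical `Concentrator` class, invented for this illustration: field readings are gathered locally and forwarded upstream as one batch, instead of every sensor opening its own connection to the server.

```python
# Sketch of a local concentrator in a hub-and-spoke collection design.
class Concentrator:
    def __init__(self):
        self._pending = []

    def collect(self, source, value):
        """Spoke side: accept a reading from a local field device."""
        self._pending.append((source, value))

    def flush(self):
        """Hub side: hand the batch upstream in one send, then reset."""
        batch, self._pending = self._pending, []
        return batch

hub = Concentrator()
for i in range(3):
    hub.collect(f"sensor{i}", i * 1.5)
print(hub.flush())  # one batched upstream message for three sources
```

With hundreds of sources, this design keeps the upstream link to one connection and lets the gateway also translate protocols or downsample before forwarding.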
Dimensional decomposition and map to implementation. The analogy with a biological taxonomy only goes so far. Industrial systems do not stem from common ancestors, and thus do not fall into crisply defined categories. As implied previously, most systems exhibit some degree of each of the characteristics. This is actually a source of much of the confusion, and the reason for our attempt to choose hard metrics, at the risk of declaring arbitrary boundaries.
In the end, however, the goal is to use the characteristics to help select a single system architecture. Designs and technologies satisfy the previously mentioned goals to varying degrees. With no system science to frame the search, the selection of a single architecture based on any one requirement becomes confusing.
Perhaps a better analysis is to consider each of the key characteristics as an axis in an n-dimensional space. The taxonomical classification process then places each application at a point in n-dimensional space. This is not a precise map. Applications may be complex, and thus placement not exact. The metrics mentioned before delineate architecturally significant boundaries that are not, in reality, crisp. So, the lines that we have named here are somewhat fuzzy. However, an exact position is often not important. Our classification challenge is really only to decide on which side of each boundary our application falls.
In this framework, architectural approaches, and the technologies that implement them, can be considered to occupy some region of this n-dimensional space. For instance, a data centric technology like the Object Management Group, OMG, DDS standard provides peer-to-peer, fully redundant connectivity with content filtering. Thus, it would occupy a space that satisfies many reliable, real time applications with significant numbers of data items, the first three challenge dimensions previously mentioned.
The Message Queuing Telemetry Transport, MQTT, protocol, on the other hand, is more suited for the data collection challenge. Thus, these technologies occupy different regions of the solution space. Figure 20 in the eBook represents this concept in three dimensions. The application can be placed in the space, and the architectural approaches represented as regions. This reduces the problem of selecting an architecture to one of mapping the application point to appropriate architectural regions. Of course, this may not be a unique map. The regions overlap.
In this case, the process indicates options. The trade off is to find something that fits the key requirements while not imposing too much cost in some other dimension. Thinking of the system as an n-dimensional mapping of requirements to architecture offers important clarity and process. It greatly simplifies the search.
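Since the classification challenge is only to decide which side of each boundary an application falls on, the taxonomy reduces to a vector of booleans. The following sketch encodes the metric thresholds from the section above; the dictionary keys, the `classify` function, and the example application are all invented here for illustration.

```python
# Which side of each (admittedly fuzzy) boundary does an application fall on?
# Thresholds follow the metrics in the text: 99.999% availability, 100 ms
# response, 10,000 data items, 10 teams, 20 device types, fan out 10,
# 100 collection sources.
BOUNDARIES = {
    "needs_redundancy":    lambda app: app["availability"] > 0.99999,
    "needs_peer_to_peer":  lambda app: app["response_ms"] < 100,
    "needs_filtering":     lambda app: app["data_items"] > 10_000,
    "needs_iface_control": lambda app: app["teams"] > 10,
    "needs_discovery":     lambda app: app["device_types"] > 20,
    "needs_one_to_many":   lambda app: app["fan_out"] > 10,
    "needs_concentrator":  lambda app: app["sources"] > 100,
}

def classify(app: dict) -> dict:
    """Place an application on one side of each boundary."""
    return {need: check(app) for need, check in BOUNDARIES.items()}

# A hypothetical traffic control application.
traffic_control = {"availability": 0.999999, "response_ms": 10,
                   "data_items": 50_000, "teams": 25,
                   "device_types": 5, "fan_out": 30, "sources": 500}
print(classify(traffic_control))
```

The resulting boolean vector is the application's point in the requirement space; candidate architectures can then be compared by which of those needs their region of the space covers.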
Defining an IIOT taxonomy will not be trivial. The IIOT encompasses many industries and use cases. It encompasses much more diversity than applications for specialized industry requirements, enterprise IT, or even consumer IOT. Technologies also evolve quickly. The scene is constantly shifting. This present state just scratches the surface.
However, the benefit of developing a taxonomical understanding of the IIOT is enormous. Resolving these issues will help system architects choose protocols, network topologies, and compute capabilities. Today, we see designers struggling with issues like server location or configuration, when the right design may not even require servers. Overloaded terms like real time and thing cause massive confusion between technologies, despite the fact that they have no practical use case overlap. The industry needs a better framework to discuss architectural fit.
Collectively, organizations like the IIC enjoy extensive experience across the breadth of the IIOT. Mapping those experiences to a framework is the first step in the development of a systems science of the IIOT. Accepting this challenge promises to help form the basis of a better understanding and a logical approach to designing tomorrow’s industrial systems.
IOT protocols. The IOT protocol roadmap in figure 21 of the eBook outlines the basic needs of the IOT. Devices must communicate with each other, D to D. Device data must be collected and sent to the server infrastructure, D to S. And that server infrastructure has to share device data, S to S, possibly providing it back to devices, to analysis programs, or to people.
From 30,000 feet, the main IOT protocols can be described in this framework as follows. One, MQTT, a protocol for collecting device data and communicating it to servers, D to S. Two, XMPP, a protocol best for connecting devices to people, a special case of the D to S pattern, since people are connected to the servers. Three, DDS, a fast bus for integrating intelligent machines, D to D. Four, the Advanced Message Queuing Protocol, AMQP, a queuing system designed to connect servers to each other, S to S. And five, Open Platform Communications Unified Architecture, known as OPC UA, a control plane technology that enables interoperability between devices.
Each of these protocols is widely adopted. There are at least 10 implementations of each. Confusion is understandable, because the high level positioning is similar. In fact, the first four all claim to be real time, publish subscribe IOT protocols that can connect thousands of things. Worse, those claims are true, depending on how you define real time, publish subscribe, and thing. Nonetheless, all five are very different indeed. They do not, in fact, overlap much at all. Moreover, they don't even fill strictly comparable roles.
For instance, OPC UA and DDS are best described as information systems that have a protocol. They are much more than just a way to send bits. The previously mentioned simple taxonomy frames the basic protocol use cases. Of course, it's not really that simple. For instance, the control plane represents some of the complexity in controlling and managing all of these connections. Many protocols cooperate in this region. Today's enterprise internet supports hundreds of protocols. The IOT will support hundreds more. It's important to understand the class of use that each of these important protocols addresses.
Thanks for listening to episode seven, part two of our eBook, Leading Applications & Architecture for the Industrial Internet of Things. Coming up, we'll have part three, which will contain the rest of the eBook. If you have any questions, please reach out. You can reach us at firstname.lastname@example.org, or over on social media. Thanks for listening to the Connext Podcast. Have a great day.