Lacey Trebaol: Hi, my name is Lacey Trebaol. Welcome to the Connext podcast. Today, I'm going to be sharing with you an audio version of our most popular e-book: Leading Applications and Architectures for the Industrial Internet of Things. The author of this e-book is our CEO, Stan Schneider. In order to keep our podcast length down to about 30 minutes, we'll be breaking up the reading of this e-book into three separate episodes. Let's get started with part one. We hope you enjoy episode six.
Section one, introduction to the IIoT. The Internet of Things, IoT, is the name given to the future of connected devices. There are two clear subsets. The Consumer IoT includes wearable computers, smart household devices, and networked appliances. The Industrial IoT, or IIoT, includes networked smart power, manufacturing, medical, and transportation.
Technologically, the Consumer IoT and the Industrial IoT are more different than they are similar. The Consumer IoT attracts more attention because it is more understandable to most people. Consumer systems typically connect only a few points, for instance, a watch or thermostat to the cloud. Reliability is not usually critical. Most systems are greenfield, meaning there is no existing infrastructure or distributed design that must be considered.
There are many exciting new applications that will change daily life. However, the Consumer IoT is mostly a natural evolution of connectivity from human-operated computers to automated things that surround humans.
While it will grow more slowly than the Consumer IoT, the IIoT will eventually have a much larger economic impact. The IIoT will bring entirely new infrastructures to our most critical and impactful societal systems. The opportunity to build truly intelligent distributed machines that can greatly improve function and efficiency across virtually all industries is indisputable. The IIoT is the strategic future of most large companies, even traditional industrial manufacturers and infrastructure providers. The dawn of a new age is clear.
Unlike connecting consumer devices, the IIoT will control expensive, mission-critical systems. Thus, the requirements are very different. Reliability is often a huge challenge. The consequences of a security breach are vastly more profound for the power grid than for a home thermostat. Existing industrial systems are already networked in some fashion, and interfacing with these legacy brownfield designs is a key blocking factor. Plus, unlike consumer devices that are mostly connected on small networks, industrial plants, electrical systems or transportation grids will encompass many thousands or millions of interconnected points.
Building a technology stack for any one of these applications is a challenge. However, the real power is a single architecture that can span sensor to cloud, interoperate between vendors, and span industries. The challenge is to evolve from today’s mashup of special purpose standards and technologies to a fast, secure, interoperable future.
In the long term, there is an even larger opportunity. The future of the IIoT will include enterprise class platforms that guarantee real time delivery across enterprises and metro or continental areas. This will become a new utility that enables reliable distributed systems. This utility will support 21st century infrastructure, like intelligent transportation with autonomous vehicles and traffic control, smart grids that integrate distributed energy resources, smart healthcare systems that assist care teams and safe flying robot air traffic control systems.
This utility will be as profound as the cell phone network, GPS or the internet itself. There are many consortia of companies targeting the IIoT. The largest and fastest growing is the Industrial Internet Consortium, IIC. The IIC was founded in 2014 by global industrial leaders GE, Intel, Cisco, AT&T and IBM.
As of this writing in 2016, it includes over 250 members. The German government, along with several large German manufacturers, has an active effort called Industry 4.0. There is also a smaller startup consortium called the OpenFog Consortium. Of these, the IIC is by far the broadest. It addresses end-to-end designs in all industries. Industry 4.0 is focused only on manufacturing, and OpenFog targets, quote, "intelligence at the edge," meaning the movement of powerful, elastic computing out of data centers into neighborhoods and consumer premises. However, all share many common members and goals. They are working together in many ways.
Because of its size and growth, the IIC gets by far the most attention. The goal of the IIC is to develop and test an architecture that will span all industries. Just as Ethernet, Linux and the internet itself grew as general purpose technologies that pushed out their special purpose predecessors, the IIC will build a general purpose industrial internet architecture that can build and connect systems such as transportation, medical, power, factory, industrial controls and others.
The IIC’s unique and powerful combination of leaders from both government and industry gives it the necessary platform to make this huge impact. The IIC was the first to create a venue for users across industries with similar challenges. This actually created the IIoT as a true market category that changed the landscape dramatically. Suddenly, hundreds of companies are deciding their strategy for this new direction. Gartner, the large analyst firm, predicts that the smart machine era will be the most disruptive in the history of IT. That disruption will be led by smart distributed infrastructure called the IIoT.
Section two, some example IIoT applications. The author of this e-book is the CEO of Real-Time Innovations, Stan Schneider. RTI is the largest vendor of embedded middleware and the leading vendor of middleware compliant with the Data Distribution Service, DDS, standard.
All applications in this section are operational RTI Connext DDS systems. The DDS standard is detailed in later sections. The applications are presented first to provide background and highlight the breadth of the IIoT challenge. These are only a few of the nearly 1,000 applications. Further examples can be found at www.rti.com.
Example one, connected Medical Devices for Patient Safety. 30 years ago, healthcare technologists realized a simple truth: monitoring patients improves outcomes. That epiphany spawned the dozens of devices that populate today’s hospital rooms: pulse oximeters, multi-parameter monitors, ECG monitors, Holter monitors, and more.
Over the ensuing years, technology and intelligent algorithms improved many other medical devices, from infusion pumps, which deliver IV drugs, to ventilators. Healthcare is much better today because of these advances. However, hospital error is still a leading cause of death. In fact, the Institute of Medicine named it the third leading cause of death after heart disease and cancer. Thousands and thousands of errors occur in hospitals every day. Many of these errors are caused by false alarms, slow responses, and inaccurate treatment delivery.
Today, a new technology disruption is spreading through patient care: intelligent, distributed medical systems. By networking devices, alarms can become smart, only sounding when multiple devices indicate errant physiological parameters. By connecting measurements to treatment, smart drug delivery systems can react to patient conditions much faster and more reliably than busy hospital staff. By tracking patients around the hospital and connecting them to cloud resources, efficiency of care can be dramatically improved. The advent of true IoT networking in healthcare will save costs and lives.
The Integrated Clinical Environment, ICE. Researchers and device developers are making quick progress on medical device connectivity. The Integrated Clinical Environment standard, ASTM F2761, is one key effort to build such a connected system. ICE combines standards. It takes data definitions and nomenclature from the IEEE 11073 standard for health informatics. It specifies communication via the Data Distribution Service, DDS, standard.
ICE then defines control, data logging and supervisory functionality via the DDS standard to create a connected, intelligent substrate for smart connected clinical systems. As with most standards, large organizations may take time to adapt to a standards-driven environment. ICE nonetheless represents an excellent example of how to build smarter systems.
Patient monitoring. Modern hospitals use hundreds of types of devices for patient care and monitoring. These systems must work in a large hospital environment. Integrating whole hospitals with thousands of devices presents challenges for scalability, performance and data discovery. To validate the design for GE Healthcare, RTI built a simulation to prove that DDS could handle a thousand-bed hospital with over 100,000 devices. The simulation ran in RTI’s networking lab. It sent realistic data flows between hundreds of applications, instances of RTI’s Connector product.
RTI services teams developed a matrix, which was an Excel spreadsheet, to configure Connector and send the mix of data types and rates expected from real devices. RTI developed an automated test harness to deploy these applications across the lab’s test computers and collect the results. A graph of part of the simulation topology is presented in the following. RTI’s test harness collected data flow rates and loading across this topology.
The system had realistic scale, performance and discovery. Since it is important to communicate real-time waveforms and video, the potential network-wide data flow is large. However, the need is sparse: most data is needed at only a few points. As explained in the succeeding text, DDS can propagate specifications to the senders to indicate exactly what each receiver needs. The senders then filter the information to send only what is needed, thereby eliminating wasted bandwidth.
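This sender-side filtering idea can be sketched in a few lines. The sketch below is a toy illustration, not the Connext DDS API: the class and method names are hypothetical, standing in for DDS content-filtered topics, where each receiver's filter expression propagates to the writer so non-matching samples never cross the network.

```python
# Toy sketch of publisher-side content filtering, the idea behind DDS
# content-filtered topics. NOT the Connext API; all names are hypothetical.

class FilteringPublisher:
    """Delivers each sample only to subscribers whose filter accepts it."""

    def __init__(self):
        self._subscribers = []  # list of (filter_fn, inbox) pairs

    def subscribe(self, filter_fn):
        """Register a filter; returns the inbox that receives matching samples."""
        inbox = []
        self._subscribers.append((filter_fn, inbox))
        return inbox

    def publish(self, sample):
        """Send the sample only where the receiver's filter matches,
        so bandwidth is never spent on uninterested receivers."""
        for filter_fn, inbox in self._subscribers:
            if filter_fn(sample):
                inbox.append(sample)

# Example: an alarm station only wants heart rates above 120 bpm,
# while a logging station wants every sample.
pub = FilteringPublisher()
alarms = pub.subscribe(lambda s: s["heart_rate"] > 120)
log = pub.subscribe(lambda s: True)

pub.publish({"patient": "bed-12", "heart_rate": 80})
pub.publish({"patient": "bed-07", "heart_rate": 135})
```

In real DDS the filter is a SQL-like expression attached to the topic, and the middleware decides whether to evaluate it at the writer or the reader; the toy above only shows the writer-side case.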
Discovering data sources is also critical, since 62% of hospital patients move every day. So, the system also tested transitions between network locations. When deployed, the new system will ease patient tracking. It will coordinate devices in each room and connect rooms into an integrated whole hospital. Information will flow easily and securely to cloud-based Electronic Health Records, EHR, databases. The hospital of the future will become an intelligent, distributed machine in the IIoT.
Example two, Microgrid Power Systems. The North American electric power grid has been described as the biggest machine in the world. It was designed and incrementally deployed based on centralized power generation, transmission and distribution concepts.
Times have changed. Instead of large, centralized power plants burning fossil fuels that drive spinning masses, Distributed Energy Resources, DERs, have emerged as decentralized, local alternatives to bulk power. DERs are typically clean energy solutions, solar, wind, thermal, that take advantage of local environmental and market conditions to manage the local generation, storage or consumption of electricity.
The DER time challenge. Most renewable energy sources are not reliable producers. Solar and wind can change their power output very quickly. Unfortunately, that dynamic behavior is not compatible with today’s grid. Today’s grid uses local power substations to convert high voltage power to neighborhood distribution voltage levels. Those stations estimate power needs and report back to the utility. The utility then needs up to 15 minutes to spin up, or down, a centralized generation plant to match the estimate.
So, since a solar array can lose power in a matter of seconds when fast-moving clouds pass over, the grid cannot react. An alternate source has to be available and ready to pick up the load immediately. If there isn’t sufficient backup, the voltage on the grid can drop and the grid can fail. The only way to provide that backup today is to provide spinning reserve capacity, meaning the generators must use more energy than the grid needs.
As solar energy resources grow in a utility’s service area, the utility has to have more excess spinning reserve ready as backup. While the sun is shining, power may be flowing from these distributed solar arrays back to the grid. However, the fossil fuel generators need to be running and spun up sufficiently to quickly take over if the solar arrays stop producing.
So, with every solar array pushing power onto the grid, there is an equivalent fossil fuel generator spinning in the background to take over. Thus, little fossil fuel is saved. Even worse, driving the generators without load makes them overheat and even prematurely wears out their bearings. To fix this, the utility needs 15 to 30 minutes of extra time to ramp up the generators. Then, they would not need to have the spinning reserve. The only way to provide the time needed is to implement energy storage or load reduction.
Microgrid architecture. Microgrids are the leading way to provide that time. Microgrids combine intermittent energy sources, energy storage systems like batteries, and some local control capability. This allows the microgrid to smooth out the changes in DER power. A microgrid can even island itself from the main power grid and run autonomously.
Microgrids usually encompass a well defined, relatively small geographic region. College campuses have been proving grounds for this technology, as have military bases. A microgrid can respond rapidly and locally to a loss of power from solar arrays or local wind turbines using backup energy sources like batteries.
Many proof of concept microgrid projects are active. They range from small demos to utility class pilot testbeds. All seek to incorporate energy storage and load reduction techniques into the grid. Two key capabilities for microgrids are intelligent control at the edge of the grid and peer-to-peer, high-performance communications for local autonomy.
With these, a local battery energy storage system can receive a message in milliseconds from the solar arrays when backup energy is needed. The local controller on the battery can then quickly switch the battery from charge to source mode. This keeps the local energy consumers powered and gives the utility time to spin up central power resources as needed.
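The control decision described above, switching the battery from charge to source mode when local generation can no longer cover the load, can be sketched as a simple rule. This is an illustrative simplification, not an OpenFMB or Connext interface; the function name and parameters are assumptions for the example.

```python
# Toy sketch of a microgrid battery controller's mode decision: switch
# the battery from charge to source mode when solar output cannot cover
# the local load. Hypothetical names; real controllers also manage state
# of charge, ramp rates, and coordination messages over the grid bus.

def battery_mode(solar_kw, load_kw, threshold_kw=0.0):
    """Return 'source' when solar falls short of the load, else 'charge'."""
    deficit = load_kw - solar_kw
    return "source" if deficit > threshold_kw else "charge"

# A cloud passes over: solar drops from 500 kW to 50 kW in seconds.
print(battery_mode(solar_kw=500, load_kw=400))  # surplus solar: charge
print(battery_mode(solar_kw=50, load_kw=400))   # deficit: battery sources power
```

The point of the milliseconds-scale messaging in the text is that this decision can be made locally, long before the utility's 15-minute central ramp-up completes.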
The OpenFMB Framework is the first field system addressing the need for reliable, safe, upgradeable distributed intelligence on the grid. OpenFMB directly addresses the decentralization issue facing utilities and regulators by leveraging existing electricity information models, such as IEC 61968/61970, IEC 61850, MultiSpeak and SEP 2, and creating a data centric bus on the grid to allow devices to talk directly to one another.
The initial use cases targeted by OpenFMB are microgrid focused, and the OpenFMB framework closely adheres to the IIC’s Industrial Internet Reference Architecture, IIRA. The OpenFMB team held a major demonstration in February 2016 with 25 different companies. Many parts of the implementation used RTI’s Connext DDS platform.
DDS interfaces were developed for the optimization engine, load simulators, the Point of Common Coupling, PCC, transition logic and other required simulators to drive the demonstration. To test non-proprietary interoperability, the team built a cross-platform solution with multiple operating system targets and CPU architectures. The demonstration proved that IIoT interoperability is a practical, achievable path for fielded utility devices and systems.
Example three, large scale SCADA control. NASA Kennedy Space Center’s launch control system is the largest SCADA, Supervisory Control And Data Acquisition system in the world. With over 400,000 control points, it connects together all of the equipment needed to monitor and prep the rocket systems.
Before launch, it pumps rocket fuels and gases, charges all electrical systems, and runs extensive tests. During launch, a very tightly controlled sequence enables main rocket engines, charges and arms all the attitude thrusters, and monitors thousands of different values that make up a modern space system. It must also adapt to the various mission payloads, some of which need special preparation and monitoring for launch.
The launch control system has very tight and unique communications requirements. The system is distributed over a large area and the control room. It must be secure. Data flow is tidal: activity cycles through the surge of preparation, spikes during the actual launch and then ebbs afterward. During the most critical few seconds, it sends hundreds of thousands of messages per second.
Connext DDS intelligently batches updates from thousands of sensors, reducing traffic dramatically. Everything must be stored for later analysis. All information is viewable, after down-sampling, on HMI stations in the control room. After launch, all the data must be available for replay, both to analyze the launch and to debug future modifications in simulation.
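The batching idea mentioned above can be sketched with a toy writer that coalesces many small sensor updates into one network message, amortizing per-message overhead. This is an illustration of the concept only, not the Connext DDS batching feature itself; the class and parameter names are hypothetical.

```python
# Toy sketch of update batching: coalesce many small sensor updates into
# one network message to cut per-message overhead. Hypothetical names;
# real Connext batching is configured via QoS and also flushes on timeouts.

class BatchingWriter:
    """Accumulates updates and sends them in fixed-size batches."""

    def __init__(self, batch_size, send_fn):
        self._batch_size = batch_size
        self._send_fn = send_fn   # invoked once per network message
        self._pending = []

    def write(self, update):
        self._pending.append(update)
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self):
        """Send any pending updates as a single message."""
        if self._pending:
            self._send_fn(list(self._pending))
            self._pending.clear()

messages = []  # stands in for the network; each entry is one message
writer = BatchingWriter(batch_size=100, send_fn=messages.append)
for i in range(1000):
    writer.write({"sensor": i % 250, "value": i})
writer.flush()
# 1,000 sensor updates crossed the "network" in only 10 messages
```

At launch-control message rates, this kind of coalescing is what keeps hundreds of thousands of updates per second from becoming hundreds of thousands of packets per second.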
Example four, autonomy. RTI was founded by researchers at the Stanford Aerospace Robotics Laboratory, ARL. The ARL studies complex electromechanical systems, especially those with increasing levels of autonomy.
Unmanned air and defense vehicles have long relied on DDS for deployments on land, in the air and underwater. DDS is a key technology in many open architecture initiatives, including the Future Airborne Capability Environment, FACE, for avionics, the Unmanned Air Systems, UAS, Control Segment architecture, UCS, for UAS ground stations, and the Generic Vehicle Architecture, GVA, for military ground vehicles.
UAS have complex communication requirements, with flight critical components distributed across the air and ground segments. Furthermore, to operate in the U.S. National Airspace System, NAS, the system must be certified to the same safety standards as civil aircraft. RTI recently announced a version of DDS with full DO-178C Level A safety certification evidence. It was developed to meet the needs of Ground Based Sense and Avoid, GBSAA, systems.
This technology is now also being applied aggressively to autonomous cars for consumer use. This market is perhaps the most disruptive of the IIoT applications. Because the ground is a much more complex environment, autonomous cars face even greater challenges than air systems. They must coordinate navigation, traffic analysis, collision detection and avoidance, high definition mapping, lane departure tracking, image and sensor processing, and more.
Safety certification for the entire system as a whole is prohibitively expensive. Dividing the system into modules and certifying them independently reduces the cost dramatically. Separation kernels are operating systems that provide guaranteed separation of tasks running on one processor. Separation middleware provides a similar function to applications that must communicate, whether they are running on one processor or in a distributed system. A clean, well controlled interface eases certification by enabling modules to work together.
Alright, well that's all for episode six, which is part one of the audio version of our e-book, Leading Applications and Architectures for the Industrial Internet of Things. Because this is an audio version of an e-book, you didn't actually get to see all of the amazing graphics and pictures that show the example systems and their architectures, so I would highly recommend that you head on over to rti.com/podcast and select episode six from the list of episodes available. At the bottom of that episode page you'll find a link to the e-book. Download the e-book and let us know what you think.
You can reach us at firstname.lastname@example.org, or over on social media. Also, I'd like to point out that our podcast is now up on iTunes, so head on over and you can subscribe today. If you're not an iPhone or iTunes user, you can also subscribe over on SoundCloud or via the RSS feed. Thanks for listening to the Connext podcast. Have a great day.