Autonomous Systems expert Bob Leigh kicks off the first episode of a five-part automotive podcast series. Take a peek under the hood and learn what developers are doing to address common challenges on the systems side of these vehicles.

In Episode 40 of The Connext Podcast:

  • [1:40] Current state of the autonomous vehicle technology market
  • [5:06] How does Connext DDS enable AI?
  • [6:08] How a layered databus works with autonomous systems 
  • [11:31] What is data fusion and how does Connext DDS enable it? 
  • [18:14] Most important thing for autonomous vehicle developers to consider 

Related Content:

  • [Blog] Autonomy Takes the Driver's Seat in Connext 6
  • [Case + Code] Accelerate Autonomous Car Development
  • [Datasheet] RTI Connext 6
  • [Datasheet] RTI in Autonomous Driving
  • [eBook] The Rise of the Robot Overlords: Clarifying the Industrial IoT
  • [Events] RTI at CES 2020: The Future of Autonomous Driving 
  • [Product Page] Connext 6
  • [Press Release] RTI Announces the First Connectivity Software for Highly Autonomous Systems
  • [Podcast] Connecting Autonomous Systems with NextDroid, Part 1
  • [Webinar] Learn the Simplest Way to Build Complex, Autonomous Systems
  • [Webinar] A Game-Changing Leap in Autonomous Vehicle Software Architecture 
  • [Whitepaper] The Secret Sauce of Autonomous Cars

Podcast Transcription:

Steven Onzo: Alright. Welcome to the podcast, Bob.

Bob Leigh: Thank you.

Steven Onzo: Really happy to have you on. There's a ton of things I want to cover today, especially since this interview is a little different from the others: it's the first episode of a mini-series we'll be doing on the IIoT in the automotive market. More specifically, how we're in this moment in time where high-level autonomous vehicles are transitioning from a concept to a reality. So, we're here today to talk about the technical side of how autonomous vehicles will become that reality. And for that to happen, there are a lot of things on the systems side that need to be addressed. With all of that being said, let's jump right in. Start by telling us a little bit about yourself and your role here at RTI.

Bob Leigh: Sure. I'm the Senior Director of Market Development for Autonomous Systems, and when we look at autonomous systems, we see a lot of commonality whether it's cars or trucks or autonomous boats or submarines - we're involved in all of those. But clearly the biggest market that we're looking at is automotive, just given its size and the amount of investment there. It's really a key market for RTI's commercial growth, and we're investing quite a bit in automotive. My role is to work with our sales teams and our product management teams, work with our partners and also with our strategic customers, trying to develop that market for RTI, position us well, help everybody succeed, and get us all pulling in the same direction, on the same page.

[1:40] Current state of the autonomous vehicle technology market

Steven Onzo: Right. As a way to introduce the technical side, can you start by giving us a brief overview of the autonomous vehicle technology market today? You know, automotive systems, subsystems, proprietary solutions at a very high level. How do these work, and where does connectivity and interoperability come into play?

Bob Leigh: Sure. That's a wide-ranging question - there's a lot of complexity built into it and lots of areas you could delve into. But if we try to simplify it a little bit, an autonomous system is really a robotic system, and it's doing three main things: sensing the environment, processing that input, and then acting on the environment. And that's essentially a cycle or a loop that's happening over and over and over again.

So if we want to break that down a little further: when we look at a self-driving car - or any kind of autonomous driving vehicle - it has a sensor package that's looking at the environment. The level of autonomy you're talking about, whether it's simple driver-assist technology or more complicated, highly or fully autonomous vehicles, will dictate the level of fidelity and how much data you're going to be collecting.

So, how many LIDAR sensors, how many radar sensors, how many other points of input? That's your sense part. That data has to be rationalized - we have to get an understanding of what the world is from that sensor package, and we call that sensor fusion. Then there's the thinking: "Okay, what do I do with this information? Am I going to turn left? Am I going to go straight? Am I going to turn right? What's going on in the environment?" Analyzing the different factors like people or bicycles or cars, and then making decisions and planning - essentially deciding what the car is going to do - is the thinking part. And then the acting is going out and turning the wheel and pushing the brake and pushing the gas, actually taking some physical action in that environment, which then changes the environment, and the cycle starts all over again.
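As a rough mental model of that sense-think-act cycle, here is a minimal C++ sketch. The sense(), think() and act() functions and the 20 Hz pacing are hypothetical stand-ins for illustration, not RTI code; in a real vehicle each stage would be a separate distributed component rather than one loop.

    #include <chrono>
    #include <thread>

    // Hypothetical placeholder types and functions, for illustration only.
    struct WorldModel {};
    struct Plan {};

    WorldModel sense() { return {}; }             // stand-in for reading LIDAR, radar, cameras
    Plan think(const WorldModel&) { return {}; }  // stand-in for sensor fusion and planning
    void act(const Plan&) {}                      // stand-in for steering, braking, throttle

    int main() {
        using namespace std::chrono;
        while (true) {
            auto cycle_start = steady_clock::now();
            act(think(sense()));  // one pass: sense the world, think, act on it
            // Pace the loop at roughly 20 Hz (50 ms per cycle).
            std::this_thread::sleep_until(cycle_start + milliseconds(50));
        }
    }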

So that's the high level, and connectivity is really about connecting all these different things together. And when we say thinking, it's not necessarily one system thinking. We've got different levels of thinking: you're looking at what the environment looks like, but you're also checking, in a very simple way, that you're not going to run into anything, so the system is more reliable. And we also have thinking about how the engine is working. That's more traditional vehicle technology, in that it really doesn't have anything to do with autonomous driving, just: Is the car functioning properly?

But all of these systems still need to share information. If your engine isn't working, you're not going to be able to push the gas and go forward. They all have to be connected. So, when we're looking at everything in the car, we're talking about hundreds of ECUs - some that are very complex and have a lot of capacity to analyze, and some that are very simple.

And then we have sensors, and then we have actuators, and all these things need to communicate with each other. So it's really a very complex distributed system with many components, all in this very small, tight package. Connectivity is about connecting all of these things together. And then when you add things like V2X or V2I, connecting to the cloud and connecting to other systems, now you have external connections as well, which are also part of your connectivity solution.

[5:06] How does Connext DDS enable AI?

Steven Onzo: Right, so I think this is actually a good segue to bring Connext DDS into the conversation. How does Connext DDS help enable AI in these systems?

Bob Leigh: AI is discussed at the highest level in terms of the complexity of what's going on in the thinking part of the car. And what Connext DDS does is provide data. Simply put, it provides data to that system.

That's really what we're doing: providing data. And if you're talking with an AI developer, they're going to have lots of different types of data that they want. They're going to want a rich breadth and depth of data, because that allows them to analyze the environment better. And what DDS does is give access to all that data, hopefully in a normalized, common way. No matter what type of LIDAR you have or what type of radar you have, you're getting that data consistently in the same way, so your AI algorithm can process it.
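A minimal sketch of what that normalized data can look like in practice. It assumes a hypothetical LidarScan type defined in IDL and compiled with rtiddsgen; the type, the topic name, and the process() hand-off are illustrative, not from the podcast.

    // Hypothetical data model, normally written in IDL and compiled with
    // rtiddsgen so every vendor's LIDAR is published as the same type:
    //
    //   struct LidarScan {
    //       int32 sensor_id;        // which unit on the car
    //       sequence<float> ranges; // measured distances, in meters
    //   };

    #include <dds/dds.hpp>     // DDS Modern C++ API (e.g., RTI Connext)
    #include "LidarScan.hpp"   // header generated from the IDL above

    // Stand-in for handing samples to the perception/AI pipeline.
    void process(const LidarScan& scan) { /* feed the AI algorithm */ }

    int main() {
        dds::domain::DomainParticipant participant(0);  // in-vehicle domain
        dds::topic::Topic<LidarScan> topic(participant, "LidarScan");
        dds::sub::DataReader<LidarScan> reader(
            dds::sub::Subscriber(participant), topic);

        // No matter which vendor's LIDAR published it, every sample
        // arrives as the same normalized type.
        while (true) {
            for (const auto& sample : reader.take()) {
                if (sample.info().valid()) {
                    process(sample.data());
                }
            }
        }
    }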

[6:08] How a layered databus works with autonomous systems 

Steven Onzo: For our automotive listeners, can you tell them briefly about the layered databus, and how it works with autonomous vehicle systems?

Bob Leigh: The layered databus is a concept and a term that was developed by the Industrial Internet Consortium - the IIoT (Industrial Internet of Things) consortium that RTI is a member of. We helped write a few of its documents and specifications, and one of the things that came out of that work, collaborating with other companies, was this concept of a layered databus. The layered databus allows you to identify different planes of either control or information within a system - so, within an autonomous vehicle, planes that may have different conditions or parameters around them.

An AI might be concerned with all the sensor data that's coming in and what control it has over the system. And it needs that data, let's say, 20 times per second, it has to process it with a certain speed, and it needs a certain bandwidth. And there are settings that we call Quality of Service that determine how reliable that data should be: Do I retransmit the data if I miss it? How much history do I need in that data?

That information would define this higher-level layer in the databus that the AI needs. But there are other layers - other types of data necessary at a lower layer. When we're talking about things like engine control, the engine needs to know that there's fuel coming, it needs to know how far the gas pedal is depressed - it needs to provide that information. Perhaps some of that information about the engine goes to an infotainment system or to an instrument cluster, and that may have very different parameters in terms of how the data is consumed and what type of data is needed than the AI system. And if we go the other way, there is data that's sent back to the cloud - not all the data, but some of it. It may not have a reliability constraint; we may not want to retransmit it. We may have bandwidth constraints we want to consider. And so, we may need a different set of rules for how we transmit data back to the cloud.

This layered databus concept allows us to use the same standard and usually consistent or compatible data models. So we're really talking about the same types of data, but with different conditions and different rules about how that data is managed for different parts of the system. It gives us a very standardized way to communicate between these different systems without needing new protocols and gateways and other bridges. We just have this concept of a layered databus, which naturally allows you to define these different conditions for the use of data.
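To make those Quality of Service rules concrete, here is a hedged sketch using the standard DDS C++ QoS policies. The specific values - a 50 ms deadline for 20 Hz data, a 10-sample history - are illustrative examples, not settings from the podcast.

    #include <dds/dds.hpp>  // DDS Modern C++ API (e.g., RTI Connext)

    int main() {
        // Layer consumed by the AI: reliable delivery, deep history,
        // data expected 20 times per second (a 50 ms deadline).
        dds::sub::qos::DataReaderQos ai_layer_qos;
        ai_layer_qos << dds::core::policy::Reliability::Reliable()    // retransmit missed samples
                     << dds::core::policy::History::KeepLast(10)      // keep the last 10 samples
                     << dds::core::policy::Deadline(
                            dds::core::Duration::from_millisecs(50)); // expect updates at 20 Hz

        // Layer sent up to the cloud: best effort, latest value only,
        // so constrained bandwidth is never spent on retransmissions.
        dds::pub::qos::DataWriterQos cloud_layer_qos;
        cloud_layer_qos << dds::core::policy::Reliability::BestEffort()
                        << dds::core::policy::History::KeepLast(1);
        return 0;
    }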

Steven Onzo: Actually in a previous podcast I talked to a fellow RTI employee, Brett Murphy, and he was giving this use case about how these big wind turbines use databuses, and there's a databus running in the actual wind turbine and then there's a databus running the farm of wind turbines and maybe another one running the control center. Could that use case kind of be applicable to autonomous vehicles eventually?

Bob Leigh: Eventually, yeah. The difference with autonomous vehicles versus wind turbines is that one company is building the wind turbines and the entire system, and they can impose their architecture on the whole thing. When we're talking about autonomous vehicles, even within the car it's often not just one company, so the ecosystem is much more complex - maybe not as clean, not as elegant, and certainly not consistent from company to company and project to project. But we can still apply the concept. The simplest way to look at it is that there's the traditional CAN bus, and you could call that a layer in the databus. It's where the hardware, the communication system, all the low-level ECUs communicate with each other - it's a safety-critical system, and it does things like apply the brakes, turn on the lights and open the doors.

Then there's a layer up from that where we're gathering data for the system. It's typically still a safety-critical domain, where we may need safety certification, and this is where Connext DDS can start to play. This will be some of that CAN bus data, but also additional sensor data that's fed into things like perception and planning algorithms.

Then there'll be another layer within the car that would be more for the infotainment or instrument cluster. It may or may not have any safety criticality to it, but it's going to take a smaller subset of that lower-level data, because you don't need data as frequently when you're displaying it on a screen for a person as you do when you're feeding an AI planning algorithm. So, we've already got three layers here, and then we could add a fourth layer, which is connecting regionally between different cars within your fleet, for example.

If you're managing a fleet of cars - autonomous vehicles especially - you want to know where they are, or which ones require gas or a charge within the next 20 minutes or half-hour, and then you may give some special instructions to go fill up or go to the closest station. Or you want to know what sort of event a car is having: Is there a car that's having trouble, either mechanical issues or, say, one that can't navigate around a pothole? That could be the next layer up in the databus.
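One common way to realize those layers with DDS - sketched here under the assumption that each layer maps to its own DDS domain, with arbitrary domain IDs - is to isolate the layers by domain and bridge only selected topics between them with a gateway such as RTI Routing Service.

    #include <dds/dds.hpp>  // DDS Modern C++ API (e.g., RTI Connext)

    int main() {
        // Each layer of the databus gets its own isolated DDS domain.
        dds::domain::DomainParticipant vehicle_layer(0);  // safety-critical sensing/planning data
        dds::domain::DomainParticipant cockpit_layer(1);  // infotainment and instrument cluster
        dds::domain::DomainParticipant fleet_layer(2);    // regional fleet management

        // Applications only discover peers in their own domain. A gateway
        // (for example, RTI Routing Service) forwards just the subset of
        // topics that should cross layers - say, vehicle status flowing
        // up to the fleet layer at a reduced rate.
        return 0;
    }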

[11:31] What is data fusion and how does Connext DDS enable it? 

Steven Onzo: I want to talk a little bit about the sensors that are attached to autonomous vehicles. Autonomous vehicles have several sensors that produce data - from radar, LIDAR, cameras, etc. But individually, these sensors don't have enough information to realize what's happening around the vehicle until they can all talk to each other and share data, in turn enabling the vehicle to perceive a clear, coherent picture of its environment. Can you talk about the data fusion that's happening here - what it means, how it supplements these applications, and how Connext DDS enables it?

Bob Leigh: Sure. So, sensor fusion is one of the really hard problems in autonomous system development, and there are some really impressive algorithms out there. We've seen things like navigating left turns really well, and identifying people and bicycles and all these things - and all of that is because of sensor fusion. But it's still really, really hard, because there's a never-ending long tail of use cases. You could work for years and years, and there's always going to be a condition that you've never come across. There are too many permutations in the world.

So sensor fusion is collecting all the data that you have. That's going to be LIDAR data, which gives you very precise distance information for everything in your environment; radar, which can do things like see through fog and very reliably tell you whether there's something in front of you and how far away it is; and then camera data, which gives you a 2D picture of the world that you need to interpret. And then we may add things like GPS and other inputs into that picture.

All of them are going to have a different set of information about what's out there. Some of them will see trees, some of them will not see trees. Most of them will see a car, but across all your sensors you may count five or six cars. How do you know that the car in the camera is the same as the car in the LIDAR, which is the same as the car in the radar? So, sensor fusion is about combining all of that information, figuring out what the objects are, ensuring that you're counting each object only once across all of your data inputs, and then creating some sort of labeled model of what's out there in the environment - so that you have as accurate a picture as possible of what's there. And any planning or any algorithm really depends on how accurate your information about the environment is.

And we've seen things in the news where the truck wasn't visible to the camera because it was the same color as the sky, and so we couldn't see that truck. Well, could LIDAR see the truck? Could radar see the truck? And could you then figure out that there's a truck there, even though the camera can't see it because it's washed out by the lighting conditions? Those are the things that sensor fusion does. And where Connext DDS helps is giving developers the ability to collect and process that information in a consistent way, and also to reliably get information from all of the sensors.

If you're developing a sensor fusion algorithm, the last thing you want to worry about is: Am I getting my data reliably from the LIDAR sensor? Am I getting my data reliably from the cameras? What happens if the network drops a frame? What do I do? That's not really part of a sensor fusion algorithm, but as you move into a production system, it's a critical consideration for building a reliable system. So what we do is make sure that data gets there reliably, gets there in a consistent way that you can expect within your algorithm, and is normalized across all the different types of sensors where possible - and then you have a common data model that everybody uses. This won't help you create a better sensor fusion algorithm, but once you've created one, it'll help you create a more reliable and robust sensor fusion system.
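Here is a minimal sketch of the communication side of a fusion node under those guarantees. The types (LidarScan, RadarTrack, CameraFrame, ObjectList) and the fuse() step are hypothetical - as if generated from a shared IDL data model - and only the delivery plumbing is DDS; the fusion logic itself is the hard part Bob describes.

    #include <dds/dds.hpp>      // DDS Modern C++ API (e.g., RTI Connext)
    #include "SensorTypes.hpp"  // hypothetical types generated from a shared IDL model

    // Stub for the actual fusion algorithm. DDS only guarantees its
    // inputs arrive reliably and in a known format.
    ObjectList fuse(dds::sub::LoanedSamples<LidarScan> lidar,
                    dds::sub::LoanedSamples<RadarTrack> radar,
                    dds::sub::LoanedSamples<CameraFrame> camera) {
        return ObjectList();  // real code would match and label objects here
    }

    int main() {
        dds::domain::DomainParticipant participant(0);
        dds::sub::Subscriber subscriber(participant);
        dds::pub::Publisher publisher(participant);

        // Reliable readers: the middleware retransmits dropped samples, so
        // the algorithm never has to ask "what if the network dropped a frame?"
        dds::sub::qos::DataReaderQos qos;
        qos << dds::core::policy::Reliability::Reliable()
            << dds::core::policy::History::KeepLast(5);

        dds::sub::DataReader<LidarScan> lidar(
            subscriber, dds::topic::Topic<LidarScan>(participant, "LidarScan"), qos);
        dds::sub::DataReader<RadarTrack> radar(
            subscriber, dds::topic::Topic<RadarTrack>(participant, "RadarTrack"), qos);
        dds::sub::DataReader<CameraFrame> camera(
            subscriber, dds::topic::Topic<CameraFrame>(participant, "CameraFrame"), qos);

        // The fused, labeled object list: each real-world object counted once.
        dds::pub::DataWriter<ObjectList> objects(
            publisher, dds::topic::Topic<ObjectList>(participant, "FusedObjects"));

        while (true) {
            objects.write(fuse(lidar.take(), radar.take(), camera.take()));
        }
    }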

Steven Onzo: I think it's interesting that developers are designing these systems for a market that hasn't been clearly defined yet, which I'm sure can make it difficult to plan how your system will run over time. So how are autonomous vehicle companies considering the future of their systems as the autonomous vehicle market evolves?

Bob Leigh: I don't think there's a consistent answer to that. At one level, this is a scramble for IP generation: Who's got the best IP? Some companies are investing in getting their product to market, and there's a level of IP in that - making the whole system work, making it reliable, and being able to sell it. Others are looking at very specific parts of the technology. Like, "I'm going to build something as narrow as the LIDAR, or, say, the sensor fusion algorithm, and build a really good one." Or, "I'm just going to build a package that does the sensing and the thinking, and then I'll sell it to someone else to install in an actual car, and they'll integrate the acting part and make the system work."

With all the competition and all the investment going on, it's mostly chaos, I'd say, in how companies are approaching the future - trying to leverage their strengths and get as much traction as they can. But at the end of the day, what they're trying to do is get to market first with a reliable technology, in a business that makes money and allows them to scale and win the day.

And I think we've taken a step back recently from trying to achieve full autonomy. That's a change, perhaps in the last year: we're looking at how we can practically apply this technology, maybe with a human still in the loop, or with high levels of autonomy in constrained conditions, like within certain cities or on certain routes. And we're seeing a real push for that within things like trucking and mining, because the problems are a little simpler and there's real value to be had from deploying these technologies in more industrial applications that have real problems to solve - compared to the general passenger car market, which is really an arms race at this point. And nobody knows if we're going to have the promise of autonomous vehicles in two years or 20 years. It all depends on how you define it.

[18:14] Most important thing for autonomous vehicle developers to consider 

Steven Onzo: Well, as we wrap up here, I do have one last question for you, Bob. In your opinion, what is the most important thing for developers who are building autonomous vehicles to know about autonomous vehicle development?

Bob Leigh: Well, I'm not going to even try to pretend to tell developers how to build an autonomous vehicle, because frankly we don't know how to do that - that's not our strength. So go and build your autonomous vehicle. But when you want something to work reliably, when you want to get it to production, that's where we can help. And I think what we see most often is people not understanding the jump in complexity between a system that's running under controlled test conditions and a system that needs to scale, go out into the market, and function with all the new test cases the general public will throw at it that you never thought of.

And we can help with that, because we can provide a very reliable foundation for you to build your software on. And we've been doing it for many years - working with autonomous systems in areas like the military and other applications long before autonomy was a buzzword in the automotive industry. So what I'd say is: don't try to do everything yourself. Even when it comes to software, there is expertise out there, like from RTI, that you can leverage to take care of some of the difficult but maybe less fun problems - infrastructure and communication.

Steven Onzo: That's good advice. Bob, I want to thank you for coming back on the podcast for a second time and discussing this cool and exciting time when autonomous vehicles are indeed going from concept to reality. So thanks again.

Bob Leigh: Thank you.
