[0:45] Meet Vince and his
[5:50] and [30:30] Reliability, security and safety - they’re not the same and you have to make tradeoffs
[8:25] Telerobotics and the risks of increasing levels of autonomy
[9:25] Security cannot be a bolt-on solution
[10:30] What is ROS and what’s new in ROS 2?
[31:40] Challenges and consequences of implementing security in real-time systems
Lacey Trebaol: Hi, my name is Lacey Trebaol, and welcome to The Connext Podcast. Today, I'm really excited to share with you episode one, where I'm interviewing Vince Dilouffo, who is an RTI customer and a Connext user, and also a Ph.D. candidate at Worcester Polytechnic Institute (WPI) in Massachusetts. His Ph.D. thesis focuses on robotics and security. This is part one of a two-part interview. In this portion, we'll be discussing ROS, ROS 2.0, and some of the challenges associated with security and robotics.
Bill Michaelson: I'm Professor Bill Michaelson, professor of electrical engineering at Worcester Polytechnic Institute, and also involved with our robotics engineering program.
Lacey Trebaol: Great.
Berk Sunar: I'm Professor Berk Sunar, professor of electrical and computer engineering at Worcester Polytechnic Institute, and my specialty is cyber security.
Vince Dilouffo: I'm Vince Dilouffo. I'm a Ph.D student here at WPI.
Lacey Trebaol: And you're the star of this interview.
Vince Dilouffo: I'm nervous.
Lacey Trebaol: ‘Cause that's what they said on the phone.
Vince Dilouffo: That's what they said on the phone.
Lacey Trebaol: This is all about you and your work.
Vince Dilouffo: Yeah.
Lacey Trebaol: Okay, great. One of the areas that I wanted to start with, there are people, and this is crazy, I know, who don't know much about ROS beyond, oh yeah, that's what people who play with robots use. Right? So if we could talk a little bit about that first, I'd like to start there. And then another quick thing at a, you know, almost a theoretical level about, like, what is a robot, and then-
Bill Michaelson: Oh, that's a really touchy
Lacey Trebaol: Oh, I know it is.
Vince Dilouffo: I'm gonna let him do that one.
Lacey Trebaol: And that's a good thing, see? And also talking about, like, what do you mean by security, just-
Vince Dilouffo: If this is security, when I'm done with it I burn it.
Lacey Trebaol: And he's referencing a notepad. Yes.
Bill Michaelson: I can give you our party line definition.
Lacey Trebaol: Oh, there's a party line. I'm interested.
Bill Michaelson: Robotics systems, obviously, can be a lot of different things. But fundamentally, we view a robot, very broadly, as something that can sense its environment, can make decisions about that environment, and then can act within that environment. So, you know, sense, control, and act are really three characteristics that separate robots from your laptop that's running Word or something. So, that takes a lot of different forms. The form that we've been working with mostly are mobile robots. We've got a project that we're building a robotic sailboat that we basically tell it what we want it to do and it has to figure out based on wind speed, wind direction, currents, et cetera, et cetera, how to do it.
Lacey Trebaol: So it forms its own plan.
Bill Michaelson: So it forms its own plan based on decision making on board the robot. So it's doing those sense, control, act kinds of operations. Now when we get into things like automatic door openers, you could still argue that they fulfill that sense, control, and act, but we probably wouldn't call it a robot. So at the base of this, there's still this issue of, a robot is something that when I see it I know it. So it's a fuzzy definition, but generally speaking it's a system that has those three fundamental abilities, whether it be mobile, whether it be fixed in a manufacturing situation.
Lacey Trebaol: So what are some examples of robots where you've seen them and said, "That's not a robot"?
Bill Michaelson: Probably the garage door opener would be a great example. It's like, yeah it senses, it can detect a person approaching. It can open the door as a result of that. It has some decision making that if the door is occluded it can stop opening it or stop closing it, perhaps more importantly. But I probably wouldn't classify that as a robot.
Lacey Trebaol: Understood. I don't think I would either. I'd never considered the garage door opener as one, so I think that's actually a very good example of not a robot. Okay. That's a very good working definition. Thank you. So, security.
Berk Sunar: Alright, security. I got the tough one. Security is rather difficult to define unless we bring in an adversary. And it totally makes sense if you have an adversary, a malicious party in the system. As soon as you have a malicious party, you can define malicious acts, actions such as diversion of the system, obstruction of the operation, or just getting information out of the system, like secret information. And so in any setting where you have a malicious party, you can define security as all those principles we would like to assert to keep the malicious party from gaining access, right? So across all of the processes, the technology, and whatever layer of implementation you want to include, you can have security defined broadly. So what is not under the security umbrella? If we take the malicious entity out of the system, then, typically, reliability is falsely considered under security. It's not really a part of security.
Lacey Trebaol: Right. Is there ... You mean when people want like the five nines up time for reliability and-
Berk Sunar: For example-
Lacey Trebaol: And then they say it's a security issue because if it's not up then-
Berk Sunar: It's a safety issue, it's not security issue.
Lacey Trebaol: Right, exactly. Okay.
Berk Sunar: Exactly. So, and just to make that clear, so any malicious entity being in the system makes things a lot more difficult, right? I mean, gaining reliability against, let's say, non-adversarial, natural hazards is rather easy compared to security objectives. I'm making a broad statement here, I know, but that's really the case. Because then you have an ... It's like having an intelligent actor, right, when you have a malicious entity, so you cannot compare to natural events, which are quite predictable and they're not intelligent. They don't take the shortest route. Right? They don't ... They're not aware of your algorithm running, for example, usually. These are like natural random errors, as opposed to the adversary playing attacks and sometimes even adaptive ones, second-guessing you in the process, right?
Lacey Trebaol: Right. They're strategic and adaptive.
Berk Sunar: Exactly. Strategic, adaptive, and so on.
Lacey Trebaol: Okay. So in the context of robotics ...
Berk Sunar: In the context of robotics, it becomes even more important, because now you have moving pieces and people involved. And more broadly, especially in the cyber-physical realm, which is what we're calling it now, we have digital and physical devices interacting. There's also a human at the center. Think about healthcare robotics. Think about transportation. You have a lot at stake there, and any mal-intentioned adversary can cause severe harm. It's not just your computer crashing. It could be your car crashing.
Lacey Trebaol: Like the Jeep exploit that they had. Wasn't that a few years back?
Berk Sunar: Right. And that wasn't ... I mean, that was a very simple and beautiful way to demonstrate the threat, right? The severity of the threat.
Lacey Trebaol: Everyone understood when that one was put out, even if they didn't actually understand.
Berk Sunar: Exactly. You don't necessarily have to understand the full technical details to appreciate the severity of the situation. But then think about robotic surgery, for instance. Right? And yes, you have serious reliability issues there; even a reliability failure can be bad. But if you have an intelligent adversary subverting the procedure and harming your patient, then that's basically murder, right?
Lacey Trebaol: Yeah. Not basically. Pretty sure it would be for sure. But yeah. That's scary. Absolutely.
Berk Sunar: Yes.
Lacey Trebaol: And especially when they're looking at things like telerobotic surgery, right? That automatically opens it up more to that type of breach, because the surgeon connecting in is not physically there; there's a communication link connecting them.
Berk Sunar: Absolutely. And you have these kinds of semi-autonomous settings quite a bit. And that's really what makes this whole robotics business valuable, right? Once you have autonomous robots, not requiring any supervision, doing jobs on your behalf, especially annoying, repetitive, or mission-critical ones. Sometimes you send robots into dangerous situations. That's why you need autonomous operation. But once you have autonomous operation, there is a risk of infection, of takeover, of the robot being reprogrammed, like drones being captured and-
Lacey Trebaol: I was gonna say, wasn't one of the drones actually told to go home, and home was an air strip or a... [inaudible]
Berk Sunar: Right. That's a simple GPS spoofing attempt. And these kinds of problems will occur more and more. And that's why we wanted to look into security issues while robotics systems are still in their infancy.
Lacey Trebaol: So that's it's not a bolt-on solution.
Berk Sunar: Exactly. Rather than bringing it in as an add-on solution at the end. Security is not something you can just install in a system. The system has to be built from the ground up with security in mind.
Lacey Trebaol: Right. That security needs to be the first class citizen type requirement, not the, "Oh, hey, by the way, guys. I know your system's built, but could you please now make this secure."
Vince Dilouffo: Not as simple as just attaching a KG-84 box anymore.
Lacey Trebaol: No. No, it's not. We're not in that age. Definitely not. So, thank you for that definition. That was actually very helpful. It ties in with the work you're doing with ROS, which RTI has a huge interest in seeing be successful. So-
Bill Michaelson: We want to thank you, Tom O'Conner and Dave Seltz, as two extremely supportive people of RTI working with us.
Lacey Trebaol: Yeah. I'm glad that they've been helpful.
Bill Michaelson: Yeah.
Lacey Trebaol: He's one of my favorite account people.
Bill Michaelson: So, I wanted to thank him.
Lacey Trebaol: So now, let's get into a little bit about ROS. ROS started at 1.0 when? When was that release?
Vince Dilouffo: So let's go back to the definition of ROS. ROS stands for Robot Operating System, and it was originally initiated around 2007. Basically, it's a distributed architecture using publish-subscribe messaging for data exchange. And this is a middleware concept or layer. Each node in the architecture can have a topic or a service. A topic is really a named piece of content, from a publish-subscribe point of view. And then there's the service, which is more of a client-server model, so it's a request and a response associated with the messaging. ROS was really designed for a few things associated with robotics: locomotion, manipulation, navigation, and recognition. I guess this goes back to some of the points Bill was making before about robotics systems. In a past survey, ROS had over 80 robots supported by it, but that's a past survey. And just recently, I guess at ROSCon, there were probably over 14 million lines of code contributed to the ROS community.
So back to the concept of ROS. It was really built around those four prime areas of robotics, and security wasn't considered up-front in the initial design. There was really no means of quality of service at the communication layer, and there were limitations with the scalability and performance of ROS 1. We'll get into ROS 2 a little later. As far as ROS growing and becoming a ubiquitous kind of standard, there are different areas now where ROS is taking off, specifically in these industries: military, industrial, agricultural, and automotive. And with ROS, there was also another initiative called Hardware ROS. That's really at the physical layer of connecting components, and then interacting right at the physical layer. So that's kind of a brief overview of what ROS is.
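The topic versus service distinction Vince describes can be sketched as a toy in-process broker in plain Python. This is purely illustrative (the `Broker` class and topic names are invented for this sketch, not the real ROS API): topics are one-to-many, fire-and-forget; services are one-to-one request/response.

```python
from collections import defaultdict

class Broker:
    """Toy in-process broker illustrating ROS-style topics and services."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic name -> list of callbacks
        self.services = {}                    # service name -> single handler

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Topics: one-to-many, fire-and-forget; every subscriber gets a copy.
        for cb in self.subscribers[topic]:
            cb(message)

    def advertise(self, service, handler):
        self.services[service] = handler

    def call(self, service, request):
        # Services: client-server; one request produces one response.
        return self.services[service](request)

broker = Broker()
received = []
broker.subscribe("/chatter", received.append)
broker.publish("/chatter", "hello")           # pub/sub: no reply expected
broker.advertise("/add", lambda req: req[0] + req[1])
total = broker.call("/add", (2, 3))           # service: request -> response
```

In real ROS, the publisher and subscriber would live in separate nodes and processes, with the middleware doing the routing; the one-to-many versus request/response semantics are the same.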
And let's go on to ROS 2. ROS 2 was introduced in 2015 with an alpha release. And this is really dependent on the Data Distribution Service (DDS) from the Object Management Group. This is different from ROS 1, because now it's dependent upon a standard for messaging. And on top of that, they're layering the DDS Security specification as an extension onto that-
Lacey Trebaol: So they're using DDS, which is what, data distribution service-
Vince Dilouffo: Data Distribution Service.
Lacey Trebaol: And it's included with ROS 2.
Vince Dilouffo: Correct. It's dependent upon it. So think of it as layering in the software architecture: you have the OS, the operating system; then you have the DDS layer on top of that; and then you have the ROS communication, or ROS modules, running on top of that DDS communication layer.
Lacey Trebaol: Great.
Vince Dilouffo: And then on top of that ... well, inclusive of that DDS layer, you have the security, which is not an afterthought tacked on; now it's being built right into the middleware layer. So the coupling of the real-time transport, the quality of service, and the security is being formulated for the robotics industry now. And this is kind of the area we're focused on: taking these software layers plus the physical layers and looking at the security aspects of those.
And so going back to ROS 2, there's a transition between the alpha releases of ROS 2, which supported four DDS vendors, and after beta 1, that got down-selected to RTI and eProsima as the two vendors going forward right now for implementations and for working in their demos and examples.
Lacey Trebaol: Great. Okay. So, which brings you into security. You had written, in something you sent over when we were emailing back and forth, that security has to be included in the initial design. And with ROS 2, you mentioned that security is actually being considered as one of those layers. It's not going to be the afterthought that it has been in past systems, the way it was being forced in. So what is the consequence of that for somebody using ROS? The fact that security is now something that is included, what does that mean for the average person who uses this?
Vince Dilouffo: So in the ROS 1 context, security was not included. As we discussed before with the examples we were giving, security vulnerabilities could, in essence, affect the outcome of robotics systems. Security is being layered in now to potentially protect against some of those risks and to provide risk mitigations. So going forward, security is now part of the solution, and we can potentially protect the data, and the safety and privacy of the data, being transferred between the distributed nodes on the network.
Lacey Trebaol: Okay. Great. So within a given robotics system.
Vince Dilouffo: Within a given robotics system. Yes.
Lacey Trebaol: Robots ... I mean, some of them, I think, it gets really interesting, because you're ... A lot of people talk about systems of systems, right, in the system engineering terms, which is where I come from. So it's all the different boxes from all the different vendors, right? They're all proprietary. You can't touch them. And then security happens at levels in the box and then outside the boxes. And in between these two, this data can be shared, but between those, you know, this stuff doesn't exist. And it's a very ugly version of multilayer security. But then you're talking about a robotic system, and it's, in a sense, it is a system of systems. There's many different operations and processes that are occurring within that entity, and you have to ... I'm sure there are going to be situations where you don't want data shared between certain areas, but in other ones it absolutely has to be, that's how the operations are carried out. How does ROS 2 help with that?
Vince Dilouffo: So this gets back into the layering of how security works with ROS. With the extension of the Data Distribution Service and the security standard, there's kind of a pluggable framework that goes with it. And the modules are authentication, access control, cryptography, data tagging, and logging. Authentication is really based on a PKI, a public key infrastructure. The access controls are really dealing with subject and object access controls. For the cryptographic library, you can plug in your own, or you can use OpenSSL, which gives you both symmetric and asymmetric algorithms and the other cryptographic primitives that are needed-
Lacey Trebaol: So for the people who don't know, what are the differences between symmetric and asymmetric algorithms?
Vince Dilouffo: Sure. With symmetric algorithms, you have one key that's generated, and both sides share that same key. With asymmetric algorithms, you have two keys being generated. One's public, and one's private. The public key is shareable, and the private key is always kept private.
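The symmetric/asymmetric distinction can be shown with two deliberately toy examples: a shared-key XOR cipher (not secure; real systems use something like AES) and textbook RSA with tiny primes (illustration only; real RSA uses keys thousands of bits long plus padding). Everything here is an illustrative sketch, not production cryptography.

```python
# Symmetric: ONE shared secret key both encrypts and decrypts.
# (Toy XOR cipher -- NOT secure; stands in for a real cipher like AES.)
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"
ciphertext = xor_cipher(b"hello robot", shared_key)
assert xor_cipher(ciphertext, shared_key) == b"hello robot"  # same key both ways

# Asymmetric: a key PAIR. Textbook RSA with tiny primes, purely to show
# that the public half encrypts and only the private half decrypts.
p, q = 61, 53
n = p * q                            # modulus, part of both keys
e = 17                               # public exponent: (n, e) is shareable
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: (n, d) stays secret

m = 42                               # message encoded as a number < n
c = pow(m, e, n)                     # anyone can encrypt with the public key
assert pow(c, d, n) == m             # only the private-key holder can decrypt
```

In practice the two are combined: asymmetric keys authenticate peers and establish a session key, and the fast symmetric cipher then protects the bulk data, which is also the pattern DDS Security follows.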
Lacey Trebaol: And so with Connext DDS secure, you have the choice to use either one.
Vince Dilouffo: Well, so with authentication, public key cryptography is always used. With the PKI, you have a certificate authority, the CA, and you have a registration authority. The registration authority authorizes a person to receive a certificate. The real job of the registration authority is to check your credentials, normally, to validate that this really is the person getting the certificate, with the certificate itself being signed by the CA. And public keys are then used for the communication between the different parties.
With ROS 2 and the DDS standard, there are actually two levels of control. One is the domain governance policy; that deals with the domain and how it is controlled. The other is a participant policy, and that covers how you can join the domain, read and write controls, and topic and data accesses. So those are two policies controlled at different levels. The governance and participant policies are signed by the CAs at their respective levels. Each participant has an asymmetric key pair associated with it. Topics can be published or subscribed to as part of the participant policies, and these are defined within the policy itself. Once the overall security policies are defined, each participant has its credentials, and the system can run in a secure manner using the plugins described before.
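The participant-policy idea above can be sketched as a simple permission check: each participant is granted specific publish/subscribe rights per topic, and everything else is denied. The node names, topics, and the `check` helper are invented for this sketch; the real DDS Security specification expresses this in signed XML governance and permissions documents, not Python dictionaries.

```python
# Toy sketch of DDS-Security-style access control: a per-participant policy
# lists which topics it may publish or subscribe to.
PERMISSIONS = {
    "camera_node":  {"publish": {"/image"},   "subscribe": set()},
    "planner_node": {"publish": {"/cmd_vel"}, "subscribe": {"/image"}},
}

def check(participant: str, action: str, topic: str) -> bool:
    policy = PERMISSIONS.get(participant)
    if policy is None:
        return False              # unknown participants are denied by default
    return topic in policy[action]

assert check("camera_node", "publish", "/image")
assert not check("camera_node", "subscribe", "/image")   # right never granted
assert not check("rogue_node", "publish", "/cmd_vel")    # unknown node denied
```

The point of signing these policies with the CA is that a compromised node cannot simply edit its own permissions table; the middleware only honors documents with a valid signature.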
Lacey Trebaol: Okay. Great.
Vince Dilouffo: So, some of the differences between ROS 1 and ROS 2: as we stated before, ROS 1 has been developed since 2007, so it's more of a stable, mature code base. But it's really driven by the design of a central broker model, shall I say, so you get scalability and performance limitations with that central model. It's based on TCP/IP. There's no security and only limited quality-of-service support. In ROS 2, we have the distributed architecture, and that's really defined by the Data Distribution Service, which is a mature tool set and communication layer. It supports the quality-of-service policies; history, depth, reliability, and durability are some of the policies associated with that. And of course there's the support for security. Another aspect is the real-time publish-subscribe (RTPS) wire protocol being supported within ROS 2.
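Of the QoS policies Vince lists, the history/depth pair is the easiest to picture: with a keep-last history of depth N, a cache retains only the N most recent samples and silently drops older ones. The sketch below is illustrative only (the `make_history` helper is invented here); real DDS QoS also covers reliability, durability, and more.

```python
from collections import deque

# Toy model of KEEP_LAST history with a given depth: a bounded cache that
# silently drops the oldest sample when a new one arrives.
def make_history(depth: int) -> deque:
    return deque(maxlen=depth)

cache = make_history(depth=3)
for sample in range(5):          # samples 0..4 arrive in order
    cache.append(sample)

assert list(cache) == [2, 3, 4]  # only the last 3 samples survive
```

A deeper history trades memory for the ability of a slow subscriber to catch up, which is exactly the kind of resource trade-off a real-time robot has to tune.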
Other things with ROS 2: there's a different build system that's gonna be incorporated. There's support for Python 3 and C++11 or above; they're still discussing those things right now. And support for different OS's. Primarily right now it's Linux-based, but they're looking at moving over to Windows and Mac systems. The development team is providing a set of examples and demos to help people get started, and there's also a migration guide from ROS 1 to ROS 2. That's maturing also.
Lacey Trebaol: That's pretty key content.
Vince Dilouffo: That's pretty key content. It's limited on documentation right now; they're just trying to get the basic underlying infrastructure components to work. It's not mature, but you can get your feet wet. There is a learning curve associated with it, and that's something people need to be mindful of.
Bill Michaelson: Not to mention the fact that all your Python 2 code is not gonna work well with Python 3.
Vince Dilouffo: There's gonna be a ... Yeah. There's gonna be a change there.
Lacey Trebaol: You're here at WPI and you're doing your research in the robotics department. What brought you here and what in robotics are you researching?
Vince Dilouffo: As a kid, I grew up working on cars, since my father was a mechanic, so mostly a hands-on approach. As an undergrad, I was exposed to robots. And my work assignments have primarily been high-assurance, security-related projects. I had a desire to obtain a Ph.D. in robotics with a focus on security. That's what led me to WPI and what brought me here, working with robots now. My research is focused on ROS 2 and DDS Security. I'm bringing those two together in a number of experiments that I've been dealing with. Primarily, we're looking at this because now is the time to investigate the integration of security into the ROS 2 environment.
And robots are vulnerable, high-value targets. I think that's what was discussed before. Security shouldn't be an add-on; it should be incorporated up-front into the life cycle. We're also looking at the concerns and methodologies related to the transition from ROS and basic robots to more of a high-assurance robotics system, and at adversarial machine learning and secure robotics systems. As we go up the food chain with robotic systems and autonomy, we get into aspects of machine learning. There are the trade-offs of what, why, how, and when security is applied to robotics architectures and applications. And there's the performance: when security gets applied, there are trade-offs with security and the trust models associated with that, all with respect to the ROS 2 environment. Does that kind of cover it?
Lacey Trebaol: It does kind-of cover that.
Vince Dilouffo: Kind-of.
Lacey Trebaol: So what kind of robots are you playing with?
Vince Dilouffo: Okay. I'm currently working with the ROS 2 beta 1 release, and the 5.2.6 release from RTI that includes the DDS Security component on top of their Professional tool set. Some of the experiments I'm working with: one is a simple talker-listener application, applying security to that simple model. Then there's a more complex mobile platform, incorporating a machine learning application on top of that layer, and trying to see what aspects of security can be applied to it and how they affect the environment. And again, going back to the performance analysis and the trade-offs with the security model there.
Lacey Trebaol: Can you talk a little bit more about the machine learning part? Like, what's the difference in securing something like that versus the first one, the simple listener and speaker kind of version?
Vince Dilouffo: Well, in the simpler experiment, the talker-listener, it's more of a software-layer communication, and the question is really how you protect the two entities, the publisher and subscriber, communicating over the network. In the more complex model, we have a mobile platform, so you have the navigation system and locomotion occurring on that platform. And then there's the aspect of the machine learning: it's trying to learn to recognize and track an object. We're using that model as a more sophisticated experimental platform, applying security to see if the machine learning could be compromised at that level.
Lacey Trebaol: If they can mess with someone's learning process.
Vince Dilouffo: Correct.
Bill Michaelson: You also have a lot more communication paths in the system and a lot more potential vulnerabilities. And also potential performance issues.
Lacey Trebaol: Right.
Bill Michaelson: Because maybe you don't have to protect everything. You have to figure out what needs protection.
Lacey Trebaol: Right. Not just the, secure all the things. We are not securing all the data.
Vince Dilouffo: Well, that's the trade-offs, right.
Lacey Trebaol: Intelligently applying a solution, versus ... Yeah.
Vince Dilouffo: Correct.
Lacey Trebaol: So what actually ... I don't know this, because I have not worked in robotics. But what ends up being the cost of doing the, "I'm going to apply a solution like this to everything in my system," if you're looking at a system with that many things happening in it? Like, why do you need to pick and choose what you apply it to? Why wouldn't you secure everything?
Vince Dilouffo: Well, people have taken that all-or-nothing approach, and there are security performance issues with those notions. The other question is, why waste the resources if you don't need them? Others have taken the opposite approach and said, protect it all. And quite frankly, they don't know what they're protecting if they don't analyze it from that perspective. Going back to the different layers we talked about before, even if you incorporate those, there could still be side channels, and there could still be compromises, even in an all-secure environment, per se. It goes back to applying security, shall we say, in the right places and using the correct protocols and cryptographic libraries at the right time to protect the system. So that's something we need to look at.
Lacey Trebaol: It's not a simple thing to apply then.
Vince Dilouffo: Correct.
Lacey Trebaol: So it's not as easy as, download a security product, install security product.
Bill Michaelson: Yeah. Well, there are some trade-offs to that, and they're forming the basis of Vince's Ph.D. thesis, because, as far as we can tell, we're among the first to explore these trade-offs and try to quantify what they are. And there's an interesting interplay. Berk earlier talked about how security is not reliability. But when you get into a mobile or robot environment, an uncertain environment, there are aspects of security in that you may have malicious events that occur in your environment. How do you differentiate those from fault-tolerance or quality-of-service events? And, you know, maybe some malicious activity is okay, provided it doesn't compromise specific things in your system. So it turns out to be a really, really complicated problem.
Lacey Trebaol: It sounds really interesting. It's so new to think about it and talk about it. And the idea of quantifying it, right, just no one ... I have not seen any quantified data on this, by the way, and I haven't read much about anyone doing anything looking at that. So that's gotta be kind-of exciting, honestly. If you're gonna be doing a Ph.D thesis on something, way to pick a good topic.
Berk Sunar: And to add a little bit to that. Think about security being implemented, and this is a real-time system, right, that is performing actions and moving around, interacting with people, maybe helping somebody walk. It's a scalable kind of setup. In these real-time systems, it's very difficult to introduce security everywhere, because it's going to slow down the operation, right? I mean, if you encrypt and integrity-protect every piece of communication, then it may just not be a workable scenario. Maybe the delay is just too much. These systems are highly sensitive to lag and delay, so we have to be careful. That's one of the trade-offs Bill was mentioning earlier. So in that regard, this is a very interesting topic.
And the second aspect that makes security and robotic systems so different is, robotic systems are built around numerous sensing equipment, right? So they are open and just waiting for information to be fed in. And then some of these inputs can be also spoofed, like the GPS example we have seen before. But that's just the beginning. That's just scratching the surface. You can mess with temperature controls. You can mess with navigation. You can mess with the communication line itself, too. Just deny communication by jamming signals, for example.
Lacey Trebaol: Denial of service attacks.
Berk Sunar: For example, that's the simplest you can do-
Lacey Trebaol: The one that everyone might have encountered at some point.
Berk Sunar: Right. But like Vince pointed out earlier, what would in my opinion be even more destructive is where you tailor these readings, this spoofed information you're feeding to the robotic system, to mess with its cognitive layer, let's say with its brain, right? With its machine learning algorithm, its constantly learning algorithm, for example. Then you end up in a completely different land for security. It's not your usual, you know, attack, compromise, leak data, exfiltrate data kind of setting. It's more like you're messing with the brain of the machine, the control of the machine.
Lacey Trebaol: You're messing with how it perceives the world.
Berk Sunar: How it perceives, how it decodes, and how it decides, right? How it makes decisions. So, that's the second set of unique characteristics, let's say, that comes in besides the performance issue I brought up earlier. These are two things that are kind of new in the robotics setting for security to study. That's why we're excited about Vince's Ph.D.
Lacey Trebaol: That's actually a neat point. I hadn't thought of that, but-
Berk Sunar: And it's waiting for input constantly. Normally when you're attacking a system, you have to find a way in first, right? And to corrupt a-
Lacey Trebaol: The door's always open.
Berk Sunar: Here it's the opposite. It's just waiting for input, and why not feed it fake input?
Lacey Trebaol: Manipulate it just to see.
Berk Sunar: Manipulate. And you can very carefully craft the input so that it can cause serious damage in its learning algorithm.
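The crafted-input attack Berk describes can be sketched with a deliberately tiny learner: an online running mean estimating a sensor's normal reading. An adversary who can feed inputs drags the estimate, and any decision built on it, wherever they like. The `RunningMean` class and the numbers are invented for this illustration; real attacks target far richer models.

```python
# Toy data-poisoning sketch: an online learner tracking a sensor's running
# mean. Crafted adversarial inputs skew the learned estimate.
class RunningMean:
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, x: float) -> float:
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental mean update
        return self.mean

honest = RunningMean()
for reading in [20.0, 21.0, 19.0, 20.0]:        # benign temperature readings
    honest.update(reading)

poisoned = RunningMean()
for reading in [20.0, 21.0, 19.0, 20.0]:        # same benign start...
    poisoned.update(reading)
for reading in [90.0] * 12:                     # ...then crafted inputs flood in
    poisoned.update(reading)

assert abs(honest.mean - 20.0) < 1.0            # honest estimate stays near 20
assert poisoned.mean > 70.0                     # poisoned estimate dragged away
```

No break-in is needed here: as Berk says, the learner is waiting for input, so the attacker's only job is to supply it.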
Lacey Trebaol: Yay. Autonomous cars, everyone. It's gonna be exciting. Wow. Not thought of that. So what type of things do you think you'll be researching then? I mean, where do you start? This is kind-of a huge thing to be tackling.
Vince Dilouffo: I think we brought up some of the topics and research areas again. I think we've all alluded to this being a new field, a green space, in essence. There's a considerable learning curve that has to occur here, and looking at the exploits associated with robotics and machine learning is a whole new area, I think, that needs a lot of focus. And that's primarily what we're looking at right now going forward.
Lacey Trebaol: Sounds fun.
Bill Michaelson: As I characterize it, it's a hobby gone terribly wrong.
Lacey Trebaol: Or terribly right. I'm not seeing the wrong here yet. I don't know. Isn't that when they go right, though? You hit on something new, forces you to ask new questions and try and see things differently. That's kind-of the fun of it. It's like where science and engineering meet, right? You get to play the scientist and then play the engineer.
Vince Dilouffo: The evil scientist.
Lacey Trebaol: Yes. But in this case it's actually good to be the evil one, 'cause that's what you're trying to measure and quantify. It's when the bad things happen. It's like, you know, if you want to know how fast something can be destroyed, give it to a toddler.
Bill Michaelson: So we're therefore gray hat programmers.
Lacey Trebaol: Thanks for listening to this episode of the Connext Podcast. We hope you enjoyed it. If you did, please be sure to share it. And if you have any questions, reach out to us at email@example.com. Thanks, and have a great day.