Welcome to the Design to Product Podcast!
In this interview episode, we talked to Nicholas Nadeau.
Nicholas Nadeau is the CTO of Halodi Robotics, a company that builds humanoid robots for applications such as security and healthcare. We talked about the different considerations they had to take into account, and how they applied them while designing robots that operate among humans and interact with them.
Today we talked about:
- Halodi Robotics
- How robots help the security world
- Challenges in making a humanoid robot and ways to solve them
- Tips for dealing with the different stages of getting a product to production
Adar: Our guest today is Nicholas Nadeau. Nicholas is the CTO of Halodi Robotics. Hello, Nicholas.
Nicholas: Hello, Adar. Thank you very much for having me. Great being here.
Adar: Great to have you. So maybe we can start by telling us a little bit more about Halodi Robotics.
Nicholas: So, Halodi Robotics is a humanoid robotics company. It’s been around five, six years or so now, primarily focusing on the R&D market at its birth. Since then, we’ve expanded internationally. We’re currently in four countries, five offices with 50 plus employees deploying robots to the security guarding space in North America with the dream of home and healthcare in the next decade being those helper robots of the future that science fiction promised us way back when.
Adar: That’s crazy. So maybe we can start with the existing use cases. How do the robots actually help the security world? Like, how do they patrol, I would say?
Nicholas: So, given the 4 Ds of robotics, dull, dangerous, dirty, and dear, the 02:00 a.m. night patrol shift in an office building for sure falls under the dull category. It’s repetitive. There’s not much interaction. I think I’ve heard statistics that around 95% to 98% of security events are just false alarms. Maybe the door handle just needs to be jiggled. And so, with this in mind, how can we bring robotics into the loop and do a little human-robot collaboration? Can the robot take care of that dull patrol, that repetitive back and forth along the same dark hallway over and over again? Can it go through doorways where other systems would be blocked by the door? And this is why we have the humanoid robot form factor: we’re able to interact with the human world as humans do, instead of having to change the office building to integrate robots. This also lets us use the onboard computer vision for navigation. We do end-to-end learned navigation, very much behavior cloning and imitation learning, for navigation, with some basic SLAM techniques for localization. And then, using our two arms for manipulation, we’re able to open doors and interact with objects in the world around us. One of our claims to fame is our VR avatar mode. Using the Oculus Quest 2, we’re able to teleport into a robot and do embodied robotics: you take over the robot’s controls, the robot’s vision becomes the operator’s vision, you hear what the robot hears. You can interact with the world through the robot, but remotely, through teleoperation. And so this lets us and our security partners in North America work in a bit of an operations-center-style approach. You can have the humans at a guard post in an operations center, remote from where the robot is. And now you can have robots on different floors, in different buildings, in different locations around the world. And as security events pop up, the robot can autonomously navigate to the security event.
If the machine learning models and the automation have caught up enough, we can resolve the event autonomously. And if not, you can always fall back onto the VR mode, where an operator can don the VR headset, teleport into the robot, resolve the event remotely, disengage, and the robot continues autonomously. And so you save that time between a human receiving an event somewhere and having to run over to that floor, that building; there’s a lot of lost value in that process.
Adar: Yeah. So, most of the time it works without an operator. And just when you need it, the operator gets a notification and logs in, basically.
Nicholas: Yeah, exactly. So it’s a bit of the 80/20 rule in a lot of ways. There is a 20% that’s still unsolved: those deformable objects, those unstructured tasks. A great example is a backpack wedged into a doorway. It’s unstructured, it’s deformable. We haven’t necessarily solved that fully yet. So you could put on the VR goggles, resolve the event remotely, close the door, and the robot continues autonomously. But this also lets us collect training data for future imitation learning, and yeah, it just keeps on going.
Adar: It’s fascinating. And you talked about your future plans as well. You want your robot to be able to take care of people. That’s a whole other challenge you have to take into consideration, like user experience at a very high level. You will have to understand human psychology. I know this company, I think it was Intuition Robotics, maybe, which wanted to do the same thing. How are you planning to tackle these challenges?
Nicholas: Well, it’s a step-by-step, piecemeal approach for sure. So if we take security guarding as a stepping stone toward that, we’re already in human environments, we’re already interacting in environments where humans are navigating. If we think of it from a multi-agent approach, we have other agents that we’re interacting with. So already, from a physical human-robot interaction point of view and a social human-robot interaction point of view, we have the stepping stones to build off from there. So it might start with simply being part of the existing nursing home or hospital ecosystem and checking to make sure everyone’s in their bed at night. With things like computer vision, it’s honestly pretty trivial to detect if a person is in their room, where they are, and where the robot is currently localized in that room. And from there, we can build on top of that. We can eventually do things like helping people navigate from room to room; we have a couple of demo videos of basically our robot giving an elderly person its hand to help walk them along the hallway from room to room. And that’s just a mixture of basic physical human-robot interaction and navigation, and it’s very much a platform approach. Can we, with time, software updates, and hardware updates, build and improve upon the existing platform piece by piece? Definitely, from the social psychology point of view, I think there’s a lot of work being done in our world, in society, bringing us toward a robot revolution where we will get more and more used to having robots around us. Very much like Star Wars, where you see that world of droids and humans working together. It’s just a matter of time.
Adar: Got it. So, you’re relying on market changes as well. And probably you will have to do a lot of internal work to develop the intuition for the robot to be able to take care of people. What about safety? A robot could make a wrong gesture near a frail person, and that could be harmful. How do you take care of that?
Nicholas: So I’ll jump on two things. You mentioned market changes; the labor shortage is definitely helping accelerate a lot of those changes. With fewer people in the workforce willing to do these dull, dangerous, dirty, and dear jobs, we’re finding that robots are really filling the gaps from the market dynamics, and that’s accelerating our acceptance of robots as a whole. And then we jump over to the safety side. You can look at your intrinsic safety and your extrinsic safety. For extrinsic safety, we’re looking at things like risk assessments. Is our application designed to be safe? We’re not programming the robot to go at full speed straight down the hallway and then hit the e-stop and stop. We’re making sure that what we design for the application, in and of itself, is safe. But the intrinsic point of view, that’s actually a bit of our claim to fame. Our robot at Halodi has the highest torque-to-weight-ratio direct drive motors on the market. They’re actually custom designed in house at Halodi; they’re our Revo motor line. And so this is a bit of our secret sauce: we don’t have major gear ratios. Most of our gear ratios are near one to one, which lets us have everything backdrivable. I remember back when I was doing my PhD in medical robotics, I was using a KUKA LBR iiwa robot to do medical ultrasound. And with the harmonic drives, you have a very high gear ratio, where if you ever get into a clamping scenario with a subject pinned to a table, that would be a very bad situation, because it’d be very, very difficult to backdrive the robot to escape from that situation. Being that our robot is direct drive, being that all the joints are backdrivable and compliant, and that we use a rope system internally for the drivetrain, we’re able to make it passively safe. And using things like passive dynamics, low reflected inertia, and our motor setup and design, we’re able to make the application inherently safe in a lot of ways.
Even if you put max torque on the motors and swing the arm, you’re not going to injure somebody because passively, the robot’s arm will resist that, and you’ll be able to push the robot backwards.
Adar: Got it. Are there any other challenges? So probably you do put a lot of emphasis on safety. Any other challenges specific to humanoid robots that should interact with humans and should be very gentle?
Nicholas: I think the biggest challenge isn’t so much on the gentle side as a whole. With the humanoid form factor, humans have a much better innate understanding of how to interact with the system than when working with traditional cobots and things like that. We don’t have an innate understanding of how the motion happens with a seven-axis industrial robot. Even if it’s made safe, even if it’s extrinsically safe, even if it’s intrinsically safe, we don’t have an understanding of how the motions occur. It’s very nonlinear to us, seeing an industrial robot move around in our space. A humanoid form factor, however, really helps bridge that gap. We have a better understanding of how the robot will move through our space and interact with our space. Being that our robot is about six feet, 185 cm tall, so standard human height, we’re able to interact with the human world as humans do. We’re able to pick up boxes like a human would, in a squatting pose. We’re able to push buttons for elevators or open doors as a human would. And so this intuition, this almost nonverbal communication, helps a lot with safety.
Adar: Got it. So, because humans treat the robot just like they would treat their friends or other humans, it makes it easier. Just because of the way it looks and the way it behaves, they know how to act with it.
Nicholas: Yeah, exactly. The best example I can think of off the top of my head is my robot vacuum at home, bouncing around the house. I always have to dodge and dance around it, because I never know where it’s going next or how it’s going to behave. This puck on wheels on the floor does not interact with the environment as a humanoid would.
Adar: Yeah, but humans know how humans behave. They have a lot of experience with that. You have a lot of experience in getting products to manufacturing. You work with startups. Do you have any tips for dealing with the different stages of getting a product to production?
Nicholas: Yeah. So the biggest tip is definitely to be very conscious about defining your volumes and your quantities with respect to your MVPs in your prototyping stage, early on in the process, and understanding that there will be a transfer-to-production step. A lot of young companies, startups, forget about that transfer-to-production step. Maybe they put together a quick alpha prototype, clean it up a bit, call it a release candidate, and think they can just scale it up to production, not thinking about the implications of supply chain, procurement, and design for manufacturing. Design for maintenance is a big one. Just because you can make it in house and assemble it easily in house, how are you going to maintain it out in the field? And this is a big thing for hardware. Software is easy: you hit F9, recompile, and maybe you can send another update out. Hardware is very expensive to maintain, and you need to understand what your deployment infrastructure looks like for the hardware, what your deployment target and site will look like, and what kind of services are accessible. If you’re working at an aerospace facility, maybe you won’t be allowed cellphone access there. How will your service support team be able to call back home and debug things? If you’re doing work in Southeast Asia, high humidity and high temperatures are pretty nasty for your electronics and PCB components. Are you IP rated? Did you do conformal coating on your PCBs? These are things that you might not think about when you’re prototyping in house, but that will be very expensive if you have to fly back and forth from North America to Singapore to repair and replace components. What else is there? One fun thing to always think about is your design review process: making sure that all your stakeholders are involved in the process.
Did you call on procurement, supply chain, your technicians, and your assembly team? Those are some of the most important people to bring on board early in your design process, after maybe you’ve got the jitters out and got the first idea working, but definitely before you go into full-scale transfer to production. My usual rule of thumb, I call it the rule of pi: it takes about 3.1 iterations to go from product idea to something that’s scaling up in production. Your first one is your alpha; it’s your proof of concept. Does it work? Your second one is your beta; that’s maybe the one you might deploy to a pilot site or two, just to check. Your third one would be your release candidate. At that point, maybe you start certification with it, and that’s something a lot of people forget about. Certification is very expensive, very time consuming, and you’ll need lots of documentation for it. And your 0.1 iteration at the end is when marketing says, hey, can we change it to a different color blue? Or can we move this screw hole just half a millimeter to make it easier? And if you keep that in mind, you already have a bit of a roadmap of how many iterations and design cycles you might need to go through before you go from idea to actually ramping up production.
Adar: Yeah, so many things to think about. And I think that, especially in the early stage, there are a lot of theories that say: just get something out there, just validate your product. But then you also have to think about all of this stuff, and maybe people who are listening are saying, hey, I can’t think about everything. I should just get something out there. Maybe I’m building something that no one wants. I should just put it in the hands of people first instead of thinking about everything first. So when is the right time to think about those things? Or maybe you should think about everything at the beginning, because it’s hardware and you can’t skip that step?
Nicholas: Well, there’s the old saying, what is it? “Measure twice, cut once.” Definitely think ahead as best you can. Deploying hardware is expensive. You don’t want to spend too much time and money on a wasted product or a wasted idea. That being said, getting into the hands of customers as quickly as possible is key, especially for the first couple of iterations of your alpha and beta prototypes. Get them into the hands of customers to validate; that’s what pilots are for. But be conscious that you can’t go directly from pilots and prototypes to sellable commercial units. That’s not a one-for-one transition. At that point you have to switch hats, basically, and switch gears, going from an R&D mindset to a production mindset. And honestly, sometimes it’s almost two different companies. That’s why you have those contract manufacturing services out there in the world, the Sanminas, the Jabils, and so on. You can outsource a lot of that manufacturing, too. That’s a valid strategy.
Adar: But would you want to do that?
Nicholas: It depends on what your value proposition is. If your internal value as a company is to be a manufacturer, then that’s fine. If you need to own the end-to-end in order to guarantee the value of your product, that’s fine. But be prepared to accept that with that comes scaling a production company, which is very different from scaling an engineering company.
Adar: Yeah. And it’s a huge, complex, iterative thing that requires taking a lot into account: things you might not even think about, things that might affect your process and change your product.
Nicholas: Yeah, exactly. And honestly, some of the best feedback you can get is from the production teams out there. That’s why, at the companies I’ve worked with, we often start with small-volume production locally, on site, with our own team. And only when we scale to the next order of magnitude do we start outsourcing. That way we own what’s called a single production line, if you will, and then we can copy and paste it to remote sites or contract manufacturers after we’ve debugged the production process itself, after we’ve had that closed-loop understanding between the production team and engineering to get those quick feedback cycles. That being said, if you treat your production team, even internally, as a bit of a contract manufacturer, where engineering shouldn’t have to walk down to the production floor every single day and not every problem has to become an engineering problem, then you’ll be much better set up for success in the future. Your documentation will be cleaner, your drawings will be better, your revision system and your ERPs and MRPs will be better set up, and you’ll be ready for that next stage.
Adar: Fascinating. Nicholas Nadeau, the CTO of Halodi Robotics, thank you so much for being with us today.
Nicholas: I really appreciate it. Thank you very much, Adar.
Adar: Thank you.