Sensing in Robotics

Can we equip robots with sensors such that they can interact with their environment like we humans do with our biological sensors?

Before we even talk about how we build algorithms that fuse and interpret incoming sensor data while using it to inform the movement decisions of a robotic arm, we have to talk about how we get that sensor data in the first place.

One might assume that a single video camera pointed at the robot is all we need for it to avoid collisions. But this approach breaks down as soon as something occludes the camera's view. Furthermore, the camera has to be calibrated so that we know exact distances from it. If the camera has no depth sensing, recovering those distances requires calibrating against the known size of the robot arm, and even then the estimates carry some variance. That's not good if the robot is supposed to interact with other objects.
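To see why that calibration matters, here is a minimal sketch of depth estimation with a single camera and the pinhole model, where distance is inferred from the assumed physical size of an arm segment. The function name and all numbers are illustrative, not values from my setup; the point is that any error in the assumed size propagates directly into the distance estimate.

```python
# Minimal sketch (illustrative values): depth from a single calibrated camera
# using the pinhole model, Z = f * real_size / pixel_size.

def depth_from_known_size(focal_length_px: float,
                          real_size_m: float,
                          observed_size_px: float) -> float:
    """Estimate distance to an object of known physical size."""
    return focal_length_px * real_size_m / observed_size_px

# Example: a link we believe is 0.30 m long appears 150 px long in the image
# of a camera with a 900 px focal length.
z_nominal = depth_from_known_size(900.0, 0.30, 150.0)        # 1.80 m
# If the assumed size is off by 5%, the depth estimate is off by the same 5%.
z_biased = depth_from_known_size(900.0, 0.30 * 1.05, 150.0)  # 1.89 m
print(z_nominal, z_biased)
```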

Okay, let's use multiple cameras! Now we place cameras around the robot arm from many angles. We still need to calibrate the cameras to the arm and to each other, and they must stay fixed or be re-calibrated. This does solve the problem of calculating depth, but what about occlusions? Well, if enough cameras completely surround the arm, then perhaps even during an interaction with another object there will be enough views to see what is happening and compute the distance from that object to the arm... but then again, maybe there won't be. Furthermore, more cameras mean far more data to process, which slows down the responsiveness of the system.
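This is roughly what "calibrated to each other" buys you: once each camera's projection matrix is known, a point seen in two views can be triangulated. The sketch below uses standard linear (DLT) triangulation; the projection matrices and pixel coordinates in the example are made up for illustration.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.

    P1, P2 : 3x4 projection matrices (from intrinsic/extrinsic calibration).
    x1, x2 : 2D image coordinates of the same point in each view.
    Returns the 3D point in the common calibration frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Example with normalized (K = I) cameras: one at the origin, one 0.5 m to the right.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
point = triangulate(P1, P2, np.array([0.5, 0.25]), np.array([0.25, 0.25]))
print(point)  # approximately [1.0, 0.5, 2.0]
```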

Instead of taking an external view of the problem, what happens if we adopt a more natural, self-focused view? Just like other animals, let's put the sensors on the object that is moving. That way nothing can come between the object and the sensor. Furthermore, with this approach we don't need to modify anything if we want to move the moving object, in this case a robotic arm, to a new environment. The arm and its sensors are self-contained. This was the approach I took in building a sensor network that lets a Franka Emika Panda robotic arm process data from its surroundings, with the goal of avoiding collisions with other objects.
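One way to see why this stays self-contained: a reading from a link-mounted sensor only needs the arm's own joint state (via forward kinematics) and the sensor's mounting pose to be expressed in the robot's base frame, with no external calibration to the room. The transforms below are placeholders standing in for the forward-kinematics result, not values from the Panda.

```python
import numpy as np

def transform_point(T: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]

# Illustrative transforms (in practice these come from the arm's forward
# kinematics and the known mounting pose of the sensor on the link):
T_base_link = np.array([[0., -1., 0., 0.3],    # link pose in the base frame
                        [1.,  0., 0., 0.0],
                        [0.,  0., 1., 0.6],
                        [0.,  0., 0., 1.0]])
T_link_sensor = np.array([[1., 0., 0., 0.05],  # sensor mounting pose on the link
                          [0., 1., 0., 0.00],
                          [0., 0., 1., 0.02],
                          [0., 0., 0., 1.0]])

# A detection 0.4 m straight ahead of the sensor, expressed in the sensor frame.
p_sensor = np.array([0.0, 0.0, 0.4])
p_base = transform_point(T_base_link @ T_link_sensor, p_sensor)
print(p_base)  # obstacle position in the base frame, wherever the arm is placed
```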

The algorithms for processing this sort of data are an active area of research, and one I'm still pursuing. In the end I built two different sensor setups. The first used 18 cameras split into 6 hubs; the orientation of the 3 cameras in each hub gave that hub an almost full 180 degrees of merged view. The hubs were mounted on either side of the 3 main joints of the robotic arm and ran at roughly 10 Hz. The second setup used time-of-flight (ToF) sensors to cover more positions on the arm, at the cost of considerably less information about third-party objects. This made it easier to fuse the data and to compute the corrective torque to apply at each joint of the arm to avoid a collision.
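As a rough illustration of the kind of fusion the ToF setup simplifies, the sketch below maps each sensor's distance reading to a repulsive Cartesian force at its mounting point and projects it onto the joints with the Jacobian transpose, a standard potential-field-style scheme. The threshold, gain, and Jacobians are placeholders, not the values or the exact method used on the Panda.

```python
import numpy as np

def repulsive_joint_torque(distances_m, directions, jacobians,
                           d_threshold=0.25, gain=8.0):
    """Sketch of a repulsive torque from arm-mounted distance sensors.

    distances_m : distance reading of each sensor (metres).
    directions  : unit vector from each sensor toward the obstacle (base frame).
    jacobians   : 3xN positional Jacobian at each sensor's mounting point.
    Returns an N-vector of joint torques pushing the arm away from obstacles.
    """
    n_joints = jacobians[0].shape[1]
    tau = np.zeros(n_joints)
    for d, u, J in zip(distances_m, directions, jacobians):
        if d < d_threshold:
            # Force grows as the obstacle gets closer and points away from it.
            force = -gain * (d_threshold - d) * np.asarray(u)
            tau += J.T @ force   # map the Cartesian force to joint torques
    return tau
```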