Unit 4 - Food for thought (Chapters 7, 8 & 9)

Questions from Chapters 7, 8 & 9 of:

Matarić, M. J. (2007). The Robotics Primer. Cambridge, Massachusetts: The MIT Press.

What’s Going On? (Chapter 7; Food for thought)

Q1 - Uncertainty is not much of a problem in computer simulations; can you figure out why?

Uncertainty is not much of a problem in computer simulations because the dynamic forces and effects we know from the physical world (such as noise, hidden information, temperature, and pressure) are not present in a simulated environment. A simulated robot has no physical sensors, so there are no negative effects from noise, and no errors caused by switches or sensors malfunctioning or picking up undesirable signals. All information in the simulated environment can be made observable to the robot. The only limitations in a simulated environment are those set by its creator or programmer, whereas in the physical world there is no way to completely remove these challenges; we can only try to reduce their effects in order to overcome them.

Q2 - Some robotics engineers have argued that sensors are the main limiting factor in robotic intelligence. Is this all that is missing?

There may be some truth to that, but I feel there are other, greater limiting factors in robotics. One of these would have to be power, since we still rely on large, bulky batteries with very limited capacity to power mobile robots. Another limiting factor is intelligence itself. Even with the best sensors, impervious to noise and error, you would still have to design a complex program to use them properly. Maybe someday minimal programming will be enough, and a machine equipped with artificial intelligence will learn how to use its own sensors.

Q3 - What do you think it will take to get robots to be self-aware and highly intelligent?

This is a very complex question, since no one even knows how or why humans (and some animals) are conscious. I think that with the evolution of AI, for better or worse, it will one day be possible for robots to become conscious. How did humans even get to this point? We apparently evolved from less intelligent species, who likely had only needs, survival instincts, and impulses. Fight or flight, hunger, thirst, sleep, and reproduction were all the abilities required to keep on living. At some point people began to use their minds in other ways, creating tools to make their lives easier and building shelter and clothing for better protection. We must have used our senses to observe the physical world around us and learn how to mimic those abilities. Other animals today make shelters, have been recorded using tools (sticks, rocks, etc.), and can mimic human actions or obey commands. AI programs have already bested some of the smartest humans at certain games. I feel that a robot equipped with sensors and artificial superintelligence could become self-aware in the blink of an eye relative to human evolution. The question after that would be: what happens next?

Switch on the Light? (Chapter 8; Food for thought)

Q1 - Why might you prefer a passive to an active sensor?

Passive sensors are usually simpler and require less power, since active sensors need both an emitter and a detector to do their job. Passive sensors may not require calibration at all, or at least not as often as active sensors, which generally need calibration to operate effectively. Passive sensors such as switches are not susceptible to noise at all (though they can still be prone to errors from device malfunctions or switch bounce).

Q2 - Are potentiometers active or passive sensors?

Potentiometers are definitely passive sensors. They require something (i.e., a hand or an obstacle) to turn or slide them in order for the internal wiper to move and the resistance to change. That's no different from a switch being pushed open or closed in the same manner, and a switch is a passive device. The only real difference is that a potentiometer has increased resolution; it's not just on or off. A potentiometer can also be used with an emitter, to change the power or voltage being applied to it. This would be a way of tuning that emitter to change the strength of the signal (light, sound, etc.) received back at the detector.
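
As a minimal sketch of that "increased resolution," here is how a potentiometer reading might be turned into a shaft angle in Python (the 10-bit ADC and the 270-degree travel are my own assumptions, not values from the book):

```python
# Hypothetical example: converting a raw ADC reading from a rotary
# potentiometer into a shaft angle. Assumes a 10-bit ADC (0-1023)
# and a pot with 270 degrees of mechanical travel.

ADC_MAX = 1023          # full-scale count of the assumed 10-bit ADC
POT_RANGE_DEG = 270.0   # assumed mechanical travel of the pot

def adc_to_angle(raw: int) -> float:
    """Map a raw ADC count to a shaft angle in degrees."""
    raw = max(0, min(raw, ADC_MAX))  # clamp out-of-range readings
    return (raw / ADC_MAX) * POT_RANGE_DEG

print(adc_to_angle(512))  # roughly mid-travel, ~135 degrees
```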


Q3 - Our stomachs have stretch receptors. What robot sensor would you say is most similar to such stretch receptors? Are they similar in form (the mechanism of how they detect) or function (what they detect)? Why might stretch receptors be useful in robotics?

The robot sensor most similar to our stomachs' stretch receptors is the resistive position sensor, although limit switches or potentiometers (with a spring to return them to their starting position) could probably do the same job. When resistive position sensors are bent, their resistance increases, similar to the pressure that would be applied to your stomach walls as more food and drink entered. The pressure would increase and your stomach would expand (like a balloon being blown up); this would stretch the muscles in your abdomen, and your body would tell your brain that you are getting full. Analogously, if we were using resistive position sensors for the stomach lining, the more the lining was pushed outward (due to food and drink entering the stomach), the higher the resistance would be. These sensors are similar in how they detect, since both are responding to stretching, but the resistance takes different forms: electrical resistance for the resistive position sensor, and physical resistance (a.k.a. tension) for the stomach lining and muscles.
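
As a rough sketch of how such a sensor might actually be read (the voltage-divider arrangement and all component values here are my own assumptions):

```python
# Hypothetical example: estimating a flex (resistive position) sensor's
# resistance from a voltage-divider reading. Component values are made up.

V_SUPPLY = 5.0       # assumed supply voltage (volts)
R_FIXED = 10_000.0   # assumed fixed resistor in the divider (ohms)

def flex_resistance(v_measured: float) -> float:
    """Solve the voltage divider for the sensor's resistance.

    Divider: V_SUPPLY -> flex sensor -> measurement node -> R_FIXED -> ground,
    so v_measured = V_SUPPLY * R_FIXED / (R_FIXED + R_flex).
    """
    return R_FIXED * (V_SUPPLY - v_measured) / v_measured

# The more the sensor is bent (the "fuller" the stomach), the higher
# the resistance and the lower the measured voltage.
print(flex_resistance(2.5))   # ~10 kOhm: unbent, matches the fixed resistor
print(flex_resistance(1.25))  # ~30 kOhm: bent further
```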

These stretch receptors would have many uses in robotics. One use could be detecting how much liquid or bulk material is in a bin: the weight of the material would create tension, and the robot could detect this and determine whether it needed to add more material or already had enough. Another useful function would be detecting how much pressure is being applied by a gripper, say for grabbing apples; the robot could detect that enough pressure was applied without crushing the apple. One other potential use would be similar to how our own muscles work: a stretch receptor could be used to determine how far an arm has moved, much like our bicep or tricep muscles. It would likely not be as accurate as a servo motor, but it might make computation easier.


Sonars, Lasers, and Cameras (Chapter 9; Food for thought)

Q1 - What is the speed of sound in metric units?

Sound travels at 1.12 feet per millisecond (at room temperature). There are 0.3048 meters in 1 foot (according to the first result in a Google search). That means that, measuring sound in meters per millisecond, the speed would be 1.12 × 0.3048 ≈ 0.3414 meters/ms, which equals about 341.4 meters per second.
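
Just to sanity-check that arithmetic in Python (the 1.12 ft/ms figure is the book's room-temperature value; the rest is plain unit conversion):

```python
# Convert the room-temperature speed of sound to metric units.
FT_PER_MS = 1.12          # speed of sound in feet per millisecond
METERS_PER_FOOT = 0.3048  # exact definition of the foot

m_per_ms = FT_PER_MS * METERS_PER_FOOT
print(f"{m_per_ms:.4f} m/ms = {m_per_ms * 1000:.1f} m/s")  # 0.3414 m/ms = 341.4 m/s
```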


Q2 - How much greater is the speed of light than the speed of sound? What does this tell you about sensors that use one or the other?

The speed of light is 299,792,458 meters per second. That is roughly 878,000 times faster than the speed of sound! Sonar (ultrasound) is typically measured using the time-of-flight method: a transducer sends out a ping (a quick tone or pulse), and if the signal is received back, the processor multiplies the entire time from when the pulse was transmitted to when it was received by the speed of sound, then divides that value by 2, since the sound wave travelled double the distance from the transmitter to whatever it reflected off of. The value the ultrasound device uses may differ from my previous answer of 341.4 m/s, since many factors affect the speed of a sound wave. Sonar also uses quite a bit of power, so that would have to be factored into the robot design.
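
A minimal sketch of that time-of-flight calculation (assuming a fixed speed of sound, which a real device would compensate for):

```python
# Hypothetical time-of-flight range calculation for an ultrasonic sensor.
SPEED_OF_SOUND = 341.4  # m/s at room temperature; varies with conditions

def sonar_distance(round_trip_s: float) -> float:
    """Distance to the reflecting surface, in meters.

    The ping travels out and back, so the one-way distance is half of
    speed * round-trip time.
    """
    return SPEED_OF_SOUND * round_trip_s / 2

print(sonar_distance(0.01))  # a 10 ms echo puts the obstacle ~1.71 m away
```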

The speed of light is so great, compared to the speed of sound, that we have to use methods other than time-of-flight to process the received signal. This is apparently because electronics are not yet fast enough to time the returning light signal directly, so a different method called phase-shift detection is used instead. I'm making an assumption here, but I imagine that this phase-shift measurement likely requires mid-range processing capabilities (not quite as high as visual processing, though).
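
Purely to illustrate the idea, here is how a measured phase shift maps to distance when the laser's intensity is modulated at some frequency (the 10 MHz figure is made up; real sensors choose it to suit their intended range):

```python
# Hypothetical phase-shift ranging illustration.
import math

C = 299_792_458.0  # speed of light, m/s
F_MOD = 10e6       # assumed 10 MHz intensity-modulation frequency

def phase_shift_distance(phase_rad: float) -> float:
    """Distance from the measured phase shift of the reflected wave.

    The wave covers the distance twice (out and back), so
    2 * d = (phase / (2 * pi)) * wavelength, giving d = c * phase / (4 * pi * f).
    """
    return C * phase_rad / (4 * math.pi * F_MOD)

print(phase_shift_distance(math.pi / 2))  # a quarter-cycle shift -> ~3.75 m
```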


Q3 - What happens when multiple robots have to work together and all have sonar sensors? How might you deal with sensor interference?

This is an interesting problem, which has probably caused issues for many robot designers. I think the easiest method to remove interference would be to have the transducers operate at different frequencies, say one at 40 kHz and the other at 50 kHz. If this wasn't possible, another method would be to have each system send out a special ping, used as a code or pattern, that would let each robot recognize its own pulses (a rough sketch of this idea follows below). If the robots were interacting with each other, not just trying to avoid each other while doing their own tasks, then a special pattern could be sent out initially as a preamble, and once they were both receiving each other's signals, they could begin doing whatever they were programmed for. For individual robots acting autonomously, a better option might be for one to send (and look to receive) short bursts while the other robots send and look for longer pulses.
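
Here is a rough sketch of that coded-ping idea (the signature pattern is made up, and a real system would correlate against noisy analog echoes rather than clean samples):

```python
# Hypothetical coded-ping matching: each robot emits its own signature
# and cross-correlates received samples against it, so pings from other
# robots fail to produce a strong correlation peak.
import numpy as np

MY_CODE = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)  # made-up signature

def find_own_echo(received, threshold=6.0):
    """Return the sample offset of our own echo, or None if not found."""
    corr = np.correlate(received, MY_CODE, mode="valid")
    peak = int(np.argmax(corr))
    return peak if corr[peak] >= threshold else None

# A noisy recording containing our echo starting at sample 20:
rx = np.random.normal(0, 0.3, 64)
rx[20:28] += MY_CODE
print(find_own_echo(rx))  # prints ~20 when our signature is present
```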


Q4 - Why don't we employ the Doppler shift method in robotics?

I think this is not as necessary in robotics because:

1) Robots don't typically move very fast, so there would be only the slightest change in frequency (see the rough calculation after this list). This would be hard to detect without more sophisticated electronics and would likely require greater processing capability. The added value wouldn't make it worth the trouble.

2) Robots are typically used in small spaces, so if a robot were sending out a continuous tone (or pulses of a tone), additional frequency-shifted signals would come back at it (its own ultrasound signals bounced around by specular reflection), which would require techniques to reduce the impact of these unwanted signals.

3) There's probably no need for the Doppler shift method in most robotics applications, because the main goal is usually measuring the distance to a slow-moving or stationary object or obstacle; the robot doesn't typically need to figure out how fast another object is travelling.
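
To put a rough number on point 1 (the 40 kHz transducer frequency and the 0.5 m/s robot speed are my own assumptions):

```python
# Approximate Doppler shift of a sonar echo off an approaching target:
# delta_f ~= 2 * v * f / c, valid for speeds much smaller than c.
SPEED_OF_SOUND = 341.4  # m/s at room temperature
F_PING = 40_000.0       # Hz, assumed transducer frequency

def doppler_shift(target_speed_mps: float) -> float:
    """Frequency shift (Hz) of the echo for a target closing at the given speed."""
    return 2 * target_speed_mps * F_PING / SPEED_OF_SOUND

print(doppler_shift(0.5))  # ~117 Hz, i.e. only ~0.3% of the 40 kHz ping
```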


Q5 - Since two eyes are better than one, would three eyes be even better than two?

I don't think there would be any great benefit to having three eyes on a robot or human instead of two. It's hard to imagine, but there's a possibility that it could make depth perception even better and improve the quality of vision; however, it would likely require greater processing power, which means longer processing times and more power usage. One benefit would be a redundant eye or camera, so that if one stopped functioning properly (due to damage, or because it was simply blurry for one reason or another), the person or robot would still have near-perfect vision and depth perception. It would also matter where the eye was placed and what direction it was viewing. If the third eye were on the back of the person or robot, it could be very beneficial (but would still require additional processing power). Maybe if it didn't look so weird, having a third eye wouldn't be so bad after all…