By C. N. Thai
Researcher at the University of Georgia, USA
Most, if not all, of my beginning robotics students were surprised when I started my robotics short courses by mentioning that they had been doing "Robotics" in their everyday activities, and doing it very well too, all by themselves.
Fig. 1.1 shows how humans use their senses to perceive the external World (i.e., Perception) and feed those Sensations into their Cognitive process (i.e., Thinking) to devise appropriate Actions/Reactions to the situation at hand.
These Actions/Reactions in turn change some aspects of the external World, resulting in new Sensations that trigger the next World-Human Interactions cycle, and so forth.
Fig. 1.1 World-Human Interactions Cycle
Thus, in a way, doing Robotics is just an attempt to reproduce this interactions cycle on a robot, as shown in Fig. 1.2. In the "Autonomous" option (inner loop), mechanical or electronic Sensors convert their interactions with the World into Input Data sent to the Robot/Computer, which was previously Programmed by the user to deal with Events deemed important to the operation of the Robot.
Fig. 1.2 World-Robot Interactions Cycle
Using this user-programmed Logic, the Robot outputs appropriate commands to its Actuators, which can then change the Sensors' perspective on the World. This in turn generates new Input Data for the Robot/Computer to process anew, through another interactions cycle.
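The autonomous inner loop described above can be sketched in a few lines of code. This is a minimal illustration only; the sensor, logic, and actuator functions below are hypothetical placeholders, not ROBOTIS APIs.

```python
# Sketch of the "Autonomous" inner loop of Fig. 1.2 (all names illustrative).

def read_sensors():
    """Convert World interactions into Input Data (here, a fake distance)."""
    return {"distance_cm": 25}

def user_logic(inputs):
    """User-programmed Logic: react to Events deemed important."""
    if inputs["distance_cm"] < 30:      # an obstacle Event
        return "turn_left"
    return "go_forward"

def drive_actuators(command):
    """Output the command to the Actuators, changing the World."""
    print(f"Actuator command: {command}")

# A few passes of the interactions cycle; a real robot repeats this forever.
for _ in range(3):
    data = read_sensors()
    command = user_logic(data)
    drive_actuators(command)
```

Each pass of the loop mirrors one turn of the World-Robot Interactions cycle: sense, decide, act, and then sense the changed World again.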
When the robot does not have enough sophistication in its sensing and processing capacities, the human user handles all the sensing and decision-making, and sends direct commands to the robot through a remote-control device, telling it what to do (i.e., the "Remote Control" outer loop in Fig. 1.2).
Of course, depending on the robotic application's needs, one can have a gradual variation between "100% autonomous" and "100% remote-control" operation. For example, consider the current situation for "smart cars".
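One way to picture this gradual variation between the two loops of Fig. 1.2 is a simple command arbiter: the robot obeys the remote operator, but its own logic can veto unsafe commands as the autonomy level rises. The function below is a hedged sketch with made-up names, not any ROBOTIS mechanism.

```python
# Illustrative blend of the "Remote Control" and "Autonomous" loops.

def choose_command(operator_cmd, sensor_data, autonomy):
    """autonomy ranges from 0.0 (pure remote control) to 1.0 (fully autonomous)."""
    if autonomy >= 1.0 or operator_cmd is None:
        # Fully autonomous (or no operator input): decide from sensors alone.
        return "stop" if sensor_data["distance_cm"] < 10 else "go_forward"
    if autonomy > 0.0 and sensor_data["distance_cm"] < 10:
        # Semi-autonomous: the operator drives, but the robot vetoes collisions.
        return "stop"
    return operator_cmd  # Pure remote control: obey the operator verbatim.
```

A "smart car" sits somewhere in the middle of this range: the driver steers, while onboard logic intervenes for lane keeping or emergency braking.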
One approach to increasing the sensing and processing capacities of the robot is simply to build those extra capacities into the robot itself, at great cost. For consumer robotics, however, a cheaper "Co-Controller" approach is usually followed: leverage other computing platforms that already have the features the robot designer needs, such as machine vision, multimedia services, and speech synthesis or recognition (see Fig. 1.3).
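The Co-Controller idea can be sketched as simple delegation: the robot's controller hands heavy work, such as machine vision, to a richer platform and acts on the result. In this illustration the co-controller is just a local function standing in for a phone or PC service; in practice the link would be a wireless connection, and none of these names come from a real ROBOTIS API.

```python
# Sketch of the Co-Controller approach of Fig. 1.3 (all names illustrative).

def co_controller_vision(image):
    """Runs on the powerful platform: classify the scene (stubbed)."""
    return "obstacle" if sum(image) > 10 else "clear"

def robot_step(image):
    """Runs on the robot: ask the co-controller, then act locally."""
    label = co_controller_vision(image)   # perception offloaded elsewhere
    return "turn_left" if label == "obstacle" else "go_forward"
```

The robot itself stays cheap and simple; only the delegated perception requires serious computing power.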
ROBOTIS follows this approach…
For the ROBOTIS PLAY 700 and ROBOTIS DREAM systems, the user can also leverage the features of the very popular SCRATCH 2 IDE from MIT via ROBOTIS R+SCRATCH, a Windows OS helper application.
Fig. 1.3 Co-Controller Approach to leverage additional capabilities for ROBOTIS robots.
About the Author:
Chi Thai retired in 2016 after 31 years of teaching and research at the University of Georgia (College of Engineering) in Educational Robotics, Multi-Spectral Imaging Applications for Plant Health Monitoring & Horticultural Products Quality Characterization, and Agricultural Systems Modelling and Simulation.