Artificial Intelligence and Robots Mirroring Real Human Intelligence and Manpower
Humans’ five senses work together to convey what we see, hear, smell, taste, and touch. But robots are still learning to recognize different tactile signals.
Robots that have been programmed to see or feel can’t use these signals nearly as interchangeably. To better bridge this sensory gap, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a predictive artificial intelligence (AI) that can learn to see by touching and learn to feel by seeing. (1)
Robotics, and not just AI, is showing signs of a reawakening, with major developments coming from startups and from R&D tied to universities and Big Tech firms. Teaching artificial intelligence to combine senses like vision and touch truly is important work. (2)
The system can also take tactile, touch-based input and turn it into a prediction of what the object looks like, a bit like those children’s discovery museums where you put your hands into closed boxes and try to identify the objects inside. When robots learn the way children learn, could they scale sensory intelligence?
So why is this interesting?
It gives robots a path to better self-learning. By looking at a scene, the model can predict the feeling of touching a flat surface or a sharp edge. By touching around blindly, the model can predict its interaction with the environment purely from tactile feedback. (3)
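To make the idea concrete, here is a minimal sketch in PyTorch of what cross-modal prediction could look like: one small encoder-decoder maps a camera frame to a predicted tactile image, and a second maps a tactile image back to a visual frame. This is an illustration under my own assumptions, not the CSAIL team’s actual architecture; the layer sizes and names (CrossModalNet, conv_block, and so on) are hypothetical.

```python
# Minimal sketch (not the authors' model): two small convolutional
# encoder-decoders for cross-modal prediction between vision and touch.
# Layer sizes and image resolutions are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Downsample by 2 and extract features.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

def deconv_block(in_ch, out_ch):
    # Upsample by 2 back toward image resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

class CrossModalNet(nn.Module):
    """Maps one modality (e.g. an RGB frame) to the other (e.g. a tactile image)."""
    def __init__(self, in_channels=3, out_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_channels, 32),
            conv_block(32, 64),
            conv_block(64, 128),
        )
        self.decoder = nn.Sequential(
            deconv_block(128, 64),
            deconv_block(64, 32),
            nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # images normalized to [-1, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One network per direction: "feel by seeing" and "see by touching".
vision_to_touch = CrossModalNet()
touch_to_vision = CrossModalNet()

rgb_frame = torch.randn(1, 3, 256, 256)       # dummy camera frame
predicted_touch = vision_to_touch(rgb_frame)  # predicted tactile image
```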
Is a Robot Renaissance Coming?
This type of AI could also be used to help robots operate more efficiently and productively in low-light environments without requiring advanced sensors, for instance, and as part of more general systems when used in tandem with other sensory simulation technologies. As robots learn to combine touch and sight better, they genuinely become more human-like. (4)
While our sense of touch gives us a channel to feel the physical world, our eyes help us quickly understand the full picture of those tactile signals. What if, in the next few years, robots are able to replicate this? What if they quickly “get it”?
What did this look like in the original research setting? Using a simple web camera, the team recorded nearly 200 objects, including tools, household products, and fabrics, being touched more than 12,000 times.
Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than 3 million visual/tactile-paired images. (5)
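As a rough illustration of how such a paired dataset could be assembled, the sketch below breaks two time-synchronized recordings, one from the web camera and one from the tactile sensor, into frames and saves them as aligned image pairs. The file layout, paths, and function names are hypothetical and are not the structure of the released VisGel data.

```python
# Illustrative sketch only: pairing synchronized camera and tactile-sensor
# frames into a VisGel-style dataset. File layout and names are hypothetical,
# not the layout of the actual VisGel release.
import cv2
import os

def extract_frames(video_path):
    """Return every frame of a recorded clip as a list of images."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def build_pairs(visual_clip, tactile_clip, out_dir):
    """Save aligned (visual, tactile) frame pairs from one touch interaction."""
    visual = extract_frames(visual_clip)
    tactile = extract_frames(tactile_clip)
    os.makedirs(out_dir, exist_ok=True)
    # Assumes the two recordings are time-synchronized frame-for-frame.
    for i, (v, t) in enumerate(zip(visual, tactile)):
        cv2.imwrite(os.path.join(out_dir, f"visual_{i:05d}.png"), v)
        cv2.imwrite(os.path.join(out_dir, f"tactile_{i:05d}.png"), t)

# Example usage with hypothetical paths:
# build_pairs("recordings/shoe_cam.mp4", "recordings/shoe_gel.mp4", "visgel_pairs/shoe")
```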
For instance, given tactile input on a shoe, the model could predict where the shoe was most likely being touched. The paired data helped encode features of the objects and the environment, allowing the machine learning model to self-improve. Datasets like this are only getting better. (6)
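As a toy example of the “where is the shoe being touched” idea, and not the paper’s actual method, the sketch below feeds a reference visual frame and a tactile reading into a small network that outputs a per-pixel probability map of the likely touch location. All names and layer sizes are assumptions.

```python
# Toy illustration (not the paper's method): predict a heatmap of likely
# touch locations on a reference visual frame from a tactile reading.
# Layer sizes and input shapes are illustrative assumptions.
import torch
import torch.nn as nn

class TouchLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Visual frame (3 channels) and tactile image (3 channels) are concatenated.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),  # one "touch likelihood" score per pixel
        )

    def forward(self, visual, tactile):
        scores = self.net(torch.cat([visual, tactile], dim=1))
        # Softmax over all pixels gives a probability map of the touch location.
        b, _, h, w = scores.shape
        return torch.softmax(scores.view(b, -1), dim=1).view(b, 1, h, w)

localizer = TouchLocalizer()
heatmap = localizer(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
```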
Next steps:
The current dataset only contains examples of interactions in a controlled environment. The team wants to build on this by gathering data in more unstructured settings. The reality in robotics is that there is plenty of room to grow, and by the 2030s we are far more likely to see robots working alongside humans in the community, both at work and at home.
This type of model could help create a more harmonious relationship between vision and robotics, particularly for object recognition, grasping, and better scene understanding. It could also support the development of seamless human-robot integration in an assistive or manufacturing setting. (7)
To learn more about Artificial Intelligence, refer to:
Artificial Intelligence Part 1
Artificial Intelligence Part 2
Artificial Intelligence Part 3
Artificial Intelligence In HealthCare