A “quick and sideways” glance at Embodied Artificial Intelligence / AI Humanoid Robots, Robo Dogs & Co.
Embodied AI (Embodied Artificial Intelligence) is a type of AI that uses robots to interact with the physical environment.
Through trial and error, the AI agent develops an abstract understanding of the world's spatial and temporal dimensions and learns to reach its goals. For example, an AI robot might learn to walk, to remove obstacles, or to perform complex tasks by observing humans. It learns by observation and by trial and error, much as a child does: a self-learning AI.
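To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one classic way an agent can learn from reward alone. The tiny one-dimensional corridor world, the reward values, and all hyperparameters are illustrative assumptions, not details of any particular robot.

```python
import random

# Hypothetical toy world: a 1-D corridor of 5 cells; the goal is at the right end.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                    # step left, step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2     # learning rate, discount, exploration rate

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(200):                  # 200 learning episodes
    s = 0                             # start at the left end
    while s != GOAL:
        # Explore occasionally, otherwise act greedily on current knowledge.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0   # reward only at the goal
        # Q-learning update: nudge the estimate toward
        # reward + discounted best value of the next state.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

Real embodied agents replace the toy table with deep networks and the corridor with physics, but the loop of act, observe reward, update is the same.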
Embodied AI combines machine learning, computer vision, and robotics. It’s rooted in the concept of “embodied cognition,” which suggests that intelligence emerges from how the body interacts with its surroundings.
Embodied AI has applications in humanoid robots, personal assistance robotics, virtual assistants, and autonomous vehicles.
Embodied AI offers potential remedies for limitations of current AI technologies, which depend on large amounts of data to produce reliable output. For example, it can bridge the gap between digital AI and real-world applications by integrating physical bodies and sensory capabilities into AI systems.
Embodied AI perception is egocentric, meaning it encodes objects relative to the agent itself. This differs from allocentric perception, which encodes objects relative to other objects or to the environment.
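As a rough illustration of the difference, the sketch below converts an allocentric (world-frame) object position into an egocentric (agent-frame) one with a 2-D rigid transform. The scene, the agent's pose, and the axis convention (x = ahead, y = left) are made-up values for demonstration only.

```python
import math

def world_to_agent(obj_xy, agent_xy, agent_heading):
    """Convert an allocentric (world-frame) point to egocentric
    (agent-frame) coordinates: translate by the agent's position,
    then rotate by the inverse of its heading."""
    dx = obj_xy[0] - agent_xy[0]
    dy = obj_xy[1] - agent_xy[1]
    cos_h, sin_h = math.cos(-agent_heading), math.sin(-agent_heading)
    return (dx * cos_h - dy * sin_h, dx * sin_h + dy * cos_h)

# Hypothetical scene: a cup at world position (4, 3); the agent stands
# at (1, 1) facing 90 degrees (along the world y-axis).
x, y = world_to_agent((4.0, 3.0), (1.0, 1.0), math.radians(90))
print(f"cup in agent frame: x={x:.2f} (ahead), y={y:.2f} (left)")
```

The same cup keeps one fixed allocentric position but gets a new egocentric position every time the agent moves, which is exactly why the two encodings feel so different to a learning system.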
Welcome to the Brave New World!
Some of the short videos I include are quite creepy, but we had better get used to it.
The field of Embodied AI and humanoid robots has exploded in just the last few years.
Personally, I am quite disturbed by this development, especially in the military domain.
Remember Frankenstein?
With the rush towards AGI (Artificial General Intelligence) and Embodied AI Robots, we humans have unleashed a Genie that we will not be able to put back into the bottle.
Good luck to the future.
AMECA – an Embodied AI Robotic Humanoid with astonishing expressiveness in facial expressions and gestures
Ameca is a Robotic Humanoid created by Engineered Arts.
The first generation of Ameca was developed at Engineered Arts in the UK.
The project started in February 2021, with the first video revealed publicly in December 2021.
Ameca is currently associated with the Museum of the Future’s robotic family, where it can interact with visitors.
Ameca is primarily designed as a platform for further developing robotics technologies involving human-robot interaction. It utilizes embedded microphones, binocular eye-mounted cameras, a chest camera, and facial recognition software to interact with the public.
Interactions can be governed by either GPT-3 or human telepresence. It also features articulated motorized arms, fingers, neck and facial features.
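As a purely illustrative sketch (not Engineered Arts' actual software), an interaction loop of this kind might look as follows. All of the functions here – listen(), recognize_face(), llm_reply(), speak() – are hypothetical stand-ins for the subsystems the text describes: microphones, eye cameras with face recognition, a language model such as GPT-3, and speech output.

```python
# Hypothetical interaction loop for a social robot; none of these
# functions are Ameca's real API -- they stand in for the subsystems
# described above (microphones, cameras, face recognition, GPT-3).

def listen() -> str:
    """Stub: return the visitor's transcribed speech."""
    return "Hello, what are you?"

def recognize_face() -> str:
    """Stub: return an identifier for the face seen by the eye cameras."""
    return "visitor_42"

def llm_reply(prompt: str) -> str:
    """Stub: query a large language model (e.g. GPT-3) for a reply."""
    return "I am a humanoid robot built to study human-robot interaction."

def speak(text: str) -> None:
    """Stub: drive the speech and facial-animation hardware."""
    print(f"[robot says] {text}")

def interaction_step() -> None:
    visitor = recognize_face()
    utterance = listen()
    # Give the language model conversational context before answering.
    reply = llm_reply(f"Visitor {visitor} says: {utterance}\nRobot replies:")
    speak(reply)

interaction_step()
```

In telepresence mode, the llm_reply() step would simply be replaced by a human operator typing or speaking the response.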
AI-DA the humanoid robot artist
Ai-Da, the humanoid robot artist, was conceived in 2019 by gallerist Aidan Meller as an AI art generator embodied in a life-like humanoid robot.
The hardware was built in collaboration with Engineered Arts.
The graphics algorithms that allow it to draw were developed by AI researchers at the University of Oxford, and its drawing arm was developed by students from the School of Electronic and Electrical Engineering at the University of Leeds.
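To give a feel for how a camera image can become arm movements, here is a deliberately crude sketch: threshold the image gradient as a makeshift edge detector, then emit each edge pixel as a pen waypoint. This is a generic illustration under my own assumptions, not Ai-Da's actual algorithm.

```python
import numpy as np

def image_to_waypoints(img: np.ndarray, threshold: float = 0.25):
    """Generic illustration, not Ai-Da's pipeline: turn a grayscale
    image into pen waypoints by thresholding the image gradient
    (a crude edge detector), so the arm traces outlines."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    edges = magnitude > threshold * magnitude.max()
    # Each edge pixel becomes one (row, col) waypoint for the drawing arm.
    return [tuple(p) for p in np.argwhere(edges)]

# Tiny synthetic "portrait": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(image_to_waypoints(img)[:5])  # first few pen positions
```

A real drawing system would also order the waypoints into continuous strokes and convert them to joint angles, but the image-to-path idea is the core of it.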
Aidan Meller presented outputs from Ai-Da at a solo show called Ai-Da: Portrait of a Robot at the Design Museum in London, including "self-portraits", an apparent paradox given that a robot has no "self".
This raised questions about identity in the digital age and about the effects artificial intelligence may have on art in the future. Ai-Da was also the first humanoid to devise a font, which was displayed at the Design Museum.