
A concept in psychology is helping AI to better navigate our world

July 17, 2020 at 4:05 pm

The concept: When we look at a chair, regardless of its shape and color, we know that we can sit on it. When a fish is in water, regardless of its location, it knows that it can swim. This is known as the theory of affordance, a term coined by psychologist James J. Gibson. It states that when intelligent beings look at the world, they perceive not simply objects and their relationships but also their possibilities. In other words, the chair “affords” the possibility of sitting. The water “affords” the possibility of swimming. The theory could explain in part why animal intelligence is so generalizable: we often immediately know how to engage with new objects because we recognize their affordances.

The idea: Researchers at DeepMind are now using this concept to develop a new approach to reinforcement learning. In typical reinforcement learning, an agent learns through trial and error, beginning with the assumption that any action is possible. A robot learning to move from point A to point B, for example, will assume that it can move through walls or furniture until repeated failures tell it otherwise. The idea is that if the robot were instead first taught its environment’s affordances, it could immediately rule out a significant fraction of the failed trials it would otherwise have to perform. This would make its learning process more efficient and help it generalize across different environments.
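To make the mechanism concrete, here is a minimal, hypothetical sketch (not DeepMind’s implementation) of how a learned affordance model could restrict the actions an ordinary tabular Q-learning agent even considers; the class and function names are illustrative assumptions.

```python
import random

# Hypothetical illustration (not DeepMind's code): a tabular Q-learning-style
# agent consults a learned affordance model so it only considers actions the
# current state is believed to afford.

class AffordanceModel:
    """Tracks which actions have been observed to succeed in each state."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.afforded = {}  # state -> set of actions known to be feasible

    def update(self, state, action, succeeded):
        feasible = self.afforded.setdefault(state, set())
        if succeeded:
            feasible.add(action)
        else:
            feasible.discard(action)

    def feasible_actions(self, state):
        # Before anything is learned about a state, assume every action is
        # possible; never return an empty candidate set.
        return self.afforded.get(state) or set(self.actions)


def choose_action(q_table, affordances, state, epsilon=0.1):
    """Epsilon-greedy action selection restricted to afforded actions."""
    candidates = list(affordances.feasible_actions(state))
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda a: q_table.get((state, a), 0.0))
```

Restricting the candidate set this way is what spares the agent the repeated collisions it would otherwise need before discovering, action by action, that a wall is impassable.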

The experiments: The researchers set up a simple virtual scenario. They placed a virtual agent in a 2D environment with a wall down the middle and had the agent explore its range of motion until it had learned what the environment would allow it to do: its affordances. The researchers then gave the agent a set of simple objectives to achieve through reinforcement learning, such as moving a certain amount to the right or to the left. They found that, compared with an agent that hadn’t learned the affordances, the affordance-aware agent avoided any moves that would cause it to get blocked by the wall partway through its motion, which let it achieve its goals more efficiently.
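As a rough, hypothetical rendering of that setup (the paper’s actual environment and training details are not given in this summary), the toy gridworld below places a wall down the middle column; an exploration phase records which moves each free cell affords, and goal-directed learning would then sample only from those recorded moves.

```python
# Hypothetical toy version of the wall experiment: a small 2D grid with a
# wall down the middle column. An exploration phase records which moves each
# free cell affords before any goal-directed learning begins.

WIDTH, HEIGHT = 7, 7
WALL_X = 3                                   # column occupied by the wall
MOVES = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

def step(pos, move):
    """Return the new position, or None if the move is blocked."""
    dx, dy = MOVES[move]
    x, y = pos[0] + dx, pos[1] + dy
    if not (0 <= x < WIDTH and 0 <= y < HEIGHT) or x == WALL_X:
        return None
    return (x, y)

def learn_affordances():
    """Exploration phase: try every move from every free cell."""
    afforded = {}
    for x in range(WIDTH):
        for y in range(HEIGHT):
            if x == WALL_X:
                continue
            afforded[(x, y)] = {m for m in MOVES if step((x, y), m) is not None}
    return afforded

affordances = learn_affordances()
# A goal such as "move three cells to the right" can now be pursued while
# sampling only moves in affordances[state], so the agent never wastes a
# trial walking into the wall partway through its motion.
```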

Why it matters: The work is still in its early stages, so the researchers used only a simple environment and primitive objectives. But their hope is that their initial experiments will help lay a theoretical foundation for scaling the idea up to much more complex actions. In the future, they see this approach allowing a robot to quickly assess whether it can, say, pour liquid into a cup. Having developed a general understanding of which objects afford the possibility of holding liquid and which do not, it won’t have to repeatedly miss the cup and pour liquid all over the table to learn how to achieve its objective.
