If an AI could learn language the way humans do, by building associations among sounds, objects in the world, and meanings, would it be a more competent conversational agent? That wouldn't mean the AI truly "understands" words in the same way humans do. But it would greatly aid the effort to build conversational systems that respond appropriately to conversational input.

This Ars Technica article describes research out of MIT that looks a lot like a first step toward teaching an AI to learn language in a manner similar to how people do:

Researchers at MIT have developed software that learns to recognize objects in the world using nothing but raw images and spoken audio. The software examined about 400,000 images, each paired with a brief audio clip describing the scene. By studying these pairings, the software learned to correctly identify which portions of the picture contained each object mentioned in the audio description.
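The general idea behind this kind of system is a joint embedding: image regions and audio frames are mapped into a shared vector space, so that a region depicting an object scores high against the audio frames where that object is named. The sketch below is an illustration of that idea only, not the MIT team's actual model; the shapes, the random stand-in features, and the `matchmap`/`similarity`/`ranking_loss` names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned features (assumed shapes, random values):
# image_feats: one embedding per spatial region of the image (regions x D)
# audio_feats: one embedding per audio frame (frames x D)
D, regions, frames = 16, 49, 100
image_feats = rng.standard_normal((regions, D))
audio_feats = rng.standard_normal((frames, D))

def matchmap(image_feats, audio_feats):
    """Similarity between every image region and every audio frame."""
    return image_feats @ audio_feats.T  # shape: (regions, frames)

def similarity(image_feats, audio_feats):
    """Collapse the matchmap to one image/audio similarity score by
    taking each audio frame's best-matching image region."""
    return matchmap(image_feats, audio_feats).max(axis=0).mean()

def ranking_loss(sim_pos, sim_neg, margin=1.0):
    """Margin loss pushing a matched pair to outscore a mismatched one."""
    return max(0.0, margin - sim_pos + sim_neg)

print(matchmap(image_feats, audio_feats).shape)  # (49, 100)
```

During training, a loss like `ranking_loss` (evaluated on true image/audio pairs versus shuffled ones) is what forces the two feature extractors to agree, and the peaks of the matchmap are what let the system point at "which portion of the picture" a spoken word refers to.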

h/t: Ars Technica