There are self-modifying computer programs that “learn” from success and failure. Chess-playing computers, for example, become better through repeated games against humans.
Could a similar robot also learn to speak? If the robot gets the same input that a child gets when learning to speak, should it not be possible in principle?
Notice how the question zigzags between child and machine. We say that the robot learns. We say that the child gets input. We speak of the robot as if it were a child. We speak of the child as if it were a robot. Finally, we take this linguistic zigzagging seriously as a fascinating question, perhaps even a great research task.
An AI expert and prospective father who dreamed of this great research task took the following ambitious measure. He equipped his whole house with cameras and microphones to document all parent-child interactions during the child’s first years. Why? He wanted to know exactly what kind of linguistic input a child gets when it learns to speak. At a later stage, he might give a self-modifying robot the same input and test whether it, too, learns to speak.
How did the project turn out? The personal experience of raising the child led the AI expert to question the whole project of teaching a robot to speak. How could a personal experience lead to the questioning of a seemingly serious scientific project?
Here, I could start babbling about how warmly social children are, compared to cold machines. How they learn in close relationships with their parents. How they curiously and joyfully take the initiative, rather than calculatingly await input.
The problem is that such babbling on my part would make it seem as if the AI expert had simply been wrong about robots and children. That he did not know the facts, but is now better informed. It is not that simple. For the idea behind the project presupposed unnoticed linguistic zigzagging. Already in asking the question, we blur the boundaries between robots and children. Already in the question, we have half answered it!
We cannot be content with responding to the question in the headline with a simple, “No, it cannot.” We must reject the question as nonsense. Deceitful zigzagging creates the illusion that we are dealing with a serious question, worthy of scientific study.
None of this rules out, however, that computational linguistics will increasingly use self-modifying programs, and with great success. But that is another question.
Beard, Alex. “How babies learn – and why robots can’t compete.” The Guardian, 3 April 2018.