Many people in the AI field are hard at work on perception and cognition. Both government and industry are interested because autonomous robots would solve the expensive, communications-intensive problem we face when today's machines encounter a situation, effectively say "OK, now what do I do?", and wait for a human response. In situations where time is precious, like bomb disposal, a battlefield, or the surface of a distant planet, it would be much better if the machine had more autonomy.
The difficulty of the task has been underestimated. Even animals that can walk within hours of birth have had months to build up coordination in the womb, and have millions of years of genetic programming hard-wired into them. Watching your own child learn to move and react to their surroundings is an amazing thing -- but it takes a long time to reach "competence."
Admittedly, we make the job simpler by narrowing the tasks for which competence is necessary: 'grab and cut the wires,' 'send pictures of boxy things with turrets,' or 'pick up small rocks.' But even these tasks take time to figure out, and then to teach to a machine.
I remain a fan of AL over AI -- Artificial Learning, that is. Self-organizing systems are amazing things, and their individual parts don't have to be that complicated; it's the behaviour of the whole that is complex. Models of schooling fish are fairly easy to build (a toy example is sketched below): very simple rules dictate the movement of each individual, yet the complex swirls that so entrance us emerge quickly once a critical number of individuals exist. And I'm pretty sure this will be true for the robots of the future too.
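To make that concrete, here is a minimal Boids-style sketch in Python (assuming numpy; the rules, radii, and weights are illustrative guesses, not any particular published model). Each individual follows only three local rules: steer away from anything too close, roughly match the heading of its visible neighbours, and drift toward their centre. Nothing in the code describes a "school," yet school-like motion emerges from the interactions.

```python
import numpy as np

# Toy 2D "schooling fish" model: each individual obeys three local rules
# (separation, alignment, cohesion). Every constant here is an
# illustrative guess, not a tuned or published parameter set.

N = 200                  # number of individuals
WORLD = 10.0             # individuals live on a 10x10 wrap-around world
NEIGHBOR_RADIUS = 1.0    # how far an individual can "see"
SEPARATION_RADIUS = 0.2  # personal space
MAX_SPEED = 0.05

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, WORLD, size=(N, 2))
vel = rng.uniform(-1.0, 1.0, size=(N, 2)) * MAX_SPEED

def step(pos, vel):
    # pairwise displacements and distances between all individuals
    diff = pos[:, None, :] - pos[None, :, :]      # diff[i, j] = pos[i] - pos[j]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                # ignore self

    neighbors = (dist < NEIGHBOR_RADIUS).astype(float)
    too_close = (dist < SEPARATION_RADIUS).astype(float)
    counts = neighbors.sum(axis=1, keepdims=True)
    safe = np.maximum(counts, 1.0)

    # Rule 1: separation -- steer away from anyone inside personal space
    separation = (diff * too_close[..., None]).sum(axis=1)

    # Rule 2: alignment -- match the average velocity of visible neighbours
    alignment = np.where(counts > 0, neighbors @ vel / safe - vel, 0.0)

    # Rule 3: cohesion -- drift toward the centre of visible neighbours
    cohesion = np.where(counts > 0, neighbors @ pos / safe - pos, 0.0)

    vel = vel + 0.05 * separation + 0.05 * alignment + 0.01 * cohesion

    # cap speed so nothing teleports across the world in one step
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel * np.minimum(1.0, MAX_SPEED / np.maximum(speed, 1e-9))

    return (pos + vel) % WORLD, vel

for _ in range(500):
    pos, vel = step(pos, vel)

# crude summary of where the school ended up
print("spread of the school:", pos.std(axis=0))
```

The point of the sketch is only that the individual is trivial -- a handful of arithmetic rules -- and the interesting structure lives entirely in the interactions between enough of them.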
...and that was the scenario for the Terminator movies.