The title of the thesis was "Humanoid Robot Control of Complex Postural Tasks based on Learning from Demonstration", and it was directed by Prof. Carlos Balaguer, from Universidad Carlos III de Madrid, and Dr. Thrishantha Nanayakkara, from King's College London. Its objective was to advance the development of complex behaviors in humanoid robots, so that in the future they can share our environment. The experimental validation of the thesis was carried out using the humanoid robot HOAP-3.
It is widely known by psychologists and neuroscientists that imitation learning is one of the first methods toddlers use to develop their skills. Furthermore, there is evidence that imitation between humans is goal-directed.
Recent studies show that apes are more inclined to imitate literally, while children show a greater tendency to over-imitate, in the sense that children attempt to improve the optimality of the learnt skills. Skill innovation is therefore an essential part of human behavior.
Consider a child learning motor skills from demonstrations performed by a parent. In this case, the problem of relating the parent's demonstrations to the child's own kinematic scale, weight and height, known as the correspondence problem, is one of the complex challenges that must be solved first. The correspondence problem is one of the crucial problems of imitation, and can be stated as the mapping of action sequences between the demonstrator and the imitator.
From that perspective, the child must find a solution that fits his own muscular strength, size, reachable space and kinematic characteristics while matching the level of optimality of the demonstrations performed by the parent. This problem can be solved by mapping movements made at different kinematic scales to a common domain, such as a set of optimality criteria that defines the goal of the action.
The thesis proposed a novel method for humanoid robots to acquire optimal behaviors from human demonstrations. We solved the correspondence problem by making comparisons in a common domain: a reward space defined by a multi-objective reward function. The experimental results show an advancement in how a humanoid robot can learn to imitate and innovate motor skills from demonstrations by human teachers with larger kinematic structures and different actuator constraints.
The goal of the behavior is represented through a reward function, whose shape is selected in accordance with the task. In the case of standing up from a chair, where it is important to maintain stability, the reward value is high when the robot is in a stable position and low when the robot is unstable.
In a similar way, if it is important in the task to minimize effort, the reward value is high when the robot's actuators exert small torques (and therefore little effort) and low when the torques are large (and therefore the effort is great). Consequently, the reward function acts as an attractor towards the behavior's goal.
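The two reward shapes described above can be sketched as simple functions. This is a minimal illustration, not the thesis implementation: the function names, the Gaussian shape, and the scale parameters are all assumptions chosen only to show high reward for stable, low-torque states.

```python
import math

# Hypothetical stability reward: ~1 when the center-of-mass offset from the
# support center is small, decaying towards 0 as the robot becomes unstable.
# (The 'width' scale in meters is an illustrative assumption.)
def stability_reward(com_offset, width=0.05):
    return math.exp(-(com_offset / width) ** 2)

# Hypothetical effort reward: ~1 when joint torques are small, decaying as
# the total squared torque grows. (The 'scale' in N*m is an assumption.)
def effort_reward(torques, scale=10.0):
    return math.exp(-sum(t * t for t in torques) / scale ** 2)
```

With these shapes, a perfectly balanced, torque-free posture scores 1 on both objectives, and the reward falls off smoothly as stability or effort degrades, which is what lets the reward act as an attractor.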
The reward function is formed by different components, depending on the objective of the action at each moment. This agrees with the theory of Marvin Minsky, one of the founders of Artificial Intelligence, which proposes that our brain manages different resources that compete with each other to fulfill different goals.
We collected 3D motion data of a group of human subjects performing several behaviors: the first a stand-up movement, and the second a sequence of behaviors such as walking to a door and opening it.
Then we defined a multi-objective reward function, as a measurement of goal optimality for both human and robot, which is defined for each subtask of the global behavior.
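One plausible way to realize such a subtask-dependent multi-objective reward is a weighted combination of per-objective components. The subtask names, weights, and component names below are purely illustrative assumptions, not values from the thesis:

```python
# Hypothetical per-subtask weights over the reward components
# (all names and numbers are illustrative assumptions).
SUBTASK_WEIGHTS = {
    "stand_up": {"stability": 0.7, "effort": 0.3},
    "walk":     {"stability": 0.5, "effort": 0.5},
}

def total_reward(subtask, components):
    """Combine per-objective rewards (each in [0, 1]) for a given subtask."""
    weights = SUBTASK_WEIGHTS[subtask]
    return sum(weights[name] * components[name] for name in weights)
```

Evaluating this combined reward along a trajectory, for both the human demonstration and a candidate robot movement, yields the two reward profiles that are compared in the next step.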
Finally, we optimized a policy to generate whole-body movements for the robot that produce a reward profile which is compared and matched with the human reward profile, producing an imitative behavior. Furthermore, we can search in the proximity of that solution in the solution space to improve the reward profile and innovate a new solution that is more beneficial for the humanoid.
In order to generate the movement we used a genetic algorithm called Differential Evolution, an optimization method that generates candidate robot movements whose reward is matched to the human's. The matching of the reward profiles is measured using the Kullback-Leibler divergence.
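The loop described above can be sketched in a few lines: a minimal DE/rand/1/bin optimizer that minimizes the Kullback-Leibler divergence between a normalized human reward profile and a candidate profile. This is a toy sketch under strong simplifications (the candidate profile is optimized directly rather than produced by a whole-body movement, and all hyperparameters are generic defaults), not the thesis implementation:

```python
import math
import random

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two normalized reward profiles."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=100, seed=0):
    """Minimal DE/rand/1/bin loop (a generic sketch, not the thesis code)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutate three distinct individuals other than the current one.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = cost(trial)
            if tc < costs[i]:  # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Illustrative usage: match a made-up 4-step human reward profile.
human = normalize([0.2, 0.5, 0.9, 0.7])
cost = lambda x: kl_divergence(human, normalize(x))
best, best_cost = differential_evolution(cost, [(0.01, 1.0)] * 4)
```

After the run, `normalize(best)` approximates the human profile and `best_cost` is close to zero; perturbing the converged population in place of a fresh restart is one way to read the "search in the proximity of the solution" innovation step.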