Evolving autonomous locomotion of virtual characters in a simulated physical environment via neural networks and evolutionary strategies
The animation of virtual characters is a process that, although supported by various software and hardware tools, can be tedious and costly, especially when the character to be animated is highly complex and/or detailed. The method presented in this thesis attempts to automate or support this process by letting the character "learn" its movements autonomously. This is achieved by first modelling the character's physical properties (e.g. mass, moments of inertia, joints, degrees of freedom) in addition to its optical ones, so that the character can interact with a simulated physical environment. In the second step, the sensors (e.g. pressure, forces, angles, speed) and actuators (e.g. motors, "muscles", suspension elements) that the character uses are defined. Third, the sensors and actuators are connected to the inputs and outputs of a neural network whose bias values and link weights are initially uninitialized. These values are then tuned by evolutionary strategies to find natural-looking movements of the character in its physical environment.
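The pipeline described above — a neural controller mapping sensors to actuators, with its weights and biases optimized by an evolutionary strategy — can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual implementation: the network size, the (μ+λ) selection scheme, and in particular the `fitness` function (which here is a toy stand-in for the physics simulation) are all assumptions made for the example.

```python
import numpy as np

N_SENSORS = 4    # e.g. joint angles, contact pressure (hypothetical)
N_ACTUATORS = 2  # e.g. motor torques (hypothetical)
N_HIDDEN = 6

def n_params():
    # total number of weights + biases of a 4-6-2 feed-forward network
    return (N_SENSORS * N_HIDDEN + N_HIDDEN
            + N_HIDDEN * N_ACTUATORS + N_ACTUATORS)

def controller(params, sensors):
    """Map sensor readings to actuator commands through one hidden layer."""
    i = 0
    w1 = params[i:i + N_SENSORS * N_HIDDEN].reshape(N_SENSORS, N_HIDDEN); i += w1.size
    b1 = params[i:i + N_HIDDEN]; i += N_HIDDEN
    w2 = params[i:i + N_HIDDEN * N_ACTUATORS].reshape(N_HIDDEN, N_ACTUATORS); i += w2.size
    b2 = params[i:i + N_ACTUATORS]
    hidden = np.tanh(sensors @ w1 + b1)
    return np.tanh(hidden @ w2 + b2)

def fitness(params, rng):
    """Toy stand-in for the physics simulation: reward controllers whose
    actuator outputs track a fixed target response to random sensor input."""
    sensors = rng.standard_normal((32, N_SENSORS))
    target = np.tanh(sensors[:, :N_ACTUATORS])   # arbitrary stand-in goal
    error = controller(params, sensors) - target
    return -np.mean(error ** 2)                  # higher is better

def evolve(generations=30, mu=5, lam=20, sigma=0.1, seed=0):
    """Simple (mu + lambda) evolutionary strategy over the network parameters."""
    rng = np.random.default_rng(seed)
    parents = rng.standard_normal((mu, n_params())) * 0.5
    for _ in range(generations):
        # each offspring is a random parent plus Gaussian mutation noise
        offspring = np.array([
            parents[rng.integers(mu)] + sigma * rng.standard_normal(n_params())
            for _ in range(lam)
        ])
        pool = np.vstack([parents, offspring])
        # evaluate every candidate on the same fixed sensor batch
        scores = np.array([fitness(ind, np.random.default_rng(42)) for ind in pool])
        parents = pool[np.argsort(scores)[-mu:]]  # elitist: keep the mu best
    best = parents[-1]
    return best, fitness(best, np.random.default_rng(42))
```

In the thesis's setting, `fitness` would instead run the character in the simulated physical environment and score the resulting motion; the elitist (μ+λ) selection guarantees that the best fitness found never decreases across generations.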