An adaptive model of person identification combining speech and image information
The paper introduces a combination of adaptive neural-network systems and a statistical method for integrating speech and face image information for person identification. The method allows models of persons to be developed and continuously adjusted as new speech and face images arrive. The method is illustrated with the modeling and classification of several persons whose speech and face images are presented incrementally. The model consists of two sub-networks, one for face image recognition and one for speaker recognition, with a higher-level layer making the final decision. In the speaker recognition sub-network, a text-dependent model is built using Evolving Connectionist Systems (ECOS). In the face image recognition sub-network, a composite profile technique is applied for face image feature extraction, and Zero Instruction Set Computing (ZISC) technology is used to build the neural network. In the higher-level conceptual subsystem, the final recognition decision is made using a statistical method. The experiments show that ECOS and ZISC are appropriate techniques for creating evolving models for speaker and face recognition individually. It is also shown that integrating the speech and image information with the statistical method improves the person identification rate. © 2004 IEEE.
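To illustrate the higher-level decision layer described above, the following is a minimal sketch of late (score-level) fusion of the two sub-networks' outputs. It assumes each sub-network produces a per-person confidence score; the weighted-sum rule, the weight values, and the score vectors are hypothetical and not necessarily the paper's exact statistical method.

```python
# Sketch of score-level fusion for two-modality person identification.
# Assumption: each sub-network (speaker, face) yields a confidence score
# per enrolled person; the weights and scores below are hypothetical.

def fuse_scores(speech_scores, face_scores, w_speech=0.5, w_face=0.5):
    """Combine per-person scores from the speaker and face sub-networks
    with a weighted sum, then return the top-scoring person."""
    assert speech_scores.keys() == face_scores.keys()
    fused = {
        person: w_speech * speech_scores[person] + w_face * face_scores[person]
        for person in speech_scores
    }
    best = max(fused, key=fused.get)
    return best, fused

# Hypothetical scores for three enrolled persons.
speech = {"alice": 0.70, "bob": 0.20, "carol": 0.10}
face   = {"alice": 0.40, "bob": 0.50, "carol": 0.10}

best, fused = fuse_scores(speech, face)
```

Here the speech modality favors "alice" while the face modality weakly favors "bob"; the fused decision resolves the disagreement, which is the benefit the abstract attributes to combining the two information sources.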