Acoustic-labial speaker verification

Abstract: This paper describes a multimodal approach to speaker verification. The system consists of two classifiers, one using visual features and the other using acoustic features. A lip tracker is used to extract visual information from the speaking face, providing shape and intensity features. We describe an approach for normalizing and mapping different modalities onto a common confidence interval, as well as a novel method for integrating the scores of multiple classifiers. Verification experiments are reported for the individual modalities and for the combined classifier. The integrated system outperformed each subsystem and reduced the false acceptance rate of the acoustic subsystem from 2.3% to 0.5%. © 1997 Elsevier Science B.V.
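The abstract's pipeline (per-modality score normalization onto a common confidence interval, followed by score-level fusion) can be sketched as below. This is an illustrative sketch only: the paper's exact normalization mapping, fusion rule, weights, score ranges, and decision threshold are not given in the abstract, so everything here is an assumption (min-max scaling and a weighted sum are standard stand-ins).

```python
# Hypothetical sketch of score normalization + fusion for two-classifier
# speaker verification. All constants (ranges, weight w, threshold) are
# assumptions, not values from the paper.

def normalize(score, lo, hi):
    """Map a raw classifier score onto a common [0, 1] confidence
    interval via min-max scaling, clipped at the interval bounds."""
    return min(1.0, max(0.0, (score - lo) / (hi - lo)))

def fuse(acoustic, visual, w=0.7):
    """Weighted-sum integration of the two normalized scores
    (w is a hypothetical weight favouring the acoustic classifier)."""
    return w * acoustic + (1.0 - w) * visual

def verify(acoustic_raw, visual_raw, threshold=0.6):
    """Accept the claimed identity when the fused confidence clears
    the decision threshold."""
    a = normalize(acoustic_raw, lo=-10.0, hi=10.0)  # hypothetical acoustic range
    v = normalize(visual_raw, lo=0.0, hi=100.0)     # hypothetical visual range
    return fuse(a, v) >= threshold
```

With both modalities mapped onto the same [0, 1] interval, a single threshold on the fused score trades off false acceptances against false rejections, which is the kind of operating point the reported 2.3% → 0.5% false-acceptance improvement refers to.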

Cited literature: 14 references

https://hal-univ-avignon.archives-ouvertes.fr/hal-02152874
Contributor: Pierre Jourlin
Submitted on: Thursday, June 13, 2019, 12:06:55 PM
Last modification on: Wednesday, June 19, 2019, 1:26:19 AM

File

prl97.pdf (publisher files allowed on an open archive)

Licence

Copyright

Identifiers

  • HAL Id: hal-02152874, version 1

Citation

Pierre Jourlin, D. Genoud, H. Wassner. Acoustic-labial speaker verification. Pattern Recognition Letters, Elsevier, 1997, 18, pp. 853-858. ⟨hal-02152874⟩
