Acoustic-labial speaker verification - Avignon Université
Journal article: Pattern Recognition Letters, 1997

Acoustic-labial speaker verification

Pierre Jourlin
D. Genoud
H. Wassner

Abstract

This paper describes a multimodal approach for speaker verification. The system consists of two classifiers, one using visual features, the other using acoustic features. A lip tracker is used to extract visual information from the speaking face which provides shape and intensity features. We describe an approach for normalizing and mapping different modalities onto a common confidence interval. We also describe a novel method for integrating the scores of multiple classifiers. Verification experiments are reported for the individual modalities and for the combined classifier. The integrated system outperformed each subsystem and reduced the false acceptance rate of the acoustic subsystem from 2.3% to 0.5%. © 1997 Elsevier Science B.V.
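The abstract mentions mapping each modality's scores onto a common confidence interval and then combining the classifiers. As a minimal illustrative sketch (assuming min-max normalization and weighted-sum fusion; the paper's actual normalization and integration method is not detailed here, and all ranges and weights below are hypothetical):

```python
# Hypothetical sketch: map two classifier scores onto a common [0, 1]
# confidence interval, then fuse them with a weighted sum.
# Not the exact method of the paper; ranges and weights are assumptions.

def normalize(score, lo, hi):
    """Min-max map a raw classifier score into the [0, 1] interval."""
    return (score - lo) / (hi - lo)

def fuse(acoustic, visual, w_acoustic=0.7):
    """Weighted-sum fusion of two normalized confidence scores."""
    return w_acoustic * acoustic + (1.0 - w_acoustic) * visual

# Assumed example ranges: an acoustic log-likelihood in [-10, 0] and a
# lip-shape distance in [0, 5] (smaller distance => higher confidence).
a = normalize(-2.0, -10.0, 0.0)       # -> 0.8
v = 1.0 - normalize(1.0, 0.0, 5.0)    # -> 0.8
combined = fuse(a, v)                 # -> 0.8
accept = combined >= 0.5              # verification decision vs. a threshold
```

Raising the acceptance threshold trades false acceptances for false rejections; the paper's reported gain (false acceptance down from 2.3% to 0.5%) comes from the fusion itself, not from threshold tuning alone.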
Main file: prl97.pdf (419.1 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-02152874, version 1 (13-06-2019)

Licence

Copyright

Identifiers

  • HAL Id: hal-02152874, version 1

Cite

Pierre Jourlin, D. Genoud, H. Wassner. Acoustic-labial speaker verification. Pattern Recognition Letters, 1997, 18, pp. 853-858. ⟨hal-02152874⟩

Collections

UNIV-AVIGNON LIA
