Acoustic-labial speaker verification

Abstract: This paper describes a multimodal approach for speaker verification. The system consists of two classifiers, one using visual features, the other using acoustic features. A lip tracker is used to extract visual information from the speaking face, which provides shape and intensity features. We describe an approach for normalizing and mapping different modalities onto a common confidence interval. We also describe a novel method for integrating the scores of multiple classifiers. Verification experiments are reported for the individual modalities and for the combined classifier. The integrated system outperformed each subsystem and reduced the false acceptance rate of the acoustic subsystem from 2.3% to 0.5%. © 1997 Elsevier Science B.V.
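The normalization and score-integration steps summarized in the abstract can be sketched as follows. This is a minimal illustration only: the min-max normalization, the weighted-sum fusion rule, the weight, and the score ranges below are all assumptions, not the method described in the paper itself.

```python
def normalize(score, lo, hi):
    """Map a raw classifier score onto a common [0, 1] confidence interval.

    lo and hi stand for the minimum and maximum scores observed on
    development data; this min-max mapping is a hypothetical stand-in
    for the paper's actual normalization.
    """
    return max(0.0, min(1.0, (score - lo) / (hi - lo)))


def fuse(acoustic_conf, visual_conf, w_acoustic=0.7):
    """Combine two normalized confidences with a weighted sum.

    Both the weighted-sum rule and the weight are hypothetical; the
    paper's integration method is given in the full text.
    """
    return w_acoustic * acoustic_conf + (1.0 - w_acoustic) * visual_conf


def verify(acoustic_raw, visual_raw, threshold=0.5):
    """Accept the claimed identity when the fused confidence exceeds a threshold."""
    a = normalize(acoustic_raw, lo=-10.0, hi=10.0)  # hypothetical acoustic score range
    v = normalize(visual_raw, lo=0.0, hi=100.0)     # hypothetical visual score range
    return fuse(a, v) >= threshold
```

Mapping both modalities onto the same interval before fusion is what makes a single decision threshold meaningful across classifiers with otherwise incomparable score scales.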

Cited literature: 14 references

https://hal-univ-avignon.archives-ouvertes.fr/hal-02152874
Contributor: Pierre Jourlin
Submitted on: Thursday, June 13, 2019 - 12:06:55
Last modified on: Wednesday, June 19, 2019 - 01:26:19

File

prl97.pdf
Publisher files allowed on an open archive

License

Copyright (All rights reserved)

Identifiers

  • HAL Id : hal-02152874, version 1

Citation

Pierre Jourlin, D. Genoud, H. Wassner. Acoustic-labial speaker verification. Pattern Recognition Letters, Elsevier, 1997, 18, pp. 853-858. ⟨https://doi.org/10.1016/S0167-8655(97)00070-6⟩. ⟨hal-02152874⟩
