TY - GEN
T1 - Robotic localization and separation of concurrent sound sources using self-splitting competitive learning
AU - Keyrouz, Fakheredine
AU - Maier, Werner
AU - Diepold, Klaus
PY - 2007
Y1 - 2007
N2 - We combine binaural sound-source localization and separation techniques for effective deployment in humanoid-like robotic hearing systems. Relying on the concept of binaural hearing, where human auditory 3D percepts are formed predominantly from the sound-pressure signals at the two eardrums, our robotic 3D localization system uses only two microphones placed inside the ear canals of a robot head equipped with artificial ears and mounted on a torso. The proposed localization algorithm exploits all the binaural cues encapsulated within the so-called Head-Related Transfer Functions (HRTFs). Taking advantage of the sparse representations of the ear input signals, the 3D positions of two concurrent sound sources are extracted. The source locations are determined by identifying which HRTFs the signals have been filtered with, using a well-known self-splitting competitive learning clustering algorithm. Once the source locations are identified, the sources are separated using a generic HRTF dataset. Simulation results demonstrate highly accurate 3D localization of the two concurrent sound sources and a very high Signal-to-Interference Ratio (SIR) for the separated sound signals.
AB - We combine binaural sound-source localization and separation techniques for effective deployment in humanoid-like robotic hearing systems. Relying on the concept of binaural hearing, where human auditory 3D percepts are formed predominantly from the sound-pressure signals at the two eardrums, our robotic 3D localization system uses only two microphones placed inside the ear canals of a robot head equipped with artificial ears and mounted on a torso. The proposed localization algorithm exploits all the binaural cues encapsulated within the so-called Head-Related Transfer Functions (HRTFs). Taking advantage of the sparse representations of the ear input signals, the 3D positions of two concurrent sound sources are extracted. The source locations are determined by identifying which HRTFs the signals have been filtered with, using a well-known self-splitting competitive learning clustering algorithm. Once the source locations are identified, the sources are separated using a generic HRTF dataset. Simulation results demonstrate highly accurate 3D localization of the two concurrent sound sources and a very high Signal-to-Interference Ratio (SIR) for the separated sound signals.
KW - HRTF
KW - Self-splitting competitive learning
KW - Sound localization
KW - Source separation
UR - http://www.scopus.com/inward/record.url?scp=34548740335&partnerID=8YFLogxK
U2 - 10.1109/CIISP.2007.369192
DO - 10.1109/CIISP.2007.369192
M3 - Conference contribution
AN - SCOPUS:34548740335
SN - 1424407079
SN - 9781424407071
T3 - Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing, CIISP 2007
SP - 340
EP - 345
BT - Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing, CIISP 2007
T2 - 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing, CIISP 2007
Y2 - 1 April 2007 through 5 April 2007
ER -