Gerhard Rigoll

Prof. Dr.


Personal profile

Scientific Career

The research of Prof. Rigoll (b. 1958) deals with all aspects of pattern recognition for multimodal human-machine interaction. Subfields include speech processing, audiovisual information processing, handwriting recognition, gesture and emotion recognition, face detection and recognition, object tracking, and interactive graphical systems. He is the author or co-author of over 500 publications and has served on many different programme committees. He has also been involved in numerous expert panels in Germany and internationally.

After studying technical cybernetics in Stuttgart, he became a research assistant at the Fraunhofer Institute (IAO) in Stuttgart. He obtained his doctoral degree in 1986 with a thesis on automatic speech recognition. He was then a postdoctoral fellow at the IBM Thomas J. Watson Research Center in Yorktown Heights, USA, until 1988. After qualifying as a lecturer (habilitation) in Stuttgart, he was a visiting scientist at the NTT Human Interface Laboratory in Tokyo from 1991 to 1993. From 1993 to 2001, he was professor of computer engineering at Gerhard Mercator University in Duisburg, before accepting his current position at TUM in 2002.

Expertise related to UN Sustainable Development Goals

In 2015, UN member states agreed on 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. Prof. Rigoll's work contributes to the following SDGs:

  • SDG 8 - Decent Work and Economic Growth
  • SDG 11 - Sustainable Cities and Communities
  • SDG 12 - Responsible Consumption and Production
  • SDG 16 - Peace, Justice and Strong Institutions

Fingerprint

The research topics in which Gerhard Rigoll is active are labelled below; these topic labels are derived from his published works and together form a unique fingerprint.
