The Multimodal Perception Lab focuses on human-centered sensing and multimodal signal processing methods to observe, measure, and model human behavior. These methods are used in applications that facilitate behavioral training and enable human-robot interaction (HRI). The focus is mainly on the vision and audio modalities, with deep neural networks forming the backbone of the underlying formalism. Specialties of the lab include Multimodal Skill Assessment, Multimodal Conversational Agents, and Indian Sign Language Synthesis.
News: MULTIPLE Research Assistant/Associate positions available, please email jdinesh at iiitb dot ac dot in
News: Congrats to Annapurna, journal paper accepted in Multimedia Tools and Applications!!
News: ICMI 2022 successfully organized in Bangalore, India!!
MPL Lab Academic Reco Policy: [PLEASE DO NOT ASK FOR COURSE/PE RECOs]
MPL Master's Thesis => PhD reco
Thesis roadmap:
MTech: VR (2nd Sem), AVR + PE/RE (3rd Sem), Thesis
iMTech: VR (6th Sem), AVR + PE/RE (7th Sem), PE/RE (8th Sem), Thesis prep (9th Sem), Thesis