The Multimodal Perception Lab focuses on human-centered sensing and multimodal signal processing methods to observe, measure, and model human behavior. These methods are applied to facilitate behavioral training and to enable human–robot interactions (HRI). The focus is mainly on the vision and audio modalities, with deep neural networks forming the backbone of the underlying formalism. Specialities of the lab include Multimodal Skill Assessment, Multimodal Conversational Agents, and Indian Sign Language Synthesis.
News: Looking for an exceptional Research Assistant for a 1-year position starting Jan/Feb. Kindly apply. (Short-term 3/6-month interns, please do not apply.)
News: PhD student Annapurna's paper has been accepted in Expert Systems with Applications (Impact Factor: 5.45)!
MPL Lab Academic Reco Policy: [PLEASE DO NOT ASK FOR COURSE/PE RECOs]
An MPL Masters thesis is required for a PhD reco.
Thesis roadmap:
MTech: VR (2nd Sem), AVR+PE/RE (3rd Sem), Thesis
iMTech: VR (6th Sem), AVR+PE/RE (7th Sem), PE/RE (8th Sem), Thesis prep (9th Sem), Thesis