Multimodal Perception Lab

The Multimodal Perception Lab focuses on human-centered sensing and multimodal signal processing methods to observe, measure, and model human behavior. These methods are applied to facilitate behavioral training and to enable human-agent interaction (HRI). The focus is mainly on the vision and audio modalities, with deep neural networks forming the backbone of the underlying formalism. Specialities of the lab include Multimodal Skill Assessment, Multimodal Conversational Agents, and Indian Sign Language Synthesis.

News: SignPose: Sign Language Animation Through 3D Pose Lifting has been accepted to the Crossmodal Social Animation (XSAnim) workshop at ICCV, which focuses on the intersection of human-centered vision and graphics. ICCV is a Core A* conference in computer vision. Congrats Vijay and Shyam!!

MPL Lab Academic Reco Policy: [PLEASE DO NOT ASK FOR COURSE/PE RECOs]

MPL Master's Thesis => PhD reco

Thesis roadmap:

MTech: VR (2nd Sem), AVR+PE/RE (3rd Sem), Thesis

iMTech: VR (6th Sem), AVR+PE/RE (7th Sem), PE/RE (8th Sem), Thesis prep (9th Sem), Thesis
