I am passionate about pushing the boundaries of novel sensing and multi-modal interaction for wearables and human-in-the-loop applications in AR/VR. I strive to understand humans better and, in turn, teach machines to interact with them in rich, natural ways using contextual AI and deep learning techniques.
My work involves:
1) building integrated multi-sensor hardware and software systems
2) understanding humans through user studies and data collection from people and their environments
3) analyzing and signal-processing the biological data these systems collect
4) applying machine learning or control methods to derive smart inferences and feedback for the human user
Examples of my work include a multi-modal speech/language/vision contextual AI system for AR interaction, a wearable haptic device that provides mobility assistance to impaired individuals, a smart surgical grasper that assists surgeons during robotic surgery, and a self-adapting lure system for unmanned mosquito-trap deployment.