Tactile sensors aim to mimic the human sense of touch for robots. Advances in camera technology and GPUs make it possible to run deep learning models in real time for measurement applications. In this work, the world's first event-based tactile sensor is proposed to estimate the contact force during a grasp. An event-based camera (dynamic vision sensor) is combined with recurrent deep learning models (LSTM, CNN-LSTM, and ConvLSTM) to achieve a low-latency, high-accuracy solution. My research has raised more than $600,000 in funding and awards over the past years across multiple countries.
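The recurrent approach above can be sketched as follows: accumulated event-camera frames form a time sequence that a recurrent network maps to a scalar force estimate. This is a minimal illustrative sketch in PyTorch of the LSTM variant only; the input dimensions, hidden size, and class name are assumptions for illustration, not the actual model used in the work.

```python
import torch
import torch.nn as nn

class ForceLSTM(nn.Module):
    """Hypothetical sketch: regress grasp contact force from a sequence
    of flattened event frames (accumulated DVS events). Sizes are
    illustrative assumptions, not the author's architecture."""

    def __init__(self, input_dim=64, hidden_dim=128):
        super().__init__()
        # Each time step is one flattened event frame.
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # scalar contact force

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])   # force estimate at the last step

model = ForceLSTM()
events = torch.randn(4, 30, 64)        # 4 sequences of 30 event frames
force = model(events)
print(force.shape)                     # torch.Size([4, 1])
```

A CNN-LSTM or ConvLSTM variant would replace the flattened input with 2D event frames processed by convolutional layers before (or inside) the recurrence.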
Dedicated and highly self-motivated Ph.D. candidate in Artificial Intelligence with hands-on experience in Deep Learning, Computer Vision, and Robotics. I have worked in industry for years as an Artificial Intelligence Engineer, delivering machine learning solutions for a range of problems including surveillance, object detection, and product recognition in retail.