Multi-task learning with Transformers for In-Cabin Monitoring

Hear from:

Jungyong Lee
Vision AI Specialist, LG Electronics

Released on July 04, 2023

In-cabin monitoring in automotive mobility environments is becoming increasingly important due to tightening safety regulations and more complex Human-Machine Interface (HMI) requirements. However, because embedded systems for HMI are usually limited in computational power, algorithms need to be faster and duplicated work should be eliminated. In addition, the recent trend of integrating with existing systems, such as head-unit integration, requires algorithms to be smaller and lighter. In this work, we propose a multi-task learning technique suitable for various in-cabin scenarios. Our approach uses a single shared backbone and learns multiple tasks through a transformer decoder. As a result, features such as object detection, keypoint detection, and behavior detection can run with lower power and computation than the existing approach, in which each algorithm has its own backbone. In addition, the model size has been reduced, which makes model updates easier, and the decoder part can be updated on its own. Finally, since our network is designed using only operations commonly supported by existing commercial Systems on Chip (SoCs), the model can be ported without difficulty and achieves high efficiency.
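The abstract describes a single shared backbone whose features feed one transformer decoder, with lightweight per-task heads on top. The sketch below illustrates that general layout in PyTorch; the backbone choice, query count, head dimensions, and task names are assumptions made for illustration and are not details taken from the talk.

```python
# Minimal sketch of a shared-backbone, transformer-decoder multi-task model.
# Layer sizes, task names, and query counts are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision


class MultiTaskCabinModel(nn.Module):
    def __init__(self, d_model=256, num_queries=50):
        super().__init__()
        # Single shared backbone (ResNet-18 here, chosen only for the sketch).
        backbone = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.input_proj = nn.Conv2d(512, d_model, kernel_size=1)

        # Learned queries and a standard transformer decoder shared by all tasks.
        self.queries = nn.Embedding(num_queries, d_model)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=8, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=4)

        # Lightweight per-task heads on top of the shared decoder output.
        self.detection_head = nn.Linear(d_model, 4 + 10)  # box + 10 classes (assumed)
        self.keypoint_head = nn.Linear(d_model, 17 * 2)   # 17 keypoints (assumed)
        self.behavior_head = nn.Linear(d_model, 5)        # 5 behavior classes (assumed)

    def forward(self, images):
        # Shared feature extraction: the backbone runs once per frame.
        feats = self.input_proj(self.backbone(images))          # (B, d, H, W)
        memory = feats.flatten(2).transpose(1, 2)                # (B, H*W, d)

        # One decoder pass produces features reused by every task head.
        queries = self.queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        decoded = self.decoder(queries, memory)                  # (B, num_queries, d)

        return {
            "detection": self.detection_head(decoded),
            "keypoints": self.keypoint_head(decoded),
            "behavior": self.behavior_head(decoded),
        }


if __name__ == "__main__":
    model = MultiTaskCabinModel()
    out = model(torch.randn(1, 3, 224, 224))
    print({k: v.shape for k, v in out.items()})
```

Because only the decoder and heads are task-specific in this layout, updating a single feature amounts to swapping the decoder-side weights while the shared backbone stays untouched, which matches the update scheme the abstract mentions.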
