With the rapid development of cutting-edge DMS technology and its deep learning components, the demands on computing power and memory continue to grow. Most existing ECUs cannot keep pace with these demands because their fixed hardware designs are difficult to update, which limits the achievable performance of the DMS. By adopting an extendable AI accelerator instead, we can overcome these limitations and unlock the full potential of the DMS. We will showcase how deep learning models need only be transmitted to our extendable AI accelerator module over a USB or PCIe interface to obtain inference output, with minimal changes to your current software design, greatly simplifying system verification and qualification.
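The integration pattern described above can be sketched as follows. This is a minimal illustrative sketch, not a real vendor API: the names `AcceleratorLink`, `load_model`, and `infer` are assumptions chosen to show the idea that the host application only hands a model blob to the external module and reads back inference results, leaving the rest of the ECU software untouched.

```python
class AcceleratorLink:
    """Stand-in for a USB/PCIe transport to the accelerator module.

    Hypothetical interface for illustration; a real driver would wrap
    bulk USB transfers or PCIe DMA behind calls like these.
    """

    def __init__(self):
        self._model = None

    def load_model(self, model_blob: bytes) -> None:
        # In a real system this would transfer the compiled model to the
        # accelerator; here we simply store it to model the handshake.
        self._model = model_blob

    def infer(self, frame: list) -> list:
        # Placeholder "inference": scale each input value so the host
        # side of the pattern can be exercised end to end.
        if self._model is None:
            raise RuntimeError("no model loaded on accelerator")
        return [x * 0.5 for x in frame]


def run_dms_pipeline(link: AcceleratorLink, frame: list) -> list:
    # Host-side DMS code stays unchanged: inference is simply delegated
    # to the external module instead of the ECU's fixed hardware.
    return link.infer(frame)


if __name__ == "__main__":
    link = AcceleratorLink()
    link.load_model(b"compiled-dms-model")
    print(run_dms_pipeline(link, [1.0, 2.0]))  # [0.5, 1.0]
```

Because the host only calls `load_model` and `infer`, upgrading the accelerator module (or the model it runs) does not require re-validating the rest of the software stack, which is the simplification the text refers to.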