
Seeing around corners with Prof. Andreas Velten

Apr 2, 2019 | AutoSens | Blog, Interviews, Latest News, Other

Prof. Andreas Velten, Department of Biostatistics and Medical Informatics, Department of Electrical and Computer Engineering

We caught up with Prof. Andreas Velten, Department of Biostatistics and Medical Informatics and Department of Electrical and Computer Engineering, to share his views on developing new imaging methods that can see around corners, the industry-wide challenge of imaging systems that have yet to match the performance of the human eye, and the importance of dialogue between basic research and short-term applications.

Prof. Velten will be presenting “Robust Inexpensive Frequency Domain LiDAR using Hamiltonian Coding” at AutoSens in May 2019.

You worked as a Postdoctoral Associate at the MIT Media Lab, what did you work on there?

I developed ultra-fast imaging systems that capture videos of light propagating through a scene. I captured videos of laser pulses moving through soda bottles or illuminating small still-life scenes. We used the time-of-flight information captured in the videos to develop methods to see around corners. We illuminate a relay surface in the scene with our light pulses. After hitting the relay surface, the light travels into the scene and reflects off hidden objects. We capture video of the light that returns to the relay wall and use it to reconstruct images of the scene as seen from the relay wall. In my own group we are using this method to image scenes like office cubicles through a window, or the inside of caves from the air.
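The geometry behind this can be sketched with a toy calculation. In around-the-corner imaging, light travels a three-bounce path (laser to relay wall, wall to hidden object, object back to the wall, wall to camera), and each measured arrival time constrains the hidden object to lie on an ellipsoid; combining many such measurements yields an image. The function below is purely illustrative (the names and the example distances are assumptions, not from the talk):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_time(laser_to_wall, wall_to_object, wall_to_camera):
    """Total travel time (seconds) for the three-bounce path used in
    around-the-corner imaging: laser -> relay wall -> hidden object
    -> relay wall -> camera. Illustrative geometry only: the
    wall-to-object leg is traversed twice (out and back)."""
    path_length = laser_to_wall + 2 * wall_to_object + wall_to_camera
    return path_length / C

# A hidden object 2 m from the relay wall, with 1 m laser and camera standoff:
t = round_trip_time(1.0, 2.0, 1.0)
print(f"{t * 1e9:.2f} ns")  # → 20.01 ns
```

Resolving centimetre-scale differences in such paths means timing light at tens of picoseconds, which is why the capture hardware has to be so fast.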

You’re the co-founder and CTO of OnLume, what does the company do?

OnLume develops cameras to better visualize tumors, nerves, and other anatomy during surgery.

What is your main research focus within the Computational Optics group?

Our group develops new imaging methods that can achieve things that normal vision systems can’t, by combining new methods of light capture with computational algorithms to create images from the captured data. A large focus is on fast imaging systems that can measure the time of flight of light through the scene. We use this information to see through fog, around corners, and create high resolution 3D images.



What do you see as the biggest challenges for imaging for automotive?

Imaging systems can’t yet match the performance of the human eye, especially with respect to dynamic range and efficiency. Trying to write an algorithm that performs like a human driver, but with inferior data, is challenging. Imaging methods like LiDAR can provide algorithms with better data to not only match but exceed the capability of human vision. Finding ways to provide these technologies cost-effectively, and to use them to the greatest benefit of the driver (which could be a human or an algorithm), is a fascinating challenge.

Your presentation covers Hamiltonian Coding. Can you explain what that is and how it applies to automotive?

Hamiltonian Coding is a way to improve the performance of “frequency domain” LiDAR systems. These systems illuminate the target with a modulated light source (i.e., one that blinks on and off in a particular pattern). The pattern seen by the camera is shifted in time relative to the illumination, and by comparing the two patterns the camera can determine the distance to the target. This is an inexpensive way to perform 3D LiDAR imaging and is used in existing devices like the Microsoft Kinect. Such systems are fast and robust, and have no moving parts, unlike many other LiDAR imaging systems. Our research analyzes which patterns, or “codes,” are best to send out into the scene. By choosing optimal patterns we can improve the performance of these systems by an order of magnitude and make them more robust to changes in ambient light, with minimal changes in hardware. We hope that these changes will make frequency-domain methods competitive with the pulsed LiDAR systems often used in automotive applications.
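To make the phase-comparison idea concrete, here is a minimal sketch of the classic sinusoidal “four-bucket” method that conventional frequency-domain ToF cameras use: four correlation samples of the returned modulation recover its phase delay, which maps linearly to distance. This is the baseline scheme; the Hamiltonian codes in the talk replace the sinusoid with optimized patterns. Function names and the 20 MHz modulation frequency are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_phase(samples, mod_freq_hz):
    """Estimate depth from four correlation samples of a sinusoidally
    modulated return, taken at reference offsets of 0, 90, 180, 270
    degrees (the conventional four-bucket ToF scheme)."""
    q0, q1, q2, q3 = samples
    # Opposite buckets cancel the DC offset; atan2 recovers the phase.
    phase = math.atan2(q1 - q3, q0 - q2) % (2 * math.pi)
    # Round-trip phase delay maps to distance: d = c * phi / (4 * pi * f)
    return C * phase / (4 * math.pi * mod_freq_hz)

# Simulate a target at 3 m with 20 MHz modulation (unambiguous range c/2f = 7.5 m)
f = 20e6
true_d = 3.0
phi = 4 * math.pi * f * true_d / C
samples = [math.cos(phi - k * math.pi / 2) for k in range(4)]
print(round(depth_from_phase(samples, f), 3))  # → 3.0
```

The appeal is exactly what the interview notes: the per-pixel math is trivial and there are no moving parts, but the sinusoidal code is sensitive to ambient light and noise, which is the performance gap the optimized coding targets.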

What are you looking forward to about presenting at AutoSens?

Presenting at a venue focused on such an important emerging application area for advanced vision matters a great deal to me. I believe dialogue between basic research and short-term applications is essential to keeping our research relevant. So I am hoping for lots of questions and interesting conversations.


You can hear Prof. Andreas Velten, Department of Biostatistics and Medical Informatics and Department of Electrical and Computer Engineering, at AutoSens in Detroit this spring. Tickets are available here >>

About Author

AutoSens



