Tutorial sessions are offered to AutoSensLEARN attendees. Enhance your AutoSens learning experience by booking your AutoSensLEARN ticket and attending any or all of our four expert-led tutorials. You can also buy a Tutorial Only pass if you have a particular interest in just one of these tutorials.
These expert-led tutorials are the perfect in-depth, technical accompaniment to the main conference agenda, covering a range of topics and themes. To attend the tutorials in addition to the main conference sessions, please book your AutoSensLEARN bundle using the buttons below.
Tutorial 1: The Three Goals of HDR
Date: Wednesday 23 September, 2020
Time: 1pm – 4pm BST
Led by: Alessandro Rizzi, Full Professor and Head of MIPSLab, Department of Computer Science, University of Milan
High Dynamic Range (HDR) imaging is a continuously evolving area of imaging. HDR became popular more than twenty years ago with the seminal paper of Debevec and Malik, which proposed combining multiple exposures to capture a wider range of scene information.
Ten-plus years ago, interest evolved towards recreating HDR scenes by combining widely used LCD panels with LED backlighting (Helge Seetzen's BrightSide displays). Today the evolution continues with HDR televisions using OLED and Quantum Dot technologies, and standards for HDR video media formats remain an active area of research.
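The multiple-exposure idea attributed above to Debevec and Malik can be sketched in a few lines: the same scene is captured at several exposure times, and each pixel's radiance is estimated as a weighted average of (value / exposure time), down-weighting values near the sensor's limits. The following is a minimal illustrative sketch, assuming a linear sensor response (a real pipeline must also recover the camera response curve):

```python
def weight(v, v_min=0, v_max=255):
    """Hat weight: trust mid-range pixel values, distrust values
    near black level or saturation."""
    return min(v - v_min, v_max - v) / ((v_max - v_min) / 2)

def estimate_radiance(pixels, exposure_times):
    """Weighted average of per-exposure radiance estimates (value / time)."""
    num = den = 0.0
    for v, t in zip(pixels, exposure_times):
        w = weight(v)
        num += w * (v / t)
        den += w
    return num / den if den > 0 else 0.0

# The same scene point captured at three (hypothetical) exposure times:
# the short exposure reads 32, the medium 128, the long one saturates at 255.
radiance = estimate_radiance([32, 128, 255], [1 / 250, 1 / 60, 1 / 15])
```

Note how the saturated reading receives zero weight, so the estimate comes from the exposures that actually carry scene information.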
This tutorial reviews the science and technology underlying the evolution of HDR imaging, from silver-halide photography to HDR TVs. HDR imaging is a complex problem governed by optics, signal processing and the limits of human vision, and the right solution depends on the goal.
After a detailed description of the dynamic range problem in image acquisition, this course covers standard methods of creating and manipulating HDR images, organised around the different possible goals of the HDR pipeline: reproducing the light field, reproducing appearance, and improving image aesthetics and visibility. For each goal, a careful analysis of its characteristics, limits and ground truth will be presented. The course aims to replace myths with measurements about the limits of accurate camera acquisition (range and colour) and the usable range of light that displays can present to human vision. It discusses the principles of tone rendering and the role of spatial comparisons in HDR.
- HDR Reproduction History
- HDR principles, devices and techniques
- The three HDR goals
- Reproducing original HDR scene: Capture Challenges
- Rendering Appearance for LDR display: Display Challenges
- Improving image aesthetic and visibility: HDR in Human Vision
- Goals, ground-truths and assessment criteria for HDR applications
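As an illustration of the second goal above, rendering appearance for an LDR display means compressing scene radiance into the limited display range. A minimal sketch of a global tone-mapping operator, using Reinhard's simple L/(1+L) curve (a global operator; the spatial comparisons discussed in the tutorial are deliberately left out of this sketch):

```python
def tone_map(luminances):
    """Global Reinhard-style operator: compress HDR luminance L to L/(1+L),
    mapping [0, inf) into [0, 1) for an LDR display."""
    return [L / (1.0 + L) for L in luminances]

# A 4-pixel strip spanning six orders of magnitude of scene luminance:
hdr = [0.01, 1.0, 100.0, 10000.0]
ldr = tone_map(hdr)  # every value now fits in the display range [0, 1)
```

The curve is monotone, so relative brightness ordering is preserved even though absolute scene radiance is not reproduced — exactly the distinction between the "reproduce the light field" and "reproduce appearance" goals.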
Tutorial 2: Bio-Inspired Computer Vision: Challenges and Perspectives
Time: 2pm – 4pm BST
Led by: Dr. Sos Agaian, Distinguished Professor, Director, Computational Vision and Learning Lab, CUNY
The rapid proliferation of hand-held mobile computing devices, together with the acceleration of Internet-of-Things connectivity and data-producing systems such as embedded sensors, mobile phones and surveillance cameras, has driven rapid advances in visual computing. One field in which scientific computing has made particular inroads is large-scale data analytics and machine vision systems. In our connected digital society we produce, store and use ever-increasing volumes of digital image and video content. How can we possibly make sense of all this visual data? And how can we be sure that the resulting computations and analyses are genuinely consistent with human vision, understanding and interpretation?
The current state of the art in computer vision analytics provides a variety of tools and methods for solving various classes of computer vision problems. This poses the following questions: how large a class of vision problems can we currently solve, compared with the totality of what humans can do? Can we replicate human visual abilities in a computational device? The objective of this talk is to highlight the latest advances in this research area and to provide novel insights into bio-inspired intelligence. We will also present our recent research, give a synopsis of existing state-of-the-art results in computer vision, and discuss current trends in these technologies along with the associated commercial impact and opportunities.
Sos S. Agaian holds a PhD in Mathematics and a Doctor of Engineering Sciences degree, and is a Distinguished Professor of Computer Science at the College of Staten Island and the Graduate Center, City University of New York (CUNY). Prior to joining CUNY, Dr. Agaian was the Peter T. Flawn Professor of Electrical and Computer Engineering at the University of Texas at San Antonio; a professor at the Graduate School of Biomedical Sciences, UTHSCSA, at San Antonio; Professor in the UT System Cyber and Cloud Security Initiative; and Director of the Multimedia and Mobile Signal Processing Laboratory. He has been visiting faculty at Tufts University and the Tampere University of Technology, and a Leading Scientist at AWARE, Inc. in Bedford, MA.
Dr. Agaian received his M.S. in Mathematics and Mechanics (summa cum laude) from Yerevan State University, Armenia; his Ph.D. in Mathematics and Physics from the Steklov Institute of Mathematics, Russian Academy of Sciences (RAS); and his Doctor of Engineering Sciences degree from the Institute of Control Systems, RAS.
Tutorial 3: Autonomous Driving with ROS
Time: 1pm – 4pm BST
Led by: Jeremy Lebon, Lecturer / Researcher, VIVES University of Applied Sciences
Although ROS was primarily developed for classical robotics, much of its code and functionality has direct analogies in autonomous driving. This tutorial examines those analogies and illustrates them with a practical use case.
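One such analogy is ROS's topic-based publish/subscribe model, which maps naturally onto vehicle data flows: sensor nodes publish, planning nodes subscribe. The sketch below illustrates that pattern in plain Python — it deliberately does not use the real rospy/rclpy API, and the topic names are hypothetical:

```python
from collections import defaultdict

class TopicBus:
    """Toy publish/subscribe bus illustrating the ROS topic model
    (not the actual rospy/rclpy API)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to be invoked for every message on `topic`.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of `topic`.
        for callback in self._subscribers[topic]:
            callback(message)

# Hypothetical autonomous-driving topics: a lidar node publishes range
# readings, and a planner node reacts to the nearest obstacle.
bus = TopicBus()
obstacle_alerts = []
bus.subscribe("/lidar/ranges",
              lambda ranges: obstacle_alerts.append(min(ranges)))
bus.publish("/lidar/ranges", [12.4, 3.1, 7.8])
```

In ROS the same decoupling lets the identical planner code subscribe to a simulated lidar during testing and a real one on the vehicle, which is a large part of why ROS transfers so well to autonomous driving.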
Tutorial 4: The Latest Advances in Image Sensor Technology
Date: Wednesday 30 September, 2020
Time: 1pm – 4pm BST
Led by: Prof Albert Theuwissen, Founder, Harvest Imaging, Belgium
- Numbers (add up to nothing)
- High dynamic range
- Voltage domain global shutters
- Low noise
- Colour filter news
- Phase detection auto-focus pixels
- The extremes
- "New" materials
- Beyond silicon in the near-IR
- Event-based imagers
- PTC in the dark