Tutorial sessions are offered to AutoSensLEARN attendees. Enhance your AutoSens learning experience by booking your AutoSensLEARN ticket and attending any or all of our four expert-led tutorials. If you have a particular interest in just one tutorial, you can also buy a Tutorial Only pass.
These expert-led tutorials are the perfect in-depth, technical accompaniment to the main conference agenda, covering a range of topics and themes. To attend the tutorials in addition to the main conference sessions, please book your AutoSensLEARN bundle using the buttons below.
Tutorial 1: Image Quality: Industry Standards and Developments for Automotive Applications
Date: Tuesday November 4, 2020
Time: 4-7pm GMT
Led by: Peter D. Burns and Don Williams
Don Williams, founder of Image Science Associates, was formerly with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices and on imaging fidelity. He co-leads the ISO TC 42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) and scanner dynamic range (ISO 21550), and is the editor of the second edition of the digital camera resolution standard (ISO 12233).
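For context on what these standards measure, below is a minimal, illustrative Python sketch of the idea behind an edge-based spatial frequency response (SFR) measurement in the spirit of ISO 12233. It is deliberately simplified: it assumes a vertical, grid-aligned edge and synthetic data, and omits the edge-angle estimation and supersampling steps of the actual slanted-edge method.

```python
# Simplified sketch of an edge-based SFR/MTF estimate (NOT the full
# ISO 12233 slanted-edge algorithm). Assumes a vertical edge aligned
# with the pixel grid; all data below is synthetic.
import numpy as np

def simple_sfr(edge_image: np.ndarray) -> np.ndarray:
    """Estimate the SFR from a grayscale image of a vertical edge."""
    # Edge spread function (ESF): average the edge profile over all rows.
    esf = edge_image.mean(axis=0)
    # Line spread function (LSF): derivative of the ESF.
    lsf = np.gradient(esf)
    # Window to suppress truncation artifacts, then take the FFT magnitude.
    lsf *= np.hanning(lsf.size)
    sfr = np.abs(np.fft.rfft(lsf))
    # Normalize so the SFR is 1 at zero spatial frequency.
    return sfr / sfr[0]

# Synthetic test image: a smooth dark-to-light edge with mild noise.
x = np.linspace(-5, 5, 128)
row = 1.0 / (1.0 + np.exp(-2.0 * x))                  # sigmoid edge profile
img = np.tile(row, (64, 1)) + np.random.normal(0, 0.01, (64, 128))
print(simple_sfr(img)[:8])                            # lowest-frequency SFR values
```

The real standard registers a slightly slanted edge across many rows to build a supersampled ESF, which is what makes the measurement robust to pixel-grid aliasing; this sketch only conveys the ESF-to-LSF-to-FFT pipeline.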
Tutorial 2: Modelling and simulation for autonomous vehicles
Date: Tuesday November 10, 2020
Time: 4-7pm GMT
Led by: Daniel Carruth, Associate Director, Advanced Vehicle Systems group, Mississippi State University
Autonomous vehicle modeling and simulation
• Basics of modeling and simulation
• Review of available commercial and non-commercial software packages
• Discussion of relative strengths and weaknesses of available tools
• Developing simulated tests for autonomous vehicles
  • Vehicle data
  • Sensor data
  • Sensor configurations
  • Autonomous system interfaces
  • Automation and coverage of test scenarios
  • Virtual environments
  • Agents (vehicles, pedestrians, animals)
  • Missions/objectives for scenarios
  • Metrics and analysis of performance
• Setting up and running basic tests of autonomous vehicles
  • Modifying existing vehicle data
  • Modifying test parameters
This will include a hands-on portion. The process will be demonstrated by the speaker, but attendees will benefit most if they can:
1. Use two monitors, so they can follow the session while also working on a Windows or Ubuntu Linux machine
2. Install the MAVS software (registered attendees will receive additional information)
Running the software requires no programming knowledge. Making modifications to the settings and taking full advantage of the simulation software requires some Python knowledge.
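For attendees wondering what "some Python knowledge" means in practice, the sketch below shows the general pattern: simulation settings typically live in human-readable config files that a short script can edit before a test run. The file path and field names here are hypothetical placeholders, not MAVS's actual schema; registered attendees will receive the real file layout with the installation instructions.

```python
# Illustrative only: adjusting a simulated test's parameters by editing
# a JSON config file before a run. The path and keys below are
# hypothetical placeholders, not the actual MAVS schema.
import json
from pathlib import Path

config_path = Path("scenarios/basic_test.json")   # hypothetical scenario file
config = json.loads(config_path.read_text())

# Tweak vehicle and sensor parameters for this run (hypothetical fields).
config["vehicle"]["max_speed_mps"] = 15.0
config["sensors"]["lidar"]["rotation_rate_hz"] = 10.0

config_path.write_text(json.dumps(config, indent=2))
```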
Daniel Carruth is interested in human perception, cognition, and action in the context of real-world, whole-body tasks. Dr. Carruth's research interests include modeling and simulation of human interaction with autonomous vehicles, as well as the study of cognitive and physical factors affecting human performance in athletics, law enforcement, and military domains. Dr. Carruth develops and uses virtual environments for simulation and training.
Tutorial 3: CMOS Image Sensor, Silicon Photomultiplier and Single-Photon Avalanche Detector Basics
Time: 4-7pm GMT
Led by: Chuck Kingston, Automotive Field Applications Engineer, and Bahman Hadji, LiDAR Engineer, ON Semiconductor
Tutorial 4: Bio-Inspired Computer Vision: Challenges and Perspectives
Time: 4-7pm GMT
Led by: Dr. Sos Agaian, Distinguished Professor, Director, Computational Vision and Learning Lab, CUNY
The rapid proliferation of hand-held mobile computing devices, together with the growth of 'Internet of Things' connectivity and data-producing systems such as embedded sensors, mobile phones, and surveillance cameras, has driven major advances in large-scale data analytics and machine vision systems. In our digitally connected society, we produce, store, and use ever-increasing volumes of digital image and video content. How can we possibly make sense of all this visual data? And how can we be sure that the resulting computations and analyses are relevant to human vision, understanding, and interpretation? The current state of the art in computer vision analytics provides a variety of tools and methods for solving many classes of problems, which raises two questions: how large a share of vision problems can we currently solve, compared with the totality of what humans can do? And can we duplicate human vision abilities in a computational device? The objective of this tutorial is to highlight the latest advances in this research area and to provide novel insights into bio-inspired intelligence. We will also present our recent research, give a synopsis of the existing state-of-the-art results in computer vision, and discuss current trends in these technologies along with the associated commercial impact and opportunities.
Sos S. Agaian holds a PhD in Mathematics and a Doctor of Engineering Sciences degree, and is a Distinguished Professor of Computer Science at the College of Staten Island and the Graduate Center, City University of New York (CUNY). Prior to joining CUNY, Dr. Agaian was the Peter T. Flawn Professor of Electrical and Computer Engineering at the University of Texas at San Antonio; a professor at the Graduate School of Biomedical Sciences, UTHSCSA, San Antonio; a professor in the UT System Cyber and Cloud Security Initiative; and Director of the Multimedia and Mobile Signal Processing Laboratory. He has been visiting faculty at Tufts University and the Tampere Institute of Technology, and a Leading Scientist at AWARE, Inc. in Bedford, MA.
Dr. Agaian received his M.S. in Mathematics and Mechanics (summa cum laude) from Yerevan State University, Armenia; his Ph.D. in Mathematics and Physics from the Steklov Institute of Mathematics, Russian Academy of Sciences (RAS); and his Doctor of Engineering Sciences degree from the Institute of Control Systems, RAS.