Interview with Valeo’s Patrick Denny – part 1
Patrick Denny (pictured right) is known within the industry as one of the leading technologists in the field of automotive vision, and has co-authored more than 60 patents. With more than 20 years’ experience, and a decade at leading automotive technology supplier Valeo, his experience and insights are an important guide for industry, and as Chair of the Advisory Board for AutoSens, he is a critical sounding board for the content and structure of the AutoSens conference.
In the first of this two-part feature, we asked Patrick about how he came to be involved in the conference (including some insightful advice for people developing autonomous vehicle technologies), as well as looking at challenges facing OEMs (car manufacturers), standards development in the field and the importance of understanding big data.
You are chairing AutoSens after two years leading an established conference in the sector, what made you decide it was time to change?
We originally created a conference with smaller scope in the context of an image sensor conference, after it became clear that sensor vendors neither appreciated nor understood the automotive sensor space sufficiently, and that several of these knowledge gaps could be closed easily if we could only get the relevant stakeholders to talk to each other.
We were able to facilitate this through the creation of a conference where the OEMs (car manufacturers), Tier 1s (their suppliers), Tier 2s (suppliers to Tier 1s) and research community interacted.
However, the video chain is an extremely complex entity and it became clear that the scope needed to increase from purely image sensor considerations to a “glass-to-glass” (lens-to-display-unit) approach. Such an approach would include everything from lens technology through sensor innovations, into processing, display, perception, machine vision, artificial intelligence and several technologies whose use in automotive would have been inconceivable years ago.
I had successfully encouraged my contacts in several of the OEMs such as BMW, Jaguar Land Rover, Daimler, VW and others to participate, and the smaller-scoped conference was promising, but the company producing it had a wide conference portfolio across several industries and relatively narrow representation in automotive tech, so its commitment and ability to support our ambition were limited; I anticipated that the nascent conference would be “rolled back” into sessions in an extant conference of theirs, instead of growing in its own right.
Also, the key conference producer had left to set up the company behind AutoSens, (Rob Stead, of Sense Media Events – pictured right), and he shared exactly the ambition and focus needed.
I did not want the focus of gathering the great and the good of automotive sensing to be diluted, so I had to choose a direction. After much consultation with other stakeholders, in particular the OEMs themselves, I decided that it was in the best interests of our customers, competitors and suppliers, and indeed the industry as a whole, to go onwards and upwards with AutoSens.
Conference topics now tend to include other sensor modes and system architecture. Can you explain why that is important for the audience?
The pervasive tendency in the industry is a transition from component-level thinking to system-level, where the products and applications are considerably more than the sum of their parts.
This calls for glass-to-glass thinking for the video chain (for viewing) and glass-to-bit appreciation (for machine vision).
These instincts must be further widened in multi-sensor architectures that support sensor fusion to include photon/phonon-to-bit thinking. The day of daisy-chaining black boxes to make an automotive product is gone and stakeholders in the video chain need to have an appreciation of, and accommodation for, the interaction of the elements which originate both inside and outside of the automotive industry.
This is coupled with the “big bang” in the application space. Sensors don’t just make a beep sound for the person behind the steering wheel; their outputs are combined (sensor fusion) to drive the car and to couple the car to the vehicular and environmental infrastructure.
If you don’t know where or why your subsystem fits in with other subsystems, to get the best out of the overall application, then your component will not be a differentiator and the industry will leave you behind.
This need to engage with technologists outside of your specific domain is met by a conference such as AutoSens, and is one of its core ambitions.
How has the make-up of the audience changed at conferences you’ve attended?
The original audiences at the conferences were niche and were more involved in, for example, low-level photoelectric semiconductor physics, but as I’ve elaborated, these widened and cross-pollinated considerably.
Also, the participants have changed from niche Tier 1–3 suppliers and some OEMs to traditionally non-automotive participants like Google, Microsoft and others; this really shows how the automotive sensing market has grown. The breadth of the new players is almost the breadth of the technology industry, so I see faces from everywhere.
Challenges facing OEMs
As an established imaging systems and camera expert, what do you think are the main challenges facing OEMs and how could they resolve them?
The business models of the OEMs are changing drastically, and a huge challenge for them is to find new ways of adding value in a rapidly changing technological and social environment.
Applications that have been associated with non-OEM entities are now moving aggressively into the automotive space; Google is propagating autonomous vehicle technology and an ad-hoc taxi company, Uber, is now larger than Daimler.
Technology has risen to the challenges of the imagination for new product offerings, albeit by creating considerable complexity and a need for cross-functional engineering; this has forced OEMs and their suppliers out of classical organisational silos into forums where the latest knowledge in different domains can be communicated among the fields’ leaders, which is of course where AutoSens comes in.
You are also involved in the formation of the IEEE workgroup on standards for automotive vision – why do you believe this is important?
The industry does not have time to waste resource on constant parallel reinvention of the wheel when we should be spending it on the reinvention of the vehicle.
The IEEE is the world’s largest technical professional organisation and literally sets the standards for industries. The goal is to bring together some of the key minds to distil their knowledge and experience into good practices which will commodify common elements of the technologies under consideration, giving technologists the capacity to focus on the future and allowing component suppliers to generalize components, reducing overall cost.
There are simple problems such as “What is low light performance for an automotive camera and how do we test it with consistency for the OEMs?”, “How much variation is acceptable in the color response of cameras in a production tester?”, “Can we agree on lens fields of view?”, “What are the minimum non-video data that should be provided by a camera to a supersystem?”.
You are studying the science of ‘big data’ alongside your core work in imaging. How do the two overlap and why is big data important for automakers?
To maximize the benefit of the applications we develop, you have to have a clear understanding of how to move from the hardware to the use of the data, from the medium to the message.
Unfortunately, there is a surfeit of form and a deficit of substance in the area of big data, with excessive hype leading to premature disappointment by stakeholders such as Tier 1s, OEMs and indeed end-customers. In order to get a proper understanding of the opportunities afforded by big data, I decided in recent years to supplement my skillset with a Master’s degree in Data Analytics to immerse myself in the mathematics and tools that are the bedrock of big data.
In automotive, we are moving from small data to big data.
A typical colour megapixel automotive camera generates a data bit for every man, woman and child on Earth in under 6 seconds and is connected in turn to multi-sensor systems that perform ever more complex processing, and this is the tip of the data iceberg among the sensor fusion systems on a vehicle.
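The one-bit-per-person figure is easy to sanity-check with a back-of-the-envelope calculation. The resolution, bit depth, frame rate and population figures below are illustrative assumptions, not numbers from the interview:

```python
# Back-of-the-envelope check of the "a data bit for every person on Earth
# in under 6 seconds" claim. All parameters are assumed for illustration.
pixels = 1_000_000        # assumed 1-megapixel sensor
bits_per_pixel = 24       # assumed 24-bit colour output
frames_per_second = 60    # assumed frame rate
world_population = 7.4e9  # approximate world population (mid-2010s)

bits_per_second = pixels * bits_per_pixel * frames_per_second
seconds_to_one_bit_per_person = world_population / bits_per_second
print(f"{seconds_to_one_bit_per_person:.1f} s")  # → 5.1 s
```

Under these assumptions the camera produces about 1.44 Gbit/s, so it crosses one bit per person in roughly five seconds; lower frame rates or bit depths stretch this, but the order of magnitude holds.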
So we are in a situation where we have considerable amounts of data available to the vehicle, the infrastructure, designated third parties, and others who can add value; this provides opportunity for adding value to everything from the engine to the environment.
The introduction of cameras on to vehicles caused a leap in the volume, variety and velocity of data available to the vehicle and a rock solid mathematical appreciation of how these data sets work is vital for automotive technologists trying to make product differentiators.
In the second part of our interview with Patrick Denny, we look at education and skills development, the relationship between industry and academia, his views on open source software in the automotive sector and which country has ‘got it right’ for the future of the automotive industry.
Find out more
In addition to serving as chair of the AutoSens advisory board, Patrick is part of a key panel on the conference agenda, along with representatives from IEEE and BMW, discussing “Why Do We Need Image Quality Standards in Automotive Vision?”
Carefully selected experts will discuss the shared challenges, innovation, standardisation and supply chain collaboration involved with the development of the latest ADAS technologies and self-driving cars via panels, presentations and conversations.