Just the Tonic – Interview with Dr Goksel Dedeoglu at PercepTonic
In the last of our extensive series of interviews in the run-up to AutoSens, we spoke with Dr Goksel Dedeoglu, member of the AutoSens advisory board and founder of PercepTonic, the Dallas-based research consultancy.
Goksel, we are pleased to have you on board as a speaker and advisor for AutoSens. Could you give us a short summary of your background in computer vision?
I have been working on autonomous robots and perception systems for almost 20 years. I received a Ph.D. in Robotics from Carnegie Mellon University and worked for Texas Instruments for seven years before launching my own company, PercepTonic. Today, I am actively engaged with clients, including automotive OEMs, developing embedded vision solutions that use cameras and LiDARs.
You’re talking about ‘deep learning’ at AutoSens. There’s a lot of hype about deep learning and neural nets in the press, but this isn’t a new topic, is it?
You’re right, neural networks have been around for decades. Over the years, neural nets have tried their darnedest to crack some of the most pertinent AI challenges, such as pattern recognition, image classification and robotic control, but their success had been limited. What IS new is that we now have datasets that are orders of magnitude bigger. With “big data,” we can effectively train and tackle much larger and more complex nets. This has resulted in a performance jump across many fields, from computer vision to speech recognition, and it’s been so successful that it’s being tried out on complete end-to-end systems, such as self-driving cars.
So what are some of the benefits of deep learning?
Deep learning provides a machine learning architecture for tackling signal processing problems, including computer vision (CV). Deep learning can automatically discover interesting features and representations in the data, whereas traditional CV requires anticipating and engineering what patterns to look for.
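For readers unfamiliar with the distinction, here is a minimal numpy sketch (our illustration, not from the interview) contrasting the two approaches: a hand-designed Sobel kernel an engineer chose to detect vertical edges, versus a randomly initialized kernel whose values a deep network would learn from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Traditional CV: the feature is designed by hand, e.g. a Sobel kernel
# that responds to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# A toy image: dark on the left, bright on the right (a vertical edge).
image = np.zeros((5, 6))
image[:, 3:] = 1.0

edge_response = conv2d(image, sobel_x)  # strong response at the edge

# Deep learning: the kernel starts out as random numbers, and its values
# are learned from data via gradient descent rather than designed.
rng = np.random.default_rng(0)
learned_kernel = rng.normal(size=(3, 3))  # a trainable parameter
```

The hand-crafted kernel encodes one pattern the engineer anticipated; a trained network stacks many such learned kernels and discovers which patterns matter on its own.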
Deep learning can also help augment traditional CV with a holistic and semantic intelligence of the world. Even sophisticated CV algorithms struggle to understand the boundaries and relationships between objects and how they occlude each other in a 2D image. With a basic segmentation and understanding of the scene elements from neural nets, we could stop CV solutions from wasting their limited compute budget processing many implausible scenarios.
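To make the idea concrete, here is a toy numpy sketch (our illustration, not Dr Dedeoglu’s code): a hypothetical per-pixel label map, as a segmentation net might produce, is used to prune the windows an expensive pedestrian detector would scan, skipping regions such as the sky where a pedestrian is implausible.

```python
import numpy as np

# Hypothetical class ids a segmentation net might output per pixel.
SKY, ROAD, SIDEWALK = 0, 1, 2

def plausible_pedestrian_windows(seg_map, win=4, stride=4):
    """Return top-left corners of windows worth passing to an expensive
    detector: keep only windows whose bottom row touches ROAD or
    SIDEWALK, where a pedestrian could plausibly be standing."""
    h, w = seg_map.shape
    keep = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            footer = seg_map[y + win - 1, x:x + win]  # bottom row of window
            if np.isin(footer, (ROAD, SIDEWALK)).any():
                keep.append((y, x))
    return keep

# Toy scene: top half sky, bottom half road.
seg = np.full((8, 8), SKY)
seg[4:, :] = ROAD

windows = plausible_pedestrian_windows(seg)
# Only the two windows touching the road survive; the sky windows are
# pruned before the costly detector ever runs on them.
```

In this toy scene, half of the candidate windows are discarded up front, which is exactly the kind of compute-budget saving the semantic scene understanding enables.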
How would that work in automotive?
In the automotive setting, there are many “normal” spatial configurations that tend to occur repeatedly, such as vehicles on the road and pedestrians upright and sometimes crossing the road. Deep learning has the capacity to internalize what type of objects appear where and to use that understanding to “fill in” the picture providing cues for behavior when there is only partial data. In traditional AI, the programmer would have to encode and anticipate all possible interactions and that could be computationally intractable.
Who is spearheading research in deep learning?
Interestingly, internet and social media companies are at the forefront of deep learning research. As these industries look to monetize the immense datasets that they have been collecting, deep learning has emerged as a key computational tool. You see, the more data you have, the larger and more capable a network you can train, and in return, the more insight you gain about what is happening and what is relevant. This ever-increasing trove of data has already attracted some of the top researchers from academia to join industrial research labs. The good news for the rest of us is that these scientists still seem to be publishing.
What one lesson would you say is the most important for the automotive industry to learn about adopting CV?
The automotive industry is under incredible pressure to deliver perception solutions that work robustly. This, in turn, has spurred a sense of impatience to see CV mature and become standardized hardware. Traditionally, the automotive industry has been able to quickly standardize electronics in cars. For instance, high-definition video encoding was widely adopted, standardized, and finally hardware-accelerated as a hard ASIC block in a chip. At that point, the video features were “done” – mere items on a checklist in the product: 1080p, check; H.264, check – and the electronics industry could move on to the next big thing.
The problem is, CV has yet to slow down to let that standardized product happen. The algorithms are too fluid and still evolving. I believe that some of the enthusiasm about deep learning is related to this impatience to define, box, and harden “the CV problem” once and for all. I really doubt there is a stopping point – CV will keep evolving and the industry will need to adapt.
Your products count crowds, help drones avoid crashes and stop insects from triggering burglar alarms – how does this experience help your work when it comes to autonomous vehicles, or any other new application?
We have prototyped embedded CV systems for a wide range of applications in the past ten years – automotive, video security, retail, and most recently, drones. Honestly, it really takes hands-on prototyping to appreciate the nuances and establish the hard requirements of a new domain, especially if you are aiming for an embedded product. Sometimes the problem turns out to be more difficult than expected, sometimes easier, or simply hiding in a different place. It gets particularly exciting when we stumble upon that HW+SW sweet spot that is 10x more cost effective than what the industry thinks it should be.
Can any of those be relevant to the automotive sector?
When it comes to algorithms, we find that adapting a solution from one domain to another is not necessarily straightforward or even the right thing to do. Sometimes the algorithms need to change drastically because the underlying assumptions do not hold anymore. So, yes, our prototyping expertise informs product development in any vertical, but each industry’s needs have to be addressed on a case by case basis. There’s no one-size-fits-all CV solution.
What’s the most frustrating problem you’ve come up against – and how did you get past it?
For as long as I have been in the industry, I have observed business managers who falsely perceive open-source CV libraries as complete solutions – “we got the camera board, we got OpenCV, let’s have our engineers build it!” Now, I’ve written a CV library and I have lots of experience with resource-limited embedded systems, so I can tell you that is not a good approach. What follows is a furious hacking phase with lots of busy work to get some code running on a PC, and then more hacking to get it implemented on an embedded system. It’s not especially professional: no respectable industry would call this “engineering.” Buildings, bridges, and airplanes are not designed this way, and you certainly don’t want your car built this way either.
At PercepTonic, we are blessed with clients who know the difference between engineering and hacking. It’s pure delight when we are able to discuss subtleties of CV performance metrics and trade-offs with their engineering teams. When we build, optimize and deliver a head-turning demo, our best clients are already thinking about the next CV feature they want to add. That kind of excitement alone easily makes up for all the frustrations.
You’ve been involved with a number of world-class organisations at every stage of your career – where have you enjoyed working the most and why?
OK, call me nostalgic, but I miss my years at CMU’s Robotics Institute where I got my Ph.D. with Professor Takeo Kanade. The culture there was truly unique – such a concentration of high-calibre people doing incredible technical work and yet so down-to-earth and collaborative.
Finally, we are pleased to have your involvement in AutoSens. What are you looking to gain from your participation?
With the recent announcements about self-driving car roadmaps, the pace of required innovation has definitely picked up speed. I look forward to connecting with the engineers who are part of this fantastic journey pushing the boundaries of practical and deployable CV solutions on a daily basis.
Find out more
We will be delving deeper into all of these areas and a host of extra content at the AutoSens conference, held in September 2016 at AutoWorld in Brussels, Belgium.
Carefully selected experts will discuss the shared challenges, innovation, standardisation and supply chain collaboration involved with the development of the latest ADAS technologies and self-driving cars via panels, presentations and conversations.