My dream job: In the middle of a revolutionary market disruption
Thomas Wilson, VP Automotive at Graphcore, joins AutoSens again in Brussels to present at the conference on “Enabling Volume Deployment of the Autonomous Vehicle with a new AI Processor.” Tom has over 25 years of experience in the semiconductor industry, the last eight of them in Advanced Driver Assistance and Autonomous Vehicles. At Graphcore, he drives the automotive business for the company’s unique AI processing architecture. Tom holds a Bachelor of Science and a PhD from Carleton University in Ottawa, Canada. He provides us with some great insight into his interesting career path, the challenges of cost-efficient high-resolution sensing, and how to build an acoustic guitar.
You have over 25 years of experience in the semiconductor industry. What is the biggest change that you have seen over this time?
Explosive growth in the semiconductor industry has come in waves over the years: the first build-out of telecom circuit-switched networks, then packet and IP networks, followed by the optical network boom, the burgeoning wireless infrastructure, the mobile phone market utterly disrupted by the smartphone (itself enabled by ever smaller semiconductor geometries with low power and high performance), the emergence of social media; the list goes on. However, despite all those revolutionary changes, I believe that artificial intelligence, which is deploying into every aspect of our daily lives, has the potential to change our society and culture more fundamentally than the networking, communication and information revolutions of the past few decades.
You studied cell physiology and biology at university. How did you end up moving into the more physical area of science in your career?
After completing my doctoral degree in the biological sciences, I found that science and academia were not all I had imagined when I first set out on that course. So I decided instead to go into the private sector, and found work as a technical writer at a local fabless semiconductor company. From there I moved into product management, ASIC design, sales and business development. I continue to find endless intellectual stimulation not only in the process of developing and delivering semiconductor products, but also in whatever application space I happen to be working in, whether it’s 4G base stations, computer vision or AI processing. Since I first went into semiconductors, I can’t say I’ve had a boring moment!
What do you think are the key technical hurdles remaining to enable high-volume deployment of autonomous driving applications?
There are certainly technical hurdles in the area of cost-efficient high-resolution sensing. I know that is being addressed by the players in the sensor market, and to a large extent the issue will be fixed through evolutionary optimization, mostly of radar and camera technologies. However, the key revolution must occur in the AI processing that will be the brain of the autonomous vehicle. Existing processing architectures are burdened by their originally intended application spaces and are used as a rather ad hoc, and inefficient, approach to AI processing. An entirely new, ground-up approach is required to realize the full potential of the autonomous vehicle.
You describe the IPU from Graphcore as a “revolutionary” new AI processing platform. What is revolutionary about it?
This is exactly the new, ground-up approach I alluded to above. Existing processing architectures, for example GPUs, carry the burden of their target application: graphics processing and rendering. Yes, they are parallel machines, but entirely the wrong kind of parallel, and in fact much of their processing logic has nothing at all to do with AI. In addition, their entire memory architecture is ill-suited to the real-time latency requirements of deep neural networks. The team at Graphcore has innovated a new kind of hardware that lets innovators create the next generation of machine intelligence. There has never been anything like the Graphcore IPU solution, and equally important is the way the development of the graph-based machine learning tool chain, Poplar, has been intimately linked to the re-imagining of a processor architecture for graph-based artificial intelligence processing.
You are a guitar fan; we’ve seen you design and build guitars in your spare time. How do you go about building an acoustic guitar?
Building an acoustic guitar is much like anything else: you develop your plan and then proceed step by step, following the plan. A guitar is interesting in that some aspects of its construction have to be amazingly precise, like the placement of the frets on the fingerboard, while other parts are more open to free-form spontaneity. Ultimately, the most exciting moment comes after weeks of work, when you put the strings on the guitar and hear its voice for the first time. Over the first few weeks of playing, that voice evolves as all the component parts of the guitar seem to consolidate into a single whole, a process called “playing in”. If I can wax poetic for a moment, there is a similar feeling when you see a team you built begin to perform as a unit rather than as an assembly of people.
You spoke at AutoSens previously when you were representing NXP. What are you looking forward to about being there with Graphcore?
At NXP, I led an excellent team of people innovating and supplying processors into the automotive radar market. There are some significant leaps happening in automotive radar to resolve the high-resolution sensing challenge in autonomous vehicles, and it was great to be involved in that. At Graphcore, I’m excited about being involved in a revolutionary processing architecture that will make broad deployment of truly autonomous vehicles possible. It’s basically a dream job to be smack in the middle of a revolutionary market disruption like the autonomous vehicle, with the right solution at the right time to enable it. It doesn’t get better than this in semiconductors.
Why did you choose to speak at AutoSens this year and what do you hope attendees will take away from listening to your presentation?
I have spoken at AutoSens before and found it to be a useful event for making contact with much of the autonomous vehicle community. It’s a great venue to share new messaging, take stock of where the rest of the industry is, and catch up on the current topics of discussion. My main goal is to communicate what I see as some primary issues in realizing the necessary AI processing for the autonomous vehicle, and to suggest new paths forward for the industry’s consideration.
Come and hear Thomas Wilson in Brussels, discussing “Enabling Volume Deployment of the Autonomous Vehicle with a new AI Processor.” Book your tickets here >>