Take any autonomous R&D vehicle apart and you’re very likely to find a central nervous system, which is understandably bulky: equipment racks, power-smoothing systems, inertial measurement units and a spaghetti nest of Ethernet and power cables attached to devices scattered around the vehicle, all feeding into a fairly powerful compute platform, with at least one spare engineer and a tablet or laptop in the front as well.
Let’s set aside the R&D aspect and look instead at the infrastructure within the vehicle, and at how most organisations are structuring their solutions.
Right now, the great thing is that we know the technology will shrink, get cheaper and easier to use, and, by the time we start to see greater numbers of completed products on the road, move out of sight. If anything, it’s better to be bulky now, because it’s much simpler for engineers to buy kits and chassis parts for circuits, and to plug and unplug network components. Let’s face it: there’s plenty of space in the back of most cars!
The 50-100 pounds (roughly 23-45kg) of extra weight is the equivalent of adding an extra small passenger, but that weight, because it’s electronic equipment hard-wired together, is also consuming a lot of energy.
Running at full resolution and with, say, half a dozen sensors feeding into a processor, you should expect to add a dozen or so processor cores, at least, to perform your number crunching.
Most processing platforms offer many more, hundreds in some cases, giving extra oomph for this tricky period of experimentation and unknowns, while systems are created and a high volume of data must be recorded for evaluation. This is an ecosystem developing products to be used by R&D teams, not necessarily producing the volumes (in the millions) that will end up in production cars.
The vast majority of work with compute platforms is focused on a computer in one location, but the power, processing and weight this adds (notably in the network needed to take data back to the main processor) are in direct contradiction to how the automotive industry has learned to use computing.
That habit, formed over many years of working with tiny data sources and simple signals that must be extremely reliable, is distributed processing, or, as we now call it thanks to internet-of-things technologies, ‘edge computing’: computing at the ‘edge’ or extremities of the system.
Don’t sweat over energy consumption, because the picture we get from the R&D world is not realistic.
You’ll see that in the first TV ever made versus the first production model, just as the first mobile phone was the size of a car and, within 25 years, could be lost in your pocket. Even futuristic position-measurement devices such as quantum gravimeters will halve in size twice in the next 20 years, and most people haven’t even heard of them!
So we can imagine forgetting the central processor and instead placing much smaller processors around the car to do the ‘heavy lifting’ with the largest chunks of data (i.e. video streams), distil the signals down to a stream of segmented or categorised metadata (perhaps a piece of appropriately encrypted text detailing the object and its coordinates), and send that wirelessly (or, more securely, over a physical connection) to the computer making the higher-level decisions.
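As a rough illustration, here is a minimal sketch of that idea in Python: a hypothetical edge node runs perception locally beside its sensor and forwards only a compact, structured metadata message to the central decision-making computer. The names (EdgeNode, Detection, PrintTransport) and the message format are assumptions for the sake of example, not any particular vendor’s API.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle"
    confidence: float  # detector confidence, 0.0-1.0
    x: float           # object position relative to the vehicle, metres
    y: float


class EdgeNode:
    """A small processor sitting next to one sensor (camera, lidar, ...)."""

    def __init__(self, sensor_id, transport):
        self.sensor_id = sensor_id
        self.transport = transport  # whatever link carries data to the central computer

    def process_frame(self, frame):
        """Run local perception on a raw frame (placeholder for a real detector)."""
        # In practice this would be a neural network or classical pipeline running
        # on hardware mounted beside the sensor; here it returns a canned result.
        return [Detection("vehicle", 0.92, 14.2, -1.3)]

    def publish(self, detections):
        """Forward only compact metadata onwards, never the raw video stream."""
        message = {
            "sensor": self.sensor_id,
            "timestamp": time.time(),
            "objects": [asdict(d) for d in detections],
        }
        # Encryption or signing of the payload would happen here before it is sent.
        self.transport.send(json.dumps(message).encode("utf-8"))


class PrintTransport:
    """Stand-in for a real Ethernet/CAN/wireless link: just prints the payload."""
    def send(self, payload):
        print(payload.decode("utf-8"))


node = EdgeNode("front_camera_left", PrintTransport())
node.publish(node.process_frame(frame=None))
```

The point of the sketch is simply the shape of the data: a few hundred bytes of text per frame crossing the vehicle network, rather than the multi-gigabit video stream staying local to the sensor.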
These much smaller systems (like a camera or lidar with an attached data condenser) would be far easier to validate and certify, as well as becoming part of the existing supply chain ecosystem.
Not only does this devolved approach dramatically reduce the burden on a central processing hub, it also reduces the weight of the wiring loom, changes the requirements for on-board storage (rather than terabytes of data, a rolling cache could be kept in each edge unit for diagnostic purposes) and encourages far healthier competition to supply different parts of the system.
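For the storage point, here is a minimal sketch of the rolling diagnostic cache mentioned above, again in Python: each edge unit keeps only a short window of its most recent output, and older entries fall away automatically. The capacity figure and record format are assumptions for illustration.

```python
from collections import deque


class RollingCache:
    """Keeps only the most recent metadata records for diagnostics."""

    def __init__(self, max_records=300):  # e.g. roughly 10 seconds at 30 frames per second
        self._buffer = deque(maxlen=max_records)  # old entries drop off automatically

    def record(self, metadata):
        self._buffer.append(metadata)

    def dump(self):
        """Return the cached window, e.g. after a fault, for diagnosis."""
        return list(self._buffer)


cache = RollingCache(max_records=5)
for i in range(8):
    cache.record({"frame": i, "objects": []})
print(cache.dump())  # only the last five records survive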
While there are hundreds of autonomous vehicle development and testing guidelines, and plenty of standards being considered on the legislative side, there are very few that stipulate engineering performance standards for the technical systems that make up the whole.
It’s only by taking the view of the suppliers and consumers of individual components that these problems can be resolved.
That approach could make components, and therefore larger systems, inherently easier to certify and simpler to diagnose when they fail (isolate the component, find the flaw), and it would also allow hardware to be swapped out and replaced more easily if damaged, at far lower cost: all concerns that are often shared and regularly swept under the carpet.
Two schools of thought have developed: 1) centralised computing, or 2) distributed computing (spread around the vehicle, at least).
Personally, distributed computing makes slightly more sense to me, but I’m open to both… so I’ll wait to be accosted at some future event, and perhaps I’ll change my mind!
I accept that one of the downsides is a more complicated system, which might have many more software components; while that introduces a lot more choice to the supply chain, it also leaves the system vulnerable through sheer complexity. With 100 ECUs in the most advanced cars on the road, there are already millions of lines of code.
As we see in the many flavours of car on the road (rear-wheel drive, mid-engine, front-wheel drive, front-engine and so on), all can be successful and happily work within large, geographically and technologically diverse supply chains.
Nobody working on autonomous vehicles actually makes every chip in every component, or the servos that control the mechanical assets, or even understands the analogue world that most essential sensors harvest. But there is a lot of opportunity, and there are lessons from the supply chain, that disruptors could really take advantage of when they start thinking about bringing their autonomous software stack, sensors or hardware system to market.
I’m certainly no expert, but what’s your view, centralised or distributed computing?
Learn more about Autonomous Vehicle technology at AutoSens in Detroit (14-17 May) and Brussels (17-20 September) – Booking open here >>