There is a lot of money sloshing around in the autonomous vehicle marketplace right now: hundreds of millions, if not billions, of whatever currency you want to think in.
This money is buying new technology that will be pivotal to the future of transportation for millions of people, whether they are travelling in personal pods or public transport units, or taking delivery of materials mined by robot diggers, transported by driverless trucks, harvested by driverless tractors, picked by driverless warehouse robots, or brought to their door by fleets of driverless vans.
The problem with all of this development is the testing; indeed, the industry considers it the most important problem. Fleets of Google cars, now dozens in number and travelling across a multitude of American states, are testament to that project’s approach of asking “What do we do if…?” rather than hoping that huge amounts of machine learning and computational data could possibly account for the vast array of strangeness, quirks of human and environmental events, and ‘life as we know it’ happening anywhere near a Google car.
Will the Google car deal appropriately with the following circumstances:
- a person unloading a car reverses it, boot open, into the garage (removing the rear door in the process)
- a car comes up behind a horse just as a crisp packet blows into the road, causing the horse to buck its rider onto the bonnet (what exactly can the driver, or their autonomous replacement, do about that?)
- while the driver is travelling down a country road in summer at about 50 mph, a young deer leaps through the open window and lands in his lap (the driver was unharmed, although sadly the deer did not survive)
These might all sound like absurd, once-in-a-lifetime events, and for me they are: all three have occurred in the past 20 years to members of my family (in order: my grandfather, me, my father).
So, stories aside for a moment: beyond my own driving experience and country upbringing, there’s a serious point to make. I’ve been working at Sense Media since January and have been fortunate enough to have the time and encouragement to do a not-inconsiderable amount of reading on the topic, leaving me with a moderately rounded view of the subject and a fair understanding of some of the issues. In that time, I have not seen any research into how autonomous cars interact with equestrian road users.
On realising this, I contacted the British Horse Society, an organisation I had encountered in a previous role and knew to be a small but effective membership organisation representing the needs of equestrian enthusiasts, believing that if anyone had the definitive answer on this, they would. A day or so later, I received a call from Alan Hiscox, the newly appointed BHS Director of Safety, a role created partly to deal with precisely this vacuum.
We agreed that this was an important, if commercially non-viable, piece of research. The trouble with these edge cases, rare as they are and humorous on occasion, is not that they lack value to society or to reducing the number of accidents involving horses (or indeed any large animal using the road network, and not only in first-world countries); it’s that they hold no inherent value for car manufacturers. That means the commercial sector will be less inclined to put much money or effort into real-world or virtual testing of these rare but potentially catastrophic incidents.
With that in mind, we began to formulate an idea: a machine vision system of the kind already fitted to many vehicles coming onto the market, enhanced with software algorithms specifically tuned to interpret the subtle, nuanced hand gestures, facial expressions, and signals employed by equestrian road users.
When you start exploring this problem from a technical perspective, it becomes progressively more challenging:
- Equestrians sit several feet outside the optimal visual field of a front-facing camera;
- their facial expressions and hand gestures, along with the body language of their steed, can be difficult to differentiate even for a competent and experienced countryside driver;
- rural lanes are often fringed with hedges or trees that place the rider in pools of darkness or light, conditions which work against the in-car camera’s main purpose of perceiving the road surface and road signs.
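To make the first of those challenges concrete, here is a deliberately crude sketch (not anything from the project, whose research has not yet begun) of why a detector tuned for pedestrians struggles with a horse and rider. The class names and every threshold below are invented for illustration only; a real system would learn these distinctions from data rather than hard-code them.

```python
# Illustrative sketch only: a geometric heuristic showing why a detector
# trained on pedestrian-shaped objects has no category for a horse and rider.
# All dimensions and thresholds are invented for illustration, not measured.

from dataclasses import dataclass


@dataclass
class BoundingBox:
    width_m: float   # estimated real-world width of the detected object
    height_m: float  # estimated real-world height of the detected object


def classify(box: BoundingBox) -> str:
    """Label a detection by rough size and aspect ratio.

    A pedestrian is tall and narrow (roughly 0.5 m wide, 1.8 m tall);
    a horse plus rider is both longer and taller (roughly 2.5 m each way),
    a shape a pedestrian-only model would simply fail to account for.
    """
    aspect = box.width_m / box.height_m
    if box.height_m > 2.2 and aspect > 0.7:
        return "horse_and_rider"
    if box.height_m < 2.2 and aspect < 0.5:
        return "pedestrian"
    return "unknown"


print(classify(BoundingBox(0.5, 1.8)))  # a typical pedestrian silhouette
print(classify(BoundingBox(2.5, 2.5)))  # a typical horse-and-rider silhouette
```

The point of the toy is that the two silhouettes occupy entirely different regions of the camera’s visual field, which is why existing pedestrian-detection pipelines cannot simply be reused for equestrians.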
It was in considering these problems, alongside the UK government’s announcement of a significant R&D funding pot administered by Innovate UK, that, along with the British Horse Society, we decided to seek partners to create a consortium to tackle this problem.
This group will, we hope, construct a bid for a research project which defines the problem robustly, realises a solution, and then releases as much of it as possible into the public domain. There may be some intellectual property that partners wish to keep to themselves; that’s inevitable, but we hope to keep such exceptions to a minimum.
The pitch was presented at the Innovate UK / CCAV competition fund initialisation event on Thursday 8th September, and both Alan Hiscox and I will be attending consortium-building events over the coming fortnight.
These research projects should have a snazzy acronym, so as a placeholder, we’re calling it AVERT – Autonomous Vehicle Equestrian Recognition Technology.
If you’re part of an organisation interested in getting involved in the project, whether you are in the UK or elsewhere, please get in touch by email at [email protected], and make sure you include a little information about why this project proposal is of interest to your organisation and how you’re able to get involved.