Our start-up feature provides a low-cost route to market for small companies at the earliest stage of their operation, along with critical exposure to our audience, both online (on our increasingly busy website) and face-to-face at the AutoSens conference and exhibition.
Start-ups receive a 3-minute pitch slot on the first day of the main AutoSens agenda, space for a pop-up display and two seats at the AutoSens vehicle perception conference, which provides access and exposure to a relevant technical audience at the heart of the automotive supply chain.
Our Start-ups in 2019…
Metamoto’s Simulation as a Service offering delivers a safe path to automated vehicle (AV) deployment. The AV simulation solution is cloud-based, massively scalable and available now to any stakeholder in the mobility ecosystem looking to cost-effectively train, test, debug and validate their autonomous technologies.
Reality AI's tools let you:
- Discover optimized features from your data.
- Automatically build machine learning models using the discovered features.
- Test and validate in the cloud before exporting to embedded environments.
- Generate inference code, compiled for your specific target, so machine learning models run in real time on edge devices.
Reality AI holds 12 patents and six patents pending, all in the field of machine learning applied to sensors and signals.
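The four bullet points above describe a pipeline: discover features, build a model on them, validate in the cloud, then export for the edge. As a purely illustrative sketch, assuming a toy threshold classifier and invented function names (this is not Reality AI's actual API), the workflow might look like:

```python
# Hypothetical, simplified sketch of the four-step workflow described
# above (feature discovery -> model building -> validation -> export).
# Function names and the one-feature threshold model are illustrative
# inventions, not Reality AI's actual product or API.
import statistics

FEATURES = {
    "mean": lambda w: sum(w) / len(w),
    "std":  lambda w: statistics.pstdev(w),
    "peak": lambda w: max(abs(x) for x in w),
}

def discover_feature(windows, labels):
    """Pick the candidate feature that best separates the two classes."""
    def separation(fn):
        lo = [fn(w) for w, y in zip(windows, labels) if y == 0]
        hi = [fn(w) for w, y in zip(windows, labels) if y == 1]
        return abs(statistics.mean(hi) - statistics.mean(lo))
    return max(FEATURES, key=lambda name: separation(FEATURES[name]))

def build_model(windows, labels, name):
    """Fit a one-feature threshold classifier (class 1 = higher values)."""
    fn = FEATURES[name]
    lo = [fn(w) for w, y in zip(windows, labels) if y == 0]
    hi = [fn(w) for w, y in zip(windows, labels) if y == 1]
    return {"feature": name,
            "threshold": (statistics.mean(lo) + statistics.mean(hi)) / 2}

def validate(model, windows, labels):
    """'Cloud-side' accuracy check before exporting to the device."""
    fn = FEATURES[model["feature"]]
    preds = [int(fn(w) > model["threshold"]) for w in windows]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy data: calm vs. shaky vibration windows.
windows = [[0.1, 0.2, 0.1, 0.2]] * 5 + [[1.0, 2.5, 1.2, 2.8]] * 5
labels = [0] * 5 + [1] * 5

name = discover_feature(windows, labels)     # "peak" separates best here
model = build_model(windows, labels, name)
accuracy = validate(model, windows, labels)
```

In a production tool the final step would compile an equivalent decision rule to target-specific inference code; here the exported "model" is just the feature name and threshold.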
Our Start-ups in 2018…
Oryx's LiDAR technology offers several unique advantages:
- It’s a coherent system with 1,000,000× better sensitivity (signal-to-noise ratio) than other LiDARs
- It’s a flash system, avoiding any kind of laser steering and therefore as simple as a digital camera
- It’s a true solid-state system, highly durable and subject to silicon economies of scale
- It’s a long-range system, detecting objects 200m away
- It has superior data quality, detecting both range and velocity at each reading
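The last bullet, range and velocity in a single reading, is a hallmark of coherent detection. As a hedged back-of-envelope illustration (Oryx's actual detection scheme is not described here), the textbook FMCW triangular-chirp relations recover both quantities from two beat frequencies:

```python
# Illustration of how a coherent (FMCW-style) LiDAR reads range and
# radial velocity in one measurement. The laser parameters below are
# assumed values for the sketch, not Oryx's published design.

C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed laser wavelength, m
BANDWIDTH = 1.0e9     # assumed chirp bandwidth, Hz
T_CHIRP = 10e-6       # assumed chirp duration, s

def range_and_velocity(f_up, f_down):
    """Recover range and radial velocity from the beat frequencies (Hz)
    measured on the up- and down-chirp halves of a triangular sweep:
    the range term is their average, the Doppler term half the difference."""
    f_range = (f_up + f_down) / 2
    f_doppler = (f_down - f_up) / 2
    rng = C * T_CHIRP * f_range / (2 * BANDWIDTH)  # R = c*T*f_b / (2B)
    vel = WAVELENGTH * f_doppler / 2               # v = lambda*f_d / 2
    return rng, vel

# Round-trip check: a target at 200 m approaching at 30 m/s.
f_r = 2 * 200.0 * BANDWIDTH / (C * T_CHIRP)  # range beat frequency
f_d = 2 * 30.0 / WAVELENGTH                  # Doppler shift
rng, vel = range_and_velocity(f_r - f_d, f_r + f_d)
```

A time-of-flight (pulsed) LiDAR measures only range per shot; the Doppler term is what a coherent receiver adds for free.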
Our Start-ups in 2017…
Immediate targets:
- Perception: surround monitoring system, software smart camera
- Visualization: augmented guidance/navigation for LCD IVI and HUD
We are working with scientific groups worldwide to unlock the potential of new ideas and bring them into production. At later stages, the platform will contribute directly to both in-vehicle and cloud-based autonomous driving systems.
At understand.ai, we push the boundaries of technology and AI to solve the training and verification data problem for autonomous driving. Currently, the quality and quantity of ground-truth annotations is the bottleneck holding back perception for fully autonomous driving. That’s why we combine proprietary algorithms, human intelligence and domain expertise for maximum quality, efficiency and scalability of ground-truth annotations for image, video and LiDAR data. With this, we want to enable further breakthroughs in AI and autonomous driving.
Our Start-ups in 2016 included…
Our first commercialized technology focuses on drowsiness monitoring from eye images. The software can be integrated into any device that provides images of the eye. We have ongoing projects using other physiological signals and images (e.g. facial expressions, heart rate) for drowsiness monitoring, and others extending the analysis of ocular parameters to monitor additional states (e.g. mind wandering). The MIT Technology Review awarded us for our innovations in drowsiness monitoring.
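To make the idea of drowsiness monitoring from eye images concrete, here is a minimal sketch of PERCLOS, a standard eyelid-closure metric from the research literature. The thresholds are common conventions assumed for illustration; none of this is the start-up's proprietary algorithm.

```python
# Illustrative sketch only: PERCLOS (percentage of eyelid closure over a
# time window) is a widely used drowsiness indicator in the literature.
# Both thresholds below are assumed conventions for this sketch.

def perclos(openness, closed_below=0.2):
    """Fraction of frames in which eye openness (0 = fully closed,
    1 = fully open) is below the closure threshold, i.e. the eye is
    at least ~80% closed."""
    closed = sum(1 for o in openness if o < closed_below)
    return closed / len(openness)

def is_drowsy(openness, perclos_limit=0.15):
    """Flag drowsiness when PERCLOS over the window exceeds ~15%
    (an assumed alert threshold for this sketch)."""
    return perclos(openness) > perclos_limit

alert  = [1.0, 0.9, 1.0, 0.8, 0.9, 1.0, 0.9, 0.8]   # eyes mostly open
drowsy = [0.9, 0.1, 0.05, 0.1, 0.8, 0.1, 0.05, 0.9]  # long closures
```

A real system would estimate per-frame eye openness from images first; the metric above only covers the final classification step.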
Its solid-state laser technology allows vehicles to accurately digitize and understand the road and the environment in real time, in all weather and lighting conditions.