imec highlights the importance of cooperation

Wilfried Philips, Image Processing and Interpretation, imec-UGent

Wilfried Philips, Scientific Lead Image Processing and Sensor Fusion, imec-UGent, joins the prestigious AutoSens speaker line-up to present a session in Brussels this September. Wilfried Philips is a senior full professor at Ghent University, where he leads the Image Processing and Interpretation (IPI) research group and the research consortium iKnow, which has realized nine spin-off companies. At imec, he is scientific lead of the Center of Excellence on Image Processing and Sensor Fusion. Wilfried Philips’s and IPI’s research interests relate to real-time computer vision and the fusion of “rich” sensor data (radar/video/LiDAR/thermal/hyperspectral), with a strong focus on industrial applications. IPI has founded three spin-off companies. It has also created “Quasar,” a brand-new high-performance GPU-programming solution that greatly reduces development time.

Wilfried takes time out of his busy schedule to help us understand the importance of cooperation across the length and breadth of our industry.

Could you outline your specific research interests?

In my research group, “Image Processing and Interpretation,” I co-supervise some of the research on real-time video processing, sensor fusion and industrial inspection. However, I have always been interested in multi-disciplinary research and in a wide range of technologies, such as signal processing, mathematics, communications, the internet of things, software-defined radio, big data analytics and parallel processing. I have therefore also participated in research on biomedical signal processing and on IoT sensor networks for air pollution monitoring.

Cooperation with industry

Most of our cooperation with industry takes place through research projects funded by the Flemish government or by the European Union. At the EU level, we have participated in several “ECSEL” (Electronic Components and Systems for European Leadership) and Horizon 2020 projects on various topics: video quality improvement and assessment, sensing for unmanned aerial vehicles, and sensing for elderly care. In Flanders, we have participated in numerous “ICON” projects (projects with a balanced contribution by industry and academia) and bilateral R&D projects on multimedia, video analytics, image analysis and video processing.

We also cooperate directly with local and international companies through strategic partnerships or to solve specific technical problems, e.g., related to industrial inspection, video quality improvement and data analytics. Some of our technology has also been licensed to companies. Last but not least, our technology is also brought to market through the foundation of spin-off companies.

How do imec and Ghent University work together?

Imec is a strategic research center founded by the Flemish government. It aims to be the world-leading R&D and innovation hub in nanoelectronics and digital technologies. As Flanders is a small region in a small country, imec closely cooperates with all Flemish universities to increase critical mass. Ghent University is one of the major universities in Flanders and is proud to have been ranked in the top 100 of the Shanghai institutional ranking of world universities since 2010.

My research group, Image Processing and Interpretation (IPI), is a “core” research group of imec at Ghent University. This means that we receive some structural funding from both imec and Ghent University. However, we still receive most of our research funding on a competitive basis. Also, almost all cooperation with industry is performed through imec. Using imec’s worldwide network has also allowed us to cooperate with major international companies.

To what extent were you involved in the creation of “Quasar”? 

Quasar is a framework for very fast development of parallel algorithms on heterogeneous hardware (GPUs, multiprocessors, vector processors) as found in embedded systems, desktop systems and supercomputers. It was conceived and developed by IPI professor Bart Goossens. The initial goal was to speed up our own research, and it delivered what it promised. Quasar is now being licensed to companies.

My personal contribution has been to assist prof. Goossens in bringing Quasar to the market by talking to potential customers about the current “pains” in the industry (such as overly long development cycles), the obstacles to adopting Quasar, and its required feature set. This helped to fine-tune Quasar and its business model. I also help to acquire research funding for demonstrating and promoting Quasar in challenging and diverse industrial use cases and for continuing its development. For instance, Quasar was used in EU and locally funded projects on real-time sensing for autonomous driving, 4K video processing, and multi-camera body shape mapping. In 2020, with EU funding, a major project on high-speed garbage sorting will start.

I want to emphasize that all of this has been a team effort. The innovation should be credited to prof. Goossens, but as a strong believer in Quasar’s future, I do my best to facilitate its development as a product.

You are a co-founder of the IoT company Senso2Me; can you tell us more about this company?

Senso2Me develops end-to-end ICT solutions for assisting elderly people living at home and in elderly care facilities. Often these people develop health problems, which makes it risky to leave them alone without nearby assistance. This includes people with poor mobility, or with diseases such as Alzheimer’s or Parkinson’s.

Senso2Me provides a wireless personal monitoring and alerting system to connect individuals in need with family and caregivers. The system includes active alert buttons (a button on a key chain which can be pressed for help) and intercom facilities. However, the really innovative part is a passive monitoring system based on wireless motion sensors. The data provided by these sensors is continuously fused and analyzed. Automated alerts are raised when potentially dangerous situations are detected, e.g., not being at home or spending an unusually long time in the bathroom. The solution also provides graphical displays which help to detect changes in lifestyle patterns, e.g., excessive sleeping after an inappropriate medication change.
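To make the passive-alerting idea concrete, here is a minimal Python sketch of one such rule, the “unusually long time in the bathroom” check. This is an assumed illustration, not Senso2Me’s actual code; the room names, the one-hour limit and the event format are all hypothetical.

from datetime import datetime, timedelta

BATHROOM_LIMIT = timedelta(hours=1)  # hypothetical per-room limit

def bathroom_alert(events, now, limit=BATHROOM_LIMIT):
    """events: time-ordered (timestamp, room) motion-sensor readings."""
    if not events or events[-1][1] != "bathroom":
        return None  # the person is not currently in the bathroom
    entered = events[0][0]
    for ts, room in events:
        if room != "bathroom":
            entered = ts  # last time the person was seen in another room
    if now - entered > limit:
        return f"ALERT: in the bathroom since {entered:%H:%M}"
    return None

events = [(datetime(2019, 9, 1, 12, 50), "kitchen"),
          (datetime(2019, 9, 1, 13, 0), "bathroom"),
          (datetime(2019, 9, 1, 14, 15), "bathroom")]
print(bathroom_alert(events, now=datetime(2019, 9, 1, 14, 30)))
# ALERT: in the bathroom since 12:50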

Senso2Me was founded as a spin-off of Ghent University and is located in the “Startup Village” in the city of Antwerp. The company has been growing rapidly and currently counts ten employees, with expertise in electronics, communication, analytics, and sales and marketing. It has formed a strategic partnership with “Zorgbedrijf Antwerpen,” a major care institution in Flanders. Learn more at https://senso2.me/.

Your presentation at AutoSens relates to sensor fusion; how would you define early and cooperative sensor fusion?

Complexity is a major problem in current automotive sensing: sensors such as radars, cameras and LiDARs run complicated analytics, and their reliability fluctuates depending on scene content and on weather and lighting conditions. Fusion of multiple types of sensors is essential to meet the extreme reliability requirements of autonomous driving.

The standard engineering approach to handling complexity is modularity: each “smart” sensor operates as a black box, and the high-level “decisions” or other outputs are fused in a separate “late fusion” module. However, this approach, in which individual sensors’ decisions are combined at a late stage, is suboptimal. For instance, difficult-to-detect pedestrians may produce weak responses in all individual sensors and may therefore remain undetected by any single sensor, while the combined evidence from these sensors would still allow detection.
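As a toy illustration of this failure mode, the Python sketch below uses entirely hypothetical numbers and a simplified likelihood-ratio model: three sensors each see weak evidence that falls below the single-sensor decision threshold, yet the fused evidence clears it.

# Hypothetical per-sensor likelihood ratios for a pedestrian hypothesis:
#   LR = P(observation | pedestrian) / P(observation | no pedestrian).
# A "weak response" means an LR only mildly above 1.
likelihood_ratios = {"radar": 3.0, "camera": 2.5, "lidar": 2.0}

DECISION_THRESHOLD = 10.0  # LR a single sensor needs before reporting a detection

# Late fusion: each sensor thresholds on its own, then hard decisions are ORed.
late_detect = any(lr > DECISION_THRESHOLD for lr in likelihood_ratios.values())
print(late_detect)  # False -- every individual response is too weak

# Fusing the soft evidence instead: for conditionally independent sensors,
# the likelihood ratios multiply, so weak evidence accumulates.
combined_lr = 1.0
for lr in likelihood_ratios.values():
    combined_lr *= lr  # 3.0 * 2.5 * 2.0 = 15.0
print(combined_lr > DECISION_THRESHOLD)  # True -- the fused evidence suffices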

Early fusion combines only partially processed sensor data, rather than their final “decisions.” For instance, each sensor outputs a “likelihood of presence” probability map, rather than a list of detections. While provably optimal in terms of detection performance, this approach has a major downside: it requires a lot more processing at the fusion center and a lot more data communication from the sensors to the fusion center. It also requires very tight cooperation between sensor-processing and fusion developers.
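The following sketch, again with assumed numbers and a simplified independent-sensor model rather than any production pipeline, illustrates the map-based idea: each sensor contributes a per-cell probability map, the maps are fused in log-odds form before any decision is taken, and a weakly visible pedestrian that per-sensor thresholding would miss ends up as the strongest cell of the fused map.

import numpy as np

rng = np.random.default_rng(42)
H, W = 64, 64  # hypothetical grid of cells covering the scene

def sensor_map(spike_cells):
    """One sensor's per-cell 'likelihood of presence' map (all values made up)."""
    m = rng.uniform(0.05, 0.20, (H, W))  # background clutter
    for r, c in spike_cells:             # sensor-specific false-alarm spikes
        m[r, c] = 0.60
    m[32, 40] = 0.45  # a weakly visible pedestrian, seen by every sensor
    return m

radar = sensor_map([(5, 5), (50, 12)])
camera = sensor_map([(17, 60), (44, 3)])
lidar = sensor_map([(9, 33)])

def log_odds(p):
    return np.log(p / (1.0 - p))

# Early fusion: sum per-cell log-odds before any decision is taken.
fused = log_odds(radar) + log_odds(camera) + log_odds(lidar)
print(np.unravel_index(fused.argmax(), fused.shape))  # (32, 40): the pedestrian

# Thresholding each map separately at 0.5 would instead fire only on the
# spurious single-sensor spikes and miss the pedestrian entirely.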

Cooperative fusion is a compromise between early and late fusion. It proposes loosely coupled sensor and inference modules, which communicate over low-bandwidth, low-latency links. As in late fusion, the “smart” sensors take care of most of the processing and produce heavily condensed outputs, e.g., candidate decisions and associated probabilities. However, the sensors now also receive auxiliary inputs from other sensors and/or from the fusion center. This allows sensors to verify weak evidence from other sensors and to tune their internal algorithms. For instance, a radar can fine-tune its detection thresholds or even learn to discriminate between static objects and stationary pedestrians by taking into account the results of deep-learning video analytics.
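A toy sketch of this feedback loop, with a hypothetical interface and made-up scores rather than the actual imec system: the radar keeps its normal threshold everywhere, but becomes more sensitive in cells that the camera analytics have flagged, so weak radar evidence can be confirmed without flooding the fusion center.

DEFAULT_THRESHOLD = 0.6  # the radar's normal operating point (made-up value)
CUED_THRESHOLD = 0.35    # more sensitive where another sensor saw something

def radar_decisions(radar_scores, camera_cues):
    """radar_scores: {cell: detection score}; camera_cues: cells flagged by the camera."""
    detections = []
    for cell, score in radar_scores.items():
        threshold = CUED_THRESHOLD if cell in camera_cues else DEFAULT_THRESHOLD
        if score > threshold:
            detections.append(cell)
    return detections

# Alone, the radar would reject the weak 0.40 response at cell (32, 40);
# the camera's cue lets it confirm that evidence while still sending only
# condensed candidate decisions to the fusion center.
radar_scores = {(32, 40): 0.40, (10, 10): 0.20, (5, 50): 0.70}
print(radar_decisions(radar_scores, camera_cues={(32, 40)}))  # [(32, 40), (5, 50)]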

How far away do you think we are from realising autonomous driving?

Well, that depends on the definition of “autonomous driving.” Experiments with remotely operated, and thus driverless but not really autonomous, cars took place as early as the 1920s. In the 1980s and 1990s, the first successful demonstrations of “real” autonomous driving on existing road networks were made. Recently, the first experimental autonomous driving systems have gone commercial, albeit with human backup drivers.

So in one sense autonomous (fully automated) driving is already here. However, I estimate that many years will pass before we see a majority, or even a large minority, of autonomous cars on the street. On the one hand, technical obstacles remain. Driving in cities is complicated even for humans, and it is notoriously hard for automated systems, even those with plenty of sensors and masses of compute power. However, I do not see this as the main obstacle, as I can easily see a future in which human road users learn to live with the remaining dangers caused by self-driving cars. Other important obstacles relate to system cost and power consumption, public acceptance, and possible hacking; gradually these will be overcome, but it is difficult to predict when.

An intriguing question is whether or not autonomous cars – as we think of them today – will be replaced by other technology, e.g., autonomous flying taxis, which will not be hindered by annoying human road users. Autonomous driving is also no solution for road congestion, parking problems, etc. So we will probably see an evolution towards highly flexible public transport. If that is indeed the future, then traffic will become much more predictable, and some of the current obstacles to autonomous driving may be greatly reduced.

What are you looking forward to most about speaking at AutoSens?

It gives me an opportunity to highlight imec’s activities in the domain of sensor fusion for driver assistance and autonomous driving to the world’s leading community of experts gathered at AutoSens. imec is internationally well known for its nano-electronics and sensor hardware, but less so for its more recent algorithm- and software-related activities. So the opportunity to speak at AutoSens is quite welcome. I also hope to obtain feedback from industrial players on the best directions for our research and to gain contacts for future cooperation.


Come and hear “Early and cooperative sensor fusion – benefits and practical experience” at AutoSens in Brussels with Wilfried Philips, Scientific Lead Image Processing and Sensor Fusion at Ghent University and imec. Book your tickets to join here >>
