The Dreams4cars Coordinator Introduces the Project

01/03/2018

Professor Mauro Da Lio is Professor of Mechanical Systems at the University of Trento. His research interests span from vehicle dynamics to driver modelling, from driver assistance systems to automated driving, and from modelling of human sensorimotor control to artificial cognitive systems and robotics. He has been involved in several EU Framework Programmes (6, 7 and 8) in the Intelligent Vehicles domain. He is the coordinator of the H2020 "Dreams4cars" project. The following article is based on two interviews with him in early 2018.

The world eagerly awaits fully autonomous vehicles
Autonomous vehicles are undoubtedly one of the hot technological topics of the decade. The dream of being chauffeured around by our cars is now tangible enough to have triggered huge investments from the globe's most entrepreneurial and forward-looking companies. The world waits with bated breath for its much-anticipated transportation upgrade, and from the confident reports in the media it seems as if the technology is almost there, with just a few loose ends to tie up.

Is that really where the technology is, though? Given the latest accidents and the first fatality at the hands of semi-autonomous vehicles, we are perhaps guilty of overzealously fast-tracking their road certification, such is our haste to elevate ourselves beyond driving and reap the vast profits of delivering such an experience.

Computers are amazing, but not conscious, and this is where the problem lies
One of the major issues with the technology is developing a control agent that is able to react correctly in all situations, as humans normally would. That is much more complicated than you might think, simply because humans are conscious beings with an incredible capacity to react and adjust to multiple parameters simultaneously. Computers, on the other hand, are not conscious, no matter how much faster than us they are at processing data.

All autonomous vehicle development projects have something in common: every possible scenario and response must be programmed into the agent's software. Road testing is showing again and again that when a car meets a scenario it doesn't know, it fails, and in real traffic such failures would translate into human fatalities. Autonomous cars are not expected to make accidents a thing of the past, but they are expected to perform better than humans, which means roughly one accident per hundred million miles. At the moment no project is anywhere near this threshold, despite what is reported in the media. The scenarios are endless, and engineers have to programme and reprogramme software that simply obeys what it has been preprogrammed to do. One question still haunts autonomous driving projects: what if the car meets a scenario it doesn't know?

The way forward – teaching cars to learn from their experience
What if it were possible for the vehicle agent to learn from its experience on the road, just as we ourselves do? What if engineers didn't have to come up with all the scenarios the agent may meet on the road, but could have the agent discover them for itself, remember experiences and choose the best response? This, in a nutshell, is what the Dreams4cars project is about. It aims to engineer an artificially intelligent driver that is strongly bio-inspired, because it implements human learning mechanisms related to reanalysing one's own experience. It is funded by the European Union's Horizon 2020 programme, and its partners include four universities and two research institutes across the continent.

Dreams4cars is pioneering self-learning neural networks in autonomous vehicles
Since 2012, artificial neural networks modelled on human neural networks have been able to recognise patterns such as traffic lights at human or even super-human speed. It is their ability, through algorithms, to learn to perceive based on previous experience that is so useful in autonomous driving. If an autonomous driving agent can learn from its driving experiences, it can call on previous experience to guide it towards the smartest choice every time it meets a new scenario. Computers are now fast enough to process all the data in real time.

Dreams4cars is developing the neural network to perceive the present and then “dream” the most likely future
The University of Trento in Italy, one of the project partners, has been working on a neural network perception system based on the dorsal stream (a visual processing pathway in the brain). By taking a simple video of three moving shapes (a circle, a square and a triangle), they have been able to train the network to recognise and reproduce these shapes and their movements almost exactly as the generated video produced them, with only minor distortions.

The network has learned what a circle, a triangle and a square are, as well as their trajectories. The next step is to stop the moving video and have the network imagine what the next frame, and the frame after that, and so on, would look like. In the real world, the agent would then not only process what is happening in real time in order to take the best course of action, but also be informed by imagining what is most likely to occur.
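
To make the idea concrete, here is a minimal sketch of next-frame prediction, assuming PyTorch and 64x64 greyscale frames of a moving-shapes video. The FramePredictor class, layer sizes and training details are illustrative assumptions, not the project's actual architecture:

```python
# A minimal sketch of next-frame prediction (illustrative assumptions only).
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    """Encodes the current frame into a compact code, then decodes the code
    into a prediction of the *next* frame."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                     # 1x64x64 -> 32x8x8
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                     # 32x8x8 -> 1x64x64
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

model = FramePredictor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on consecutive frame pairs: predict frame t+1 from frame t.
# `video` is a stand-in tensor; a real run would load the shapes video.
video = torch.rand(100, 1, 64, 64)
for t in range(len(video) - 1):
    prediction = model(video[t:t+1])
    loss = loss_fn(prediction, video[t+1:t+2])
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```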

From recognising shapes to recognising the automobile environment
The university is in the initial stages of feeding the network real road videos to process, and having it imagine what would come next when the video is stopped. Although the dream images at this stage are distorted, they are recognisable. Since the network is self-learning, over time it will learn to recognise, and therefore better represent, what it expects to see. It will then be able to imagine, or dream, how it should react to real road situations.

The significance of this development is that for the first time an artificial neural network is able to produce visual imagery in video format. It imagines scenarios and responses: braking, steering and the behaviour of third parties. It aligns with, and even learns from, human biology.

Visual reproduction from the agent means better perception all round
This exciting stage of having the network visually reproduce the automobile environment means that a system can be set up which is capable of estimating the depth and the movement seen by the vehicle's cameras, which is absolutely essential to any autonomous vehicle system.

The next step the university is working on is being able to stop a real road video from a dash cam and have the agent imagine the continuation: one that would appear to be simply the continuation of the real video, with such accuracy that if you were to view the whole video you wouldn't be able to tell where the real footage stopped and the prediction started. Because the neural network is all about self-learning, we can expect to see sharper, clearer and less distorted reproductions, or dreams, the more the network is exposed to the road. This is based on Damasio's idea of running the dorsal stream backwards, a form of episodic simulation.
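
As a rough illustration of how such a continuation could be generated, the sketch below builds on the hypothetical FramePredictor above and is equally an assumption rather than the project's code: each predicted frame is fed back in as the next input.

```python
import torch

# "Dreaming" a continuation: freeze the real video at its last frame, then
# roll the predictor forward on its own output.
@torch.no_grad()
def dream(model, last_real_frame, n_steps):
    frames, frame = [], last_real_frame
    for _ in range(n_steps):
        frame = model(frame)     # the prediction becomes the next input
        frames.append(frame)
    return torch.cat(frames)     # the imagined continuation of the video
```

In a rollout like this, small errors compound with every step, which is one way to understand why longer dreams are more distorted and why more exposure to the road sharpens them.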

In the short term, because the agent is able to perceive depth, it will be able to perceive and reproduce the background environment while the car manoeuvres.

How the network actually learns – Giving meaning to shapes
In the beginning, the neural network doesn't know what a car or a tree is; only after it has perceived their shapes and seen them again can it identify them. At some point it may realise that a part of the image is a rigid body moving in the background, so less information needs to be reproduced. This introduces the concept of data compression, which in effect creates the concept of data prioritisation.

The fundamental mechanism, explained by Damasio's architecture, is that if you force the neural network to compress this information and at the same time force it to reproduce the sensory data, the network learns the concept of what is important in the sensory data. So it learns by itself that a tree or a car must be an object. It didn't know before that it was an object, but it recognises a pattern in its size and behaviour as it passes across the visual sensors and through the visual data delivered to the network. Automatically, as the network is exposed to more and more time on the road, objects become meaningful concepts, because it has seen them before and remembers them.

For example, if it passes close to a pedestrian, it recognises that it could have killed the pedestrian, and then processes that data. The agent will learn by self-instigating scenarios, which is of course what happens with humans: we learn from experience. In this way the internal state of the agent, or neural network, is updated with a more accurate view of reality. In the deep neural network community this kind of architecture is known as an autoencoder.
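
As an illustration of the autoencoder idea, here is a minimal sketch assuming PyTorch; the Autoencoder class and its sizes are assumptions, not the project's design. The network must squeeze each frame through a sixteen-number bottleneck and still reproduce it, so the bottleneck is forced to capture compact concepts (object identity, position) rather than raw pixels:

```python
# A minimal autoencoder sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, bottleneck=16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 256), nn.ReLU(),
            nn.Linear(256, bottleneck),            # compression: 4096 -> 16
        )
        self.decode = nn.Sequential(
            nn.Linear(bottleneck, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Sigmoid(),
            nn.Unflatten(1, (1, 64, 64)),
        )

    def forward(self, frame):
        return self.decode(self.encode(frame))

# Training minimises reconstruction error; the only way to succeed through a
# 16-number code is for that code to capture what matters in the scene.
model = Autoencoder()
frame = torch.rand(1, 1, 64, 64)
loss = nn.functional.mse_loss(model(frame), frame)
```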

The network will learn to filter data, just as humans do
In conventional autonomous cars, each object in the field of view is classified according to categories defined for them by engineers. The car may see a truck and then, half a second later, a balloon in the same place. This does not surprise the agent, although it should know that a truck cannot turn into a balloon; it cannot "see" otherwise until it is told. The Dreams4cars agent, however, is able to remember the past, because the internal cortexes it mimics from biology are dynamic fields: they retain information and have their own dynamics. We can say that genetically they have learned that big objects cannot suddenly turn into other objects. They have therefore educated themselves with a predefined model of the world; they have learnt what regularity is. What the network is actually doing is filtering the sensory input.
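
One simple way to picture such filtering (a minimal sketch and an assumption on our part, not the project's mechanism) is an agent that keeps a persistent belief about each tracked object and blends new observations into it, so a one-frame "truck to balloon" flip is damped rather than taken at face value:

```python
# Temporal filtering of classifications (illustrative sketch).
import numpy as np

CLASSES = ["truck", "balloon", "car", "pedestrian"]

def update_belief(belief: np.ndarray, observation: np.ndarray,
                  alpha: float = 0.2) -> np.ndarray:
    """Blend new class probabilities into the running belief.

    belief, observation: probability vectors over CLASSES.
    alpha: weight of a single frame; small alpha = strong memory.
    """
    belief = (1 - alpha) * belief + alpha * observation
    return belief / belief.sum()

# A tracked truck (belief near-certain) observed for one frame as a balloon:
belief = np.array([0.97, 0.01, 0.01, 0.01])
glitch = np.array([0.05, 0.90, 0.03, 0.02])
belief = update_belief(belief, glitch)
print(CLASSES[int(belief.argmax())])   # still "truck": memory filters the flip
```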

So the Dreams4cars agent learns to reproduce the sensory input, formulate notions of objects, and then filter the input with the regularities it becomes aware exist in the world. In these respects Dreams4cars is at the frontier of research, because its scientists are experts in brain functionality, whereas 99.9% of others working on deep neural networks are primarily involved not with brain functionality but with mathematics and IT.

Cars will be able to learn from one another and become more streetwise
Given the self-learning potential of the Dreams4cars neural network agent, we can confidently expect it to become smarter and smarter on the road, because it will not only collect and learn from its own data, but will also have access to the data of other Dreams4cars agents, as if it had driven those miles itself.

It is important to realise that no autonomous vehicle in the foreseeable future is expected to do away with accidents, and if a vehicle crashes, or is even written off, its data will be useful for other agents to download and be taught by. Aren't we all better drivers 5 or 10 years after we pass our driving test? Dreams4cars vehicles will be too, with every new generation.

Dreams4cars will simulate accidents from other autonomous vehicles and show how it would have responded otherwise
Early 2018 saw the first autonomous vehicle fatality and one or two other autonomous vehicle accidents. Plans are in place to simulate these accidents and vary the parameters, such as vehicle speed and pedestrian behaviour, in order to examine thoroughly what went wrong, but also to show how the Dreams4cars agent would have responded in a variety of similar situations.
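
The parameter-variation idea can be sketched as a simple sweep; everything below, including the placeholder physics inside simulate(), is a hypothetical stand-in for a real simulator run, not the project's tooling:

```python
# Replay a scenario while sweeping vehicle speed and pedestrian behaviour.
import itertools

def simulate(speed_kmh: float, pedestrian_delay_s: float) -> str:
    """Stand-in for a real simulator run; returns the outcome of one variant."""
    # Placeholder physics: faster approach + later pedestrian = less margin.
    margin = 60.0 / speed_kmh - pedestrian_delay_s
    return "avoided" if margin > 0.5 else "collision"

speeds = [30, 40, 50, 60, 70]     # km/h
delays = [0.0, 0.5, 1.0, 1.5]     # seconds before the pedestrian steps out

for speed, delay in itertools.product(speeds, delays):
    print(speed, delay, simulate(speed, delay))
```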

The Dreams4cars agent is driving a simulated car – By June it will be on the road
Dreams4cars now has the software it needs and is driving a simulated car; some simulations have already been carried out, including a video comparison of the Dreams4cars vehicle and a conventional autonomous vehicle stopping at a traffic light. This video can be viewed on the dreams4cars.eu website.

In April 2018, a London meeting between project partners will bring the dreaming mechanism together to further enhance the agent, and the Swedish partners will build a system that creates imaginary roads. Data has also been collected from drivers stopping at stop signs and yield lines: about 700 real-life manoeuvres have been recorded, which are being recombined to generate imaginary stop manoeuvres, so that a simulation can be run in which the system drives behind a car that stops in a hypothetical manoeuvre.
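
One plausible way to recombine such recordings (an assumed parameterisation for illustration, not the project's method) is to reduce each manoeuvre to a few parameters and then mix parameters from different real manoeuvres into new, imaginary ones:

```python
# Recombining recorded stop manoeuvres into imaginary ones (illustrative).
import random

# Each recorded manoeuvre reduced to (initial speed m/s, braking onset s,
# peak deceleration m/s^2); three stand-ins for the ~700 real recordings.
recorded = [(13.9, 1.2, 2.8), (11.1, 0.8, 3.5), (16.7, 1.5, 2.2)]

def imaginary_stop():
    """Mix parameters from different real manoeuvres into a new one."""
    v0, _, _ = random.choice(recorded)
    _, onset, _ = random.choice(recorded)
    _, _, decel = random.choice(recorded)
    return v0, onset, decel

for _ in range(5):
    print(imaginary_stop())
```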

A workshop has already taken place in Turin with CRF, the Fiat Research Centre and a project partner, on appropriate lateral and longitudinal control for the Dreams4cars test vehicle. They are now testing the system hardware-in-the-loop, which means that a computer emulates the real car while the rest of the system is realistic. Then the software will go from the emulator into the real car, and road testing will begin with very simple manoeuvres such as car following and lane changing.

Dreams4cars has the integrity to function across the whole spectrum from perception to action
Dreams4cars takes a unified approach to everything in the real car, from perception to control. This may not sound significant. However, due to the complexity of the systems and the official autonomy rating levels for each piece of equipment, integrating all the systems on a real car is proving far from straightforward; you cannot simply plug one device into another. Car manufacturers are realising this and are considering developing products themselves that they had previously relied on parts manufacturers for. They have to master the complete process, so as not to have to trust or rely on another's system, which in any case is not complete.

Test drive Dreams4cars for yourself in complete safety
The system will be made available on OpenDS, an open-source driving simulator, so that the public can view and test it for themselves. This is a significant development in allowing passengers to reassure themselves about Dreams4cars' reliability and safety.

A further simulator, CarMaker, is a similar tool, but a professional one used by the automotive industry. CRF is in the process of validating Dreams4cars in CarMaker in order to demonstrate its integrity to the automotive industry.


This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 731593.