By Lakshmi Sandhana
A team of researchers is looking at the next generation of autonomous vehicles.
Cars that can stay in a motorway lane without the help of a human driver are being developed by researchers at North Carolina State University.
Software developed by the researchers helps a computer keep a car within a lane on a highway while staying aware of other lanes and vehicles travelling alongside. It can even read road signs.
The technology is an improvement on current vision systems, which typically can find lanes but nothing more.
Many of those basic systems were used on vehicles entered in the DARPA Grand Challenge, a competition for driverless vehicles, which relied on GPS co-ordinates to know where they were going.
Combined with the GPS were other sensing systems, such as roof-mounted light detection and ranging (LIDAR) units, video cameras, and inertial guidance systems (with gyroscopes and accelerometers), to help vehicles navigate off-road terrain.
The high cost of these sensors means this approach is not appropriate for privately-owned cars. The DARPA Grand Challenge was also about driving scenarios far removed from the everyday experience of human drivers.
By contrast, the technology developed by the NC State researchers relies completely on computer vision programming, which allows a computer to understand what a video camera is looking at - whether it is a stop sign or a pedestrian.
The program uses algorithms to sort visual data and make decisions related to finding the lanes of a road, detecting how those lanes change as a car is moving, and controlling the car to stay in the correct lane. It does this while avoiding other vehicles and without becoming confused by multiple lanes.
"The algorithm finds lanes reasonably well and it finds all the lanes at once," said Dr Wesley Snyder, lead researcher on the project.
"The novelty is primarily in how we accumulate evidence. Our approach uses evidence from many locations to vote for where the lanes are and which direction they are facing."
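The researchers' own code is not published in this article, but the voting idea Dr Snyder describes is closely related to the classical Hough transform. The sketch below (illustrative only, with a synthetic edge map standing in for real camera data) shows how edge pixels can vote for line parameters so that all lanes emerge as peaks in one accumulator:

```python
import math

# Minimal Hough-style voting sketch (not the NC State code): each edge
# pixel votes for every line (angle theta, offset rho) it could lie on;
# peaks in the accumulator mark lane boundaries - all lanes at once.

def hough_vote(edge_points, width, height, n_theta=180):
    diag = int(math.hypot(width, height))  # largest possible |rho|
    # accumulator indexed by [theta bin][rho + diag]
    acc = [[0] * (2 * diag + 1) for _ in range(n_theta)]
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[t][rho + diag] += 1
    return acc, diag

# Synthetic "edge map": pixels along two vertical lane markings at x=20, x=70.
points = [(20, y) for y in range(50)] + [(70, y) for y in range(50)]
acc, diag = hough_vote(points, 100, 50)

# Rank accumulator cells by votes; the two strongest recover both lanes.
cells = sorted(((votes, t, r - diag)
                for t, row in enumerate(acc)
                for r, votes in enumerate(row) if votes),
               reverse=True)
print(cells[:2])  # both top cells sit at theta bin 0, with rho 70 and 20
```

Because every pixel contributes evidence to every candidate line, a few missing or noisy pixels do not break the detection, which is one reason voting schemes suit cluttered road scenes.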
It's a big step towards a more reliable and accurate vision-based driving system.
The ultimate goal is to develop a fully autonomous driverless vehicle. The initial aim is more modest - develop a system that could take control of a vehicle should a driver suffer from a sudden complication such as a heart attack, stroke or seizure.
They want the system to be able to signal for help, slow down and pull off the road, effectively driving autonomously, if only for a limited time.
Observational skills and judgement are difficult to mimic.
However, even that limited autonomy presents stiff challenges.
Present day computers are too slow to carry out the real-time image processing required. Powerful computers that could do the job are not portable.
A laptop can only analyse two pictures a second - too slow to control a speeding car.
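A quick back-of-envelope calculation makes the point; the 70 mph figure is an assumed motorway speed, not from the article:

```python
# How far a car travels between analysed frames at two frames a second.
# The 70 mph speed is an illustrative motorway figure (an assumption).

MPH_TO_MS = 0.44704            # metres per second per mile per hour
speed = 70 * MPH_TO_MS         # ~31.3 m/s
frames_per_second = 2          # the laptop rate quoted above

gap = speed / frames_per_second
print(round(gap, 1))           # ~15.6 metres pass between frames
```

Roughly a bus length of unobserved road between frames is clearly too coarse for lane keeping.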
Also, driving requires many different observational skills, which are difficult to recreate computationally.
"Researchers understand that how humans interpret data and recognise patterns is very different from the mainstream computer vision approaches used today," said Professor Dennis Hong, from the department of mechanical engineering at Virginia Tech.
"When driving, a human has general situational knowledge and uses this to infer where they are and what is happening around the car.
"For example, if we see a kid on one side of the street with an excited facial expression, facing the other side of the street where there is a candy store, then we immediately infer with high probability that the kid might run across the street in front of the car, and use this expectation in driving a car.
"This is an example of a situation that a computer system will have a difficult time understanding. Without this kind of general knowledge for understanding a scene, it is difficult to directly mimic how humans use vision for driving."
"Until we crack the issue of creating a vision system that can handle situational awareness, fully autonomous driving will remain only a possibility.
"We need to be able to understand how humans use their eyes and brains and mimic that with computer vision systems. We'll also need more intelligent sensors, radars, smart cameras and efficient computing power."
"Autonomous driving is a goal, but not something we are going to see in the immediate future, unless you put restrictions on the nature of the road and traffic," said Dr Snyder.
"For example, driving on a freeway with light or no traffic will be possible in the reasonably near future."
Creating such a system is only the first step. How ready will people be to surrender complete control of their vehicles to a program?
"We need to continue to make progress to the point of vehicles having complete situational awareness with zero margin for error," said Professor Azim Eskandarian, from the Center for Intelligent Systems Research at George Washington University.
"We also need to do this in a way that is affordable and commercially viable. We are moving in the right direction on both of these and making incredible progress; both will come eventually, but another issue we face is getting acceptance from the driving public of such a system."
"People are so used to controlling their vehicles and after the science is developed there will be a period of adoption and acceptance.
"As we progress in assisted driving, the public will become more receptive to the idea of conceding control of their vehicles."