AImotive aiDrive Level 5 Self-Driving Car Comes to US

AImotive is a full-stack software company delivering AI-based software for self-driving cars.


AdasWorks, an autonomous vehicle startup based in Hungary, has changed its name to “AImotive” and opened an office in Mountain View, Calif., to bring its AI-powered software for self-driving cars to the US.

AImotive isn’t focusing on hardware and chips; instead, it has built a full-stack system called aiDrive that is designed to be a Level 5 system, meaning passengers simply input their destination and the car does the rest. aiDrive consists of a recognition engine, location engine, motion engine and control engine. AImotive global COO Niko Eiden tells Robotics Trends that the recognition engine is the heart of aiDrive, as it’s connected to all the sensors in the car.
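
AImotive hasn’t published aiDrive’s internals, but the four-engine split can be pictured as a simple perception-to-actuation pipeline. The sketch below is purely illustrative; the class and method names are hypothetical, not AImotive’s API.

```python
# Illustrative sketch of a four-stage pipeline like the one aiDrive describes.
# All class and method names here are hypothetical, not AImotive's actual code.

class RecognitionEngine:
    def detect(self, camera_frames):
        """Turn raw camera frames into labeled objects (pedestrians, signs, cars...)."""
        return []

class LocationEngine:
    def localize(self, gps_fix, detections):
        """Combine GPS with recognized landmarks to place the car on the road."""
        return (0.0, 0.0)

class MotionEngine:
    def plan(self, pose, detections):
        """Predict other objects' paths and choose a safe trajectory."""
        return []

class ControlEngine:
    def actuate(self, trajectory):
        """Translate the trajectory into steering, throttle, brake and signal commands."""
        pass

def drive_step(sensors, recognition, location, motion, control):
    """One tick of the loop: sense, localize, plan, act."""
    detections = recognition.detect(sensors.camera_frames())
    pose = location.localize(sensors.gps(), detections)
    trajectory = motion.plan(pose, detections)
    control.actuate(trajectory)
```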

“Having one centralized computer will make things easier and much less expensive,” Eiden says. “The recognition engine is vision-based. We believe the way to design an architecture is to concentrate on vision and a human-like approach to driving. Traffic signs and traffic lights are made for human vision. Instead of relying on 3D mapping, we want our cars to see the roads as humans would so we can drop our car into any part of the road and it’ll drive.”

Founded in 2015, AImotive has grown from 15 engineers to 120 engineers and researchers.

How aiDrive Works

The name change reflects AImotive’s vision of creating self-driving cars that work worldwide in any weather. To do this, the recognition engine takes information from between six and twelve cameras and breaks it down. Eiden says the recognition engine learns continuously and has a pixel-precise segmentation tool that can recognize up to 100 different types of objects, including pedestrians, bicycles, animals, buildings and obstacles.
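
Pixel-precise segmentation of this kind is typically implemented as per-pixel classification: every pixel of every camera frame is assigned one of the known classes. A minimal sketch of the idea, assuming a generic segmentation network’s output rather than AImotive’s own model:

```python
import numpy as np

# Hypothetical class list; the article only says "up to 100 different objects".
CLASSES = ["road", "pedestrian", "bicycle", "animal", "building", "obstacle"]

def segment_frame(class_scores):
    """Per-pixel classification: class_scores has shape (height, width, num_classes);
    each pixel is assigned the index of its highest-scoring class."""
    return np.argmax(class_scores, axis=-1)

# One label map per camera; aiDrive reportedly fuses six to twelve such views.
camera_scores = [np.random.rand(480, 640, len(CLASSES)) for _ in range(6)]
label_maps = [segment_frame(scores) for scores in camera_scores]
```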

The location engine uses standard GPS and data captured by the recognition engine to identify where the self-driving car is on the road. The motion engine then takes all that information, tracks moving objects in real time, and predicts their speed and future paths so the self-driving car can take the optimal route. The control engine is the execution component that manages acceleration, braking, steering, gear shifting, and auxiliary functions such as turn signals, headlights and the car horn.
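
The prediction step can be illustrated with the simplest possible model: extrapolate each tracked object along its current velocity and check whether a candidate path stays clear. aiDrive’s actual predictor is certainly more sophisticated; this is only a toy example with made-up parameters.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # position in meters, in the ego vehicle's frame
    y: float
    vx: float  # estimated velocity in m/s
    vy: float

def predict(obj, t):
    """Constant-velocity extrapolation t seconds ahead (toy model)."""
    return obj.x + obj.vx * t, obj.y + obj.vy * t

def path_is_clear(ego_position_at, objects, horizon=3.0, margin=2.0, steps=10):
    """Reject a candidate path if any predicted object comes within `margin` meters."""
    for i in range(steps + 1):
        t = horizon * i / steps
        ex, ey = ego_position_at(t)          # planned ego position at time t
        for obj in objects:
            ox, oy = predict(obj, t)
            if (ex - ox) ** 2 + (ey - oy) ** 2 < margin ** 2:
                return False
    return True
```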

AImotive’s training technique is also scalable, with a real-time simulator tool that trains the AI for a wide variety of traffic scenarios and weather conditions.
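
In practice, training against a simulator means generating large numbers of randomized scenarios, varying weather, lighting, road friction and traffic, and replaying them faster than real time. A hypothetical scenario generator might look like the following; the parameters are illustrative, not AImotive’s.

```python
import random

WEATHER = ["clear", "rain", "snow", "fog", "night"]

def random_scenario(seed=None):
    """Draw one randomized training scenario (illustrative parameters only)."""
    rng = random.Random(seed)
    return {
        "weather": rng.choice(WEATHER),
        "traffic_density": rng.uniform(0.0, 1.0),  # fraction of lanes occupied
        "pedestrian_count": rng.randint(0, 20),
        "road_friction": rng.uniform(0.2, 1.0),    # ~0.2 for ice, ~1.0 for dry asphalt
    }

# Thousands of such scenarios can be generated and replayed against the driving stack.
scenarios = [random_scenario(i) for i in range(10_000)]
```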

Eiden says aiDrive is hardware agnostic, meaning it can work with any camera and computer chip. To keep costs down, AImotive uses off-the-shelf components. Eiden says its Toyota Prius test car, for example, has about $2,000 worth of self-driving electronics (cameras and a processing unit), but that figure could eventually drop to $500.
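
In software terms, “hardware agnostic” usually means the driving stack talks to sensors and accelerators through a thin abstraction layer rather than to any particular chip. A minimal illustration of the idea; the interfaces below are hypothetical, not AImotive’s.

```python
from abc import ABC, abstractmethod

class Camera(ABC):
    @abstractmethod
    def read_frame(self):
        """Return the latest image, whatever the underlying sensor is."""

class InferenceBackend(ABC):
    @abstractmethod
    def run(self, network, inputs):
        """Execute a neural network on whichever chip is available (GPU, DSP, FPGA...)."""

# Concrete subclasses would wrap a specific camera driver or vendor SDK,
# while the rest of the stack only ever sees these abstract interfaces.
```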

AImotive has been working with the Khronos Group to create standards for the deployment and acceleration of neural network technology. AImotive recently said it “saw the growing need for platform-independent neural network-based software solutions in the autonomous driving space. We cooperate closely with chip companies to help them build low-power, high-performance neural network hardware and believe firmly that an industry standard, which works across multiple platforms, will be beneficial for the whole market. We are happy to see numerous companies joining the initiative.”

The NVIDIA-based computer in the trunk of AImotive’s self-driving car. (Credit: AImotive)

AImotive Still Needs California Driver’s License

AImotive, which has conducted street tests in Hungary, doesn’t have permission yet to test its aiDrive-powered self-driving cars on public roads in California, but Eiden says opening the Silicon Valley office will help get the process started. “It’s a complicated process,” Eiden says. “We need to have employees in the US before we apply for the license. Because we’re not established in the US, it just takes time.”

The move to California brings AImotive closer to one of its investors, NVIDIA. The two companies could also be seen as competitors, but Eiden says the relationship with NVIDIA is great. CEO and founder Laszlo Kishonti spun AImotive out of his automotive testing company, Kishonti Ltd., after helping NVIDIA provide self-driving car technology to Tesla, which uses NVIDIA computers in its latest cars and Autopilot.

NVIDIA and AImotive are working together on a self-driving car project for Volvo. AImotive has raised a total of $10.5 million. Other investors include Robert Bosch Venture Capital, Inventure, Draper Associates, Day One Capital Fund Management and the Tamares Group.

AImotive’s self-driving car uses inexpensive cameras to detect objects on the road. (Credit: AImotive)

Eiden says the company plans to open offices in Japan and China in 2017. He says AImotive also wants to eventually open an office in Finland because it “has the best autonomous driving regulations in place. If you can ensure the safety of the self-driving car, you can legally drive on the streets without a safety driver in place.”

When Will aiDrive Be Ready?

Eiden says it’s impossible to predict when fully autonomous vehicles will be available to the masses.

“The pace at which neural networks are developing is breathtaking,” says Eiden. “Nobody has been using AI inside cars on public roads like we’re planning. We can move faster than corporations at this point because of our size. But there are many things from a legislation standpoint that need to develop. We think we could have a car on the roads by 2020-2021, but there are things around us that we can’t control. So it’ll most likely be beyond that when we have a car available for people on the roads. But how far beyond that? I can’t really say.”

AImotive is also listed as an exhibitor at CES 2017 in Las Vegas. Self-driving cars have had a huge presence at CES in recent years, so perhaps AImotive will be showcasing its aiDrive self-driving car system up and down the Strip.




About the Author

Steve Crowe · Steve Crowe is managing editor of Robotics Trends. Steve has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.




Comments

Totally_Lost · November 14, 2016 at 8:49 pm

Hmm ... from what I’ve read in a half dozen other articles about the firm, they are not committed to taking it to market, but are instead trying to create the impression of a product with a lot of hype to increase the value of the company in some anticipated acquisition in the next few years.

So back to my standard criticism of this technology ... it is only safer than a human when it can exceed the ability of a human to detect critical objects/events and formulate/execute a correct defensive driving strategy. The solutions for this problem are exponentially harder with increased speeds ... 25mph, as Google runs, is relatively easy with current sensor and processing technology, because the number of objects, and their relative size to the sensors, limits the problem significantly.

The physics are relatively simple, in that the AI system only needs to examine objects inside the stopping distance of the vehicle, plus a small margin for recognition and strategy processing latencies.

At Google’s 25mph that is 31-35ft on dry pavement, 62-70ft for wet pavement, and 330-400ft on icy roads. There is a reason that Google doesn’t test on wet or icy roads: they cannot manage the sensor processing, recognition, and strategy tasks out to 400ft.

At 60ft for wet pavement, it requires 4x the number of pixels and processing to maintain the same ability to recognize same-size objects at twice the distance. It also means the recognition and strategy functions have about 4x more objects to track, identify, and prepare defensive driving strategies for.

At 25mph that becomes a 400ft stopping distance on icy pavement, since they are claiming all-weather solutions. That requires 256x more sensor pixels to recognize objects at this distance. It also means that the number of objects to be identified and tracked grows by this factor as well.

Double the speed to 50mph, and the stopping distance on icy roads is now 1,250 to 1,500ft, increasing the required sensor resolution and processing requirements by more than 1024x, and closer to 2048x.
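
The scaling claim can be written down directly: with a fixed field of view, keeping the same number of pixels on a target at k times the distance needs roughly k² the sensor resolution. A rough sketch of that arithmetic, using the stopping distances quoted above (the figures are the comment’s, not measured values):

```python
# With a fixed field of view, keeping the same pixel coverage on an object at
# k times the distance needs roughly k^2 the pixels. Stopping distances below
# are the figures quoted in this comment, not measurements.

dry_25mph = 33  # ft, baseline case
cases = {"wet @ 25mph": 66, "icy @ 25mph": 400, "icy @ 50mph": 1375}  # ft

for name, dist in cases.items():
    k = dist / dry_25mph
    print(f"{name}: {dist}ft is ~{k:.0f}x the range -> ~{k*k:.0f}x the pixels")
```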

Let’s just say that is impossible with $14 cameras today, and requires a lot more processing hardware and power than will fit in the trunk.

This is why this product, and Otto for trucks, will fail to meet safety review.

kishontilaci · November 15, 2016 at 9:29 am

Hi Totally_Lost,

Thanks for the comments; I’ll try to explain:

Obviously you need to slow down on wet and icy roads, like humans do, so the time to collision is roughly the same in any weather. It’s that simple.

On the other hand, if you want to increase your view distance, the additional processing work is much smaller, because you do not need NxN times more resolution; you just need N additional narrow-field cameras. For example:

If you have a single 2Mpixel camera with a 90-degree view, your view distance might be limited to 80m; with an additional 45-degree camera it doubles to 150+ meters, and if you add another at 22.5 degrees (the third one) it could be 300+ meters. Obviously this increases your processing by only 3x, and only for the front view.
It would also increase the safety of the system.
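
The geometry behind this is that, at the same sensor resolution, halving the horizontal field of view doubles the pixels per degree, so an object of a given size remains resolvable at roughly twice the distance. A rough illustration using the figures from this reply (the 80m baseline is the quoted value, not a measurement):

```python
# At a fixed sensor resolution, pixels per degree double each time the field of
# view is halved, so the usable detection range roughly doubles as well.

sensor_width_px = 1920   # horizontal resolution of a ~2Mpixel camera
base_range_m = 80        # quoted range for the 90-degree camera

for fov_deg in (90, 45, 22.5):
    px_per_deg = sensor_width_px / fov_deg
    approx_range = base_range_m * (90 / fov_deg)
    print(f"{fov_deg:>5} deg: ~{px_per_deg:.0f} px/deg, usable range ~{approx_range:.0f} m")
```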

Please let me know if you have further questions!

Regards
Laszlo
CEO of aiMotive

Totally_Lost · November 15, 2016 at 10:16 am

Several problems with your assumptions.

First is that people do not, and in some cases cannot, slow down in proportion to stopping distance. For instance, many freeways have posted minimum speeds that are typically 60-80% of the maximum; typical in this area is a 55mph minimum with a 65-75mph maximum, which applies to wet, snowy, and icy roads. Likewise, icy city streets never have traffic slowing from 35mph to under 5mph, as you suggest it would to maintain the same time to collision/stop. What does happen is that following distances increase significantly, and the rate of change of the vehicle is significantly slowed by the driver (in both turns and speeds) to manage the ballistics problem and better match the available traction. This also significantly increases the mandate that the driver practice enhanced defensive driving, by increasing the range of objects to be monitored, and fully expect that some vehicles will run (or slide through) a light that has just changed.

The array of narrower field-of-view cameras does remove less important edges from the distance problem, but it doesn’t solve the problem in areas where turns and grade changes exceed the extremely narrow field of view you suggest above.

More important is that to match the defensive driving skills of a human at 75mph on highways that are not straight, it really is necessary to have optics with a similar resolution and sensitivity to motion detection as the human eye. This skill is absolutely necessary every day for drivers, and includes looking for stopped cars 2,000ft away on the far side of a freeway interchange that will become a near-blind corner later, because of turn radius and change in grade. In mountain areas it requires looking for visible traffic and animals on the road ahead of you that are above, below, and to the sides of your current track, because of corners. This is especially true at night in rural areas because of deer, elk, moose, and livestock.

By the time you have added enough narrow-view cameras to fill in the coverage, you are back at nearly the same number of pixels, but have increased the problem by having to correlate pixels at the edges of adjacent cameras to detect objects shared across the fields of view of two or four cameras. If not done well, there will be blind spots along those edges, coinciding with turn radius and grade-change radius, that a human doesn’t have.
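
The pixel-budget point can be made concrete: at a fixed angular resolution, the total pixel count is set by how many degrees have to be covered, not by how many cameras the coverage is split across. A quick illustration under that assumption (figures are illustrative only):

```python
# At a fixed angular resolution (pixels per degree), the total horizontal pixel
# budget depends only on the total field of view covered, not on how many
# cameras it is split across. The numbers below are illustrative only.

target_px_per_deg = 85   # e.g. the long-range resolution from the reply above
coverage_deg = 90        # total horizontal field that still has to be covered

for cameras in (1, 2, 4):
    per_camera_fov = coverage_deg / cameras
    per_camera_px = per_camera_fov * target_px_per_deg
    print(f"{cameras} camera(s) x {per_camera_fov:.1f} deg: "
          f"{per_camera_px:.0f} px each, {cameras * per_camera_px:.0f} px total")
```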

