Wednesday, March 28, 2018

"It wasn't the LiDAR"

I don't know enough about the specifics of the case to speak definitively, but Velodyne is probably right when they say it wasn't their LiDAR -- it was doing what it was supposed to do.

However, in a way, it was -- because the software can only process the data it collects, and it seems pretty clear that there wasn't enough data for the software to make the decision to stop in time.

The Velodyne LiDAR is expensive. (Too expensive to put multiple units on a car -- actually more expensive than the car.)

Having a number of inexpensive LiDAR units positioned around the vehicle, with dense and directable point clouds -- units that can even recognize facial features -- would make things a lot safer, and that's what the MVIS LiDAR can do.

Whatever the case, driverless cars are coming.


To be totally fair, I've seen the video of that crash. I suspect the person who got hit was suicidal, and I'd be very surprised if any human driver (even one who was paying attention) would have been able to avoid hitting that pedestrian.
Velodyne Lidar Sample:

VeloView: LiDAR Viewer for Velodyne HDL from Kitware on Vimeo.

Velodyne LiDAR sample. (Pretty clearly enough to detect a moving bicycle, but not enough to identify it -- and maybe I'm wrong about that.)

Microvision  (Look at Slide 27)

5.5 - 16.5 million points per second. "Ability to resolve small features."
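To put those rates in perspective, here's a back-of-envelope sketch of how many points the software gets per frame. The 5.5 - 16.5 million points per second range comes from the MicroVision slide; the Velodyne figure (~1.3 million points per second for an HDL-64E) and the 10 Hz frame rate are my own assumptions, not from the post.

```python
# Back-of-envelope comparison of per-frame point density.
# Assumed values (not from the post): 10 Hz frame rate, and
# ~1.3 M pts/s for a Velodyne HDL-64E. The 5.5-16.5 M pts/s
# range is from MicroVision's slide 27.
FRAME_RATE_HZ = 10

def points_per_frame(points_per_second, frame_rate=FRAME_RATE_HZ):
    """Points the perception software has to work with in a single frame."""
    return points_per_second / frame_rate

velodyne = points_per_frame(1_300_000)
mvis_low = points_per_frame(5_500_000)
mvis_high = points_per_frame(16_500_000)

print(f"Velodyne HDL-64E (assumed): ~{velodyne:,.0f} points/frame")
print(f"MVIS (low end):             ~{mvis_low:,.0f} points/frame")
print(f"MVIS (high end):            ~{mvis_high:,.0f} points/frame")
```

Even at the low end, that's several times the per-frame density -- which is the difference between detecting "something moving" and resolving what it is.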

From Recent MVIS CC

"So the capabilities I'm describing in this perceptive element exist within the context of the 1-meter, 1.5-meter interactive display, the 10-meter display and the 30-meter display and perhaps the -- sorry, not display, LiDAR. Perhaps the best way to think of these, Henry, is to think of them as having sort of different levels of vocabulary. The things that the 1 meter display -- or the 1 meter LiDAR will have to recognize will be a relatively small number of things, gestures, point, touch, compression, squeezing the picture, flipping the page. And the 10-meter LiDAR, you can see how that number of things that would have to be recognized will increase for the device to be able to send a message that says it's your child walking towards an open door versus your dog running through a dog port would be an example of how those differences. Maybe it's you walking down the hall past the bookcase, so don't turn the lights on for the bookshelf, or your wife walking towards the bookcase to get a book, go turn the lights on there and illustrate it. So those -- you can see that the language or the vocabulary perhaps of the device would increase and then within the automotive space would increase again."

LiDAR this good can recognize a pedestrian and stop the car.

Velodyne point cloud



They're also still talking about expense in $4,000 - $75,000 increments. Microvision could deploy multiple units at those prices, with a significantly better point cloud.

Uber Cuts back on safety sensors

"In scaling back to a single lidar on the Volvo, Uber introduced a blind zone around the perimeter of the SUV that cannot fully detect pedestrians, according to interviews with former employees and Raj Rajkumar, the head of Carnegie Mellon University’s transportation center who has been working on self-driving technology for over a decade."
