Wednesday, May 16, 2018

My bet on "Black Box"

This is an educated guess; I readily admit that I could be very wrong. It's just dot-connecting from publicly available information.

This is what I suspect and why.


In the fall we were told about the "Black Box" project, and that once development was complete, we would be able to sell the results to anyone.

If someone is paying for development, it's exceptionally odd that they would allow anyone else to use what they paid to develop -- unless, of course, they would benefit if anyone else used it.

So who would benefit from anyone's use of it? Companies whose services run on a wide variety of devices.

There are several common operating systems, stationary and mobile: Windows, Linux, iOS, and Android (Google).

What services are likely to be used on all of those operating systems, and are "Tier 1"?

Google
Facebook
Amazon

So, that's my short list. 

It doesn't matter what operating system you're using... people will still use all three of these. You would connect to Google if you used an Amazon device, or to Amazon if you used a Google device. Same for Facebook. No matter who produces the device, you would still use the other services -- and in my mind, that's the only reason you'd pay to develop some hardware and then allow (and encourage) your development partner to sell it to anyone and everyone they want to.

If you add other system software to the list, you add Microsoft. Apple is also doing some interesting things with AR and sharing it; they're opening up.

So Microsoft and Apple are also contenders, but less so, in my mind, than the other three.

So, go back and look at the fourth-quarter conference call (delivered during Q1).

What are all of those companies (all five) working on?

Smart homes -- the description delivered during the conference call sounds exactly like that. Review the question-and-answer portion of that call:

MicroVision Q1 2018 Conference Call
Perry Mulligan:

It is. So the capabilities I'm describing in this perceptive element exist within the context of the 1-meter, 1.5-meter interactive display, the 10-meter display and the 30-meter display and perhaps the -- sorry, not display, LiDAR. Perhaps the best way to think of these, Henry, is to think of them as having sort of different levels of vocabulary. The things that the 1-meter display -- or the 1-meter LiDAR -- will have to recognize will be a relatively small number of things -- gestures: point, touch, compression, squeezing the picture, flipping the page. With the 10-meter LiDAR, you can see how the number of things that would have to be recognized will increase for the device to be able to send a message that says it's your child walking towards an open door versus your dog running through a dog door -- that would be an example of how those differences play out. Maybe it's you walking down the hall past the bookcase, so don't turn the lights on for the bookshelf, or your wife walking towards the bookcase to get a book, so turn the lights on there and illuminate it. So those -- you can see that the language, or the vocabulary perhaps, of the device would increase, and then within the automotive space it would increase again.

...think of our display engine embedded in your voice-only device. So that as you shave in the morning, not only do you listen to the news, you see it displayed on the washroom wall, and that becomes a little more meaningful experience. As you walk down the hall towards the kitchen, our sensing device knows it's you that's walking down the hall. It adjusts the coffee and turns the lights on appropriately. And then an interactive display that's invisible, but when you call it up, it comes out as an Alexa-type device or something of that nature that allows you to interact with it because of the sensing capabilities, gesture recognition, and then disappears when it's not required. So we really see this as sort of a suite of solutions that helps AI platforms with their user interface.
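
To make that "vocabulary" idea concrete, here is a minimal sketch of the architecture as I understand it -- the event names, sensor classes, and hub logic are entirely my own invention for illustration, not anything MicroVision has published. The key point is that the sensor classifies locally and sends a short message, and the home only ever acts on messages:

```python
from dataclasses import dataclass

# Hypothetical event "vocabularies" -- the set of things each class of
# sensor recognizes grows with its range, as described in the call.
VOCABULARY = {
    "1m_interactive_display": {"point", "touch", "squeeze", "flip_page"},
    "10m_home_lidar": {"child_near_open_door", "dog_moving",
                       "person_passing_bookcase", "person_approaching_bookcase"},
    "30m_automotive_lidar": {"pedestrian", "cyclist", "vehicle"},
}

@dataclass
class SensorMessage:
    """A compact message sent instead of a raw point cloud."""
    sensor: str   # which class of sensor produced it
    event: str    # must come from that sensor's vocabulary
    subject: str  # e.g., which household member was recognized

def handle(msg: SensorMessage) -> str:
    """Toy hub logic: the home acts on words, never on raw points."""
    if msg.event not in VOCABULARY[msg.sensor]:
        return "ignored: outside this sensor's vocabulary"
    if msg.event == "child_near_open_door":
        return "alert: child walking toward an open door"
    if msg.event == "person_approaching_bookcase":
        return f"turn on the bookcase light for {msg.subject}"
    return "no action"

print(handle(SensorMessage("10m_home_lidar", "person_approaching_bookcase", "spouse")))
# -> turn on the bookcase light for spouse
```

Each jump in range just means a bigger dictionary of recognizable events; the interface to the rest of the home stays the same.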

So, look at what is happening with these companies:

Google: Integrating Smart Home Devices -- clearly designed so that people can integrate Google Smart Home into OTHER manufacturers' devices. It is a thorough explanation of how to do it (see the sketch after the link below).

GOOGLE SMART HOME
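
For context on what that integration looks like: Google's smart home program has a third-party device maker answer a few JSON intents (SYNC, QUERY, EXECUTE) from Google's servers. Below is a rough sketch of the SYNC step based on my reading of the public docs -- the device and IDs are made up, and this is illustrative, not a working Action:

```python
# Rough sketch of a Google smart home SYNC response, per the public
# developer docs. The device and identifiers are hypothetical examples.
def on_sync(request_id: str) -> dict:
    """Tell Google which devices this maker exposes and what they can do."""
    return {
        "requestId": request_id,
        "payload": {
            "agentUserId": "user-123",  # hypothetical account identifier
            "devices": [{
                "id": "hallway-light-1",
                "type": "action.devices.types.LIGHT",
                "traits": ["action.devices.traits.OnOff"],
                "name": {"name": "Hallway light"},
                "willReportState": True,
            }],
        },
    }
```

The relevance to the thesis: any manufacturer can expose its hardware to Google this way, which is exactly the "sell the results to anyone" model.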

Amazon: Connect Your Devices to Alexa. At CES I had a very interesting conversation with one of the people doing Amazon Alexa certification. She reported that anyone can make a device that uses Alexa, but they must submit it to be an "approved" Alexa-using device and sell it on Amazon.

Facebook: Portal. Remember, they had a privacy snafu and delayed the announcement.

Visit an Amazon Smart Home (It's on my near-term agenda)

Microsoft: Cortana Smart Home


Apple HomeKit

Amazon Smart Home Devices (clearly willing to work with anyone)
USA TODAY
C|Net

Hints about "machine learning"

...there are solutions out there today that do 3D scanning, perhaps as an example, for facial recognition. They require high compute energy and use approximately 30,000 points to do that calculation. Our range of solutions will provide between 5 million and 20 million points per second of resolution in the 10-meter space. So the density of the information we have at the sensor allows us to make simple messaging analytics, or messaging content, that enables users to do so much more with the device than simply trying to plug them with this plethora of data. It is almost diametrically opposed to the way most entities are solving sensing applications today. Almost everybody is trying diligently to get more information from the sensor, pass it down the pipe to a centralized processor that allows it to do a calculation and figure out what's going on. We have so much information at the sensor, we have the luxury of sending messaging, which just makes it much easier for the entire system to be responsive. And it would be a shame not to capture that.
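
The arithmetic in that quote is worth a back-of-the-envelope pass. Assuming something like 8 bytes per point (my assumption; the call gives no figure), streaming the raw cloud versus sending one recognized event looks like this:

```python
# Back-of-the-envelope: raw point-cloud streaming vs. sensor-side
# messaging, using the 5M-20M points/sec figures from the call.
# The 8 bytes per point is my own assumption for illustration.
BYTES_PER_POINT = 8

for label, points_per_sec in [("low end", 5_000_000), ("high end", 20_000_000)]:
    mb_per_sec = points_per_sec * BYTES_PER_POINT / 1_000_000
    print(f"{label}: {points_per_sec:,} pts/s -> {mb_per_sec:.0f} MB/s of raw data")

# A recognized event ("person_approaching_bookcase") is a few dozen bytes.
print("vs. one event message: ~32 bytes")
# low end: 5,000,000 pts/s -> 40 MB/s of raw data
# high end: 20,000,000 pts/s -> 160 MB/s of raw data
```

Even the low end is several orders of magnitude more data than a message, which is the "diametrically opposed" point Mulligan is making.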
