Many seem to look at this in the rear-view mirror -- which is fine if the company is huge and stable, but not a good way to look at a bleeding-edge company that is transitioning.
They pretty much come out and say they're going to have orders by mid-year; they said in the last conference call that "we may sell this to anyone." I have two strong suspects out of a lineup of six. (Answer the question: "Who is a major technology company that isn't primarily a hardware company?" They have to be able to get a return on this project through other companies' products.)
On Ragentek... a delay between the first production run and the second doesn't surprise me at all.
Impatient? Not maturing quickly enough? Sorry.
(This can happen with a company working with bleeding-edge technology.) I've made it pretty clear that I think this is a bleeding-edge company.
From CC Transcript & Recording (pulled quotes from transcript)
"We released development kits of our interactive display engine. The feedback we received was that the display needed to be brighter. We are implementing plans to produce a brighter engine, but that change also cost us some time."
Perry Mulligan
"I am encouraged by the progress we are making on our previously announced $24 million development contract with a Tier 1 technology company. As I mentioned in our opening remarks, we are on track with this project and currently expect this project to translate into product revenue in 2019. I am also excited by the interest that prospective Tier 1 customers have expressed about the products we demonstrated last month at CES in Las Vegas. We believe that successful follow on activity over the next six months could position us to reach revenue levels where we can begin to post profits during 2019."
Perry confirms MORE than one Tier 1 company:
Glenn Mattson
Okay, last thing, Perry. You spoke to building complete solutions that are more easily adaptable, so I guess the thing to take away from that is that the next round of orders, when they come, wouldn't be this kind of year-long development-agreement type arrangement; it would be more for products that can be built into ready products. Is that correct?
Perry Mulligan
Yes, the interpretation that you've reflected there is exactly what we are expecting, Glenn. What I would caution, though, is that as we work with each of these Tier 1s it becomes interesting to see how they want to make their solutions unique. We are trusting that the capabilities of the products we're providing enable them to unlock the value that we think they can achieve by using them in as close to standard form as possible, but I respect their wishes to have fit, form and function derivatives that allow them to go to market with something that looks a little different. So, translated loosely, it means I agree with your statement: we expect the commercialization of these things to require minimal or marginal amounts of additional development work, and hopefully the time to market becomes much more compressed.
Perry Mulligan (I listened to this again and adjusted it.)
It is. So the capability I am describing, the perceptive element, exists within the context of the 1-meter, 1.5-meter interactive display, the 10-meter display and the 30-meter display. Perhaps the best way to think of these, Henry, is to think of them as having sort of different levels of vocabulary.
The things that the 1-meter display -- sorry, the 1-meter LiDAR -- will have to recognize will be a relatively small number of things: gestures -- point, touch, compressions, squeezing the picture, flipping the page.
In the 10-meter LiDAR you can see how the number of things that would have to be recognized increases. The device has to be able to send the message that says it's your child walking towards an open door versus your dog running through a dog door, as an example of how those differences matter. It's you walking down the hall past the bookcase, so don't turn the lights on for the bookshelf, or your wife walking towards the bookcase to get a book, so turn the lights on there and illuminate it. So you can see that the language, or the vocabulary perhaps, of the device would increase, and then within the automotive space it would increase again.
Think about our display imaging embedded in your voice-only device, so that as you shave in the morning not only do you listen to the music, you see it displayed on the washroom wall, and that becomes a little bit more meaningful experience. As you walk down the hall towards the kitchen, the device knows it's you walking down that hall and adjusts the coffee and the lights appropriately. And then there's an interactive display that's invisible but, when you call it up, comes out of an Alexa-type device, so there is something of that nature that allows you to interact with it because of the sensing capability's recognition, and then it disappears when it's not required. So we really see this as sort of a suite of solutions that helps AI platforms with their user interface.
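To make the "vocabulary" idea concrete, here's a little sketch of my own -- a minimal illustration assuming an event-message design; none of these names or events come from the company, they're just how I picture a sensor that sends classified messages instead of raw data:

```python
# Hypothetical sketch of the "vocabulary" idea: each sensor class emits
# events from a small, range-appropriate set instead of raw point data.
# All names here are my own illustration, not the company's API.
from dataclasses import dataclass
from enum import Enum, auto


class ShortRangeEvent(Enum):
    """~1-meter interactive display: a handful of gesture events."""
    POINT = auto()
    TOUCH = auto()
    SQUEEZE = auto()
    PAGE_FLIP = auto()


class MidRangeEvent(Enum):
    """~10-meter LiDAR: a larger vocabulary of scene-level events."""
    CHILD_NEAR_OPEN_DOOR = auto()
    PET_AT_DOG_DOOR = auto()
    PERSON_PASSING_BOOKCASE = auto()
    PERSON_AT_BOOKCASE = auto()


@dataclass
class SensorMessage:
    """What the sensor transmits: a classified event, not a point cloud."""
    event: Enum
    confidence: float  # classifier confidence computed at the sensor


def handle(msg: SensorMessage) -> str:
    # The host system reacts to compact messages, so its logic stays simple.
    if msg.event is ShortRangeEvent.PAGE_FLIP:
        return "display: next page"
    if msg.event is MidRangeEvent.PERSON_AT_BOOKCASE and msg.confidence > 0.8:
        return "lights: illuminate bookcase"
    if msg.event is MidRangeEvent.CHILD_NEAR_OPEN_DOOR:
        return "alert: child approaching open door"
    return "no action"


print(handle(SensorMessage(MidRangeEvent.PERSON_AT_BOOKCASE, 0.92)))
```

The point of the sketch: the 1-meter device only ever needs a short list of gestures, while the 10-meter device needs a richer event set -- exactly the "levels of vocabulary" Perry describes.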
Recent Post:
At WPG
You can stop about halfway through...
Another pull quote of significance.
Perry Mulligan (I adjusted the transcript against the recording.)
What we've uncovered, Henry, as we look at the product is -- perhaps I can put this in context for you. There are solutions out there today that do 3D scanning, for example for facial recognition. They require high energy and use approximately 30,000 points to do that calculation. Our range of solutions will provide between 5 million and 20 million points per second of resolution in the 10-meter space. So the density of the information we have at the sensor allows us to send simple messaging, analytics, or messaging content that enables users to do so much more with the device, rather than simply flooding them with this plethora of data. It is almost diametrically opposed to the way those entities are solving sensing applications today: almost everybody is trying diligently to get more information from the sensor and pass it down the pipe to a centralized processor that allows it to do a calculation and figure out what's going on.
We have so much information at the sensor that we have the luxury of sending messaging, which just makes it much easier for the entire system to be responsive.
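Why does that point density push you toward messaging at the sensor instead of streaming raw data? Here's a rough back-of-envelope of my own -- the points-per-second range is from the call, but the bytes-per-point figure is purely my assumption:

```python
# Back-of-envelope: bandwidth needed to stream raw LiDAR points downstream.
# ASSUMPTION (mine, not from the call): each point carries x, y, z and
# intensity as 4-byte values.
BYTES_PER_POINT = 4 * 4

for pps in (5_000_000, 20_000_000):  # points/sec range quoted on the call
    raw_mbps = pps * BYTES_PER_POINT * 8 / 1e6  # convert bytes/s to Mbit/s
    print(f"{pps:,} pts/s -> ~{raw_mbps:,.0f} Mbit/s if streamed raw")
```

That works out to roughly 640 to 2,560 Mbit/s of raw data under my assumption, versus a few dozen bytes for a classified message like "child near an open door" -- which is why doing the recognition at the sensor and sending messages cuts downstream bandwidth and compute by orders of magnitude.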