Saturday, September 29, 2018

Another smoking gun....

Putting this up before I get my translations -- but this looks like all of our verticals (possibly +1) in the same place, with a prime suspect for our 10 million exclusive projection agreement. More support for the Sharp/Foxconn theory.

My translations from Japanese are a little slow.

I'd ask any readers fluent in Japanese to translate a little. I'm particularly interested in the particulars of the katakana on item 3 (it looks like interactive projection, and projection alone), the kanji by the camera, and the 3D item at the bottom right.

Thank you Joe


シャープ 半導体レーザのご提案 (Sharp: semiconductor laser proposal)




Friday, September 28, 2018

Nice stuff from Microsoft

This has been sitting in my un-done file because I've been slammed lately. Unfortunately, it has to stay a little less done than I'd like it to be.

Background info and conclusion, in photographic form at the top.

Have a nice weekend.

Thanks Joe






MSPowerUser: Method for getting 4K in near-eye displays


However, current MEMS technology places an upper limit on mirror scan rates, in turn limiting display resolution. As an example, a 27 kHz horizontal scan rate combined with a 60 Hz vertical scan rate may yield a vertical resolution of 720p. Significantly higher vertical resolutions (e.g., 1440p, 2160p) may be desired, particularly for near-eye display implementations, where 720p and similar vertical resolutions may appear blurry and low-resolution. While an increase in the horizontal and/or vertical scan rate would increase display resolution, the former may be technologically infeasible while the latter increases power consumption. Further, high scan rates may at least partially constrain mirror scan angle and aperture, where larger values are also desired. Additionally, supporting higher resolution also may require a larger mirror size due to the diffraction limit associated with smaller “pixel” sizes. The use of such a larger mirror may further increase the difficulties in achieving higher resolutions with scanning displays, as the larger mirror leads to a lower scanning frequency.
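Just to put rough numbers on that 720p example (my own back-of-envelope with an assumed blanking allowance, not figures from the patent): a bidirectional fast axis paints two lines per mirror period, and a chunk of each frame is lost to turnaround and blanking, which is roughly how a 27 kHz horizontal scan at 60 Hz lands near 720 visible lines.

```python
# Back-of-envelope sketch (my own arithmetic, not from the patent):
# a bidirectional fast axis paints two lines per mirror period, and an
# assumed ~20% of the frame is lost to blanking/turnaround.
H_SCAN_HZ = 27_000     # horizontal (fast-axis) mirror frequency
V_REFRESH_HZ = 60      # vertical refresh rate
ACTIVE_FRACTION = 0.8  # assumed usable share of the frame

lines = 2 * H_SCAN_HZ / V_REFRESH_HZ * ACTIVE_FRACTION
print(int(lines))      # -> 720, roughly the 720p vertical resolution cited
```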

*********
The technology is applicable to a variety of output methods, including waveguides such as those in HoloLens, but also laser projection TVs and other systems that use lasers and MEMS.

From the patent
[0051] Another example provides a method of displaying an image comprising directing light from two or more offset lasers toward a scanning mirror system, and scanning light from the two or more offset lasers in a first direction at a higher frequency and in a second direction at a lower frequency to thereby scan the laser light in an interlaced pattern and form the image. In such an example, the method alternatively or additionally may comprise mapping a gaze direction determined via an eye tracking sensor to a region in the image, and adjusting one or more of the scan rate in the second direction and the phase offset based on the region in the image. In such an example, scanning the light from the two or more offset lasers in the first direction alternatively or additionally may comprise scanning the light in the first direction at a frequency of 27 kHz to 35 kHz. In such an example, scanning the light from the two or more offset lasers alternatively or additionally may comprise scanning the light to form the image at a resolution between 1440p and 2160p. 
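Here's a toy illustration of that interlaced, offset-laser pattern (my own sketch, not the patent's figures): two vertically offset lasers paint two sets of rows per sweep, and a second, phase-shifted field fills in the rows between them, so one frame covers four times the rows a single laser would hit in a single field.

```python
# Toy illustration (my own sketch, not the patent's figures): two vertically
# offset lasers plus a two-field interlace cover 4x the rows that a single
# laser would hit in a single field.
SWEEPS_PER_FIELD = 4                  # kept tiny so the pattern stays readable
TOTAL_ROWS = 4 * SWEEPS_PER_FIELD     # 2 lasers x 2 interlaced fields

covered = set()
for field in (0, 1):                  # field 1 is phase-shifted down by one row
    rows_a = [4 * i + field for i in range(SWEEPS_PER_FIELD)]      # laser A
    rows_b = [4 * i + field + 2 for i in range(SWEEPS_PER_FIELD)]  # laser B, offset two rows
    print(f"field {field}: laser A -> {rows_a}  laser B -> {rows_b}")
    covered.update(rows_a + rows_b)

assert covered == set(range(TOTAL_ROWS))  # every row is hit once per full frame
```

By the same back-of-envelope math as above, doubling the painted lines with a second laser takes a 27 kHz fast axis from roughly 720 to roughly 1440 lines, and pushing the mirror toward 35 kHz (or adding more offset lasers, per the "two or more" language) is what gets you toward the 2160p end of the claimed range.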


From Free Patents online
Embodiments of the present technology using the beam scanning assembly 100 described above will now be described with reference to the views of FIGS. 3-15 and the flowchart of FIG. 16. FIGS. 3a, 3b and 3c illustrate a structure and operation of a first embodiment of the MEMS laser scanner 200 according to the present technology. FIG. 3a shows image light incident on the optical element 102 at two different times. This figure illustrates the zero order diffracted light present since the diffraction gratings are not 100% efficient. This light does not contribute to the final image and would appear as a ghost so therefore some mechanism like a blocking aperture would be used to block the light from reaching the subsequent components in the optical system. FIGS. 3b and 3c show image light incident on the optical element 102 at two different times. These figures illustrate the formation of two separate fields of view by diffracting the display light onto the MEMS mirror 168 by two separate Bragg polarization gratings 170 and 171 as explained below. The image light is generated by a display engine 140 which emits image light in a step 300 that is modulated on a pixel-by-pixel basis by the controller 124. In embodiments, the display engine 140 may be a commercially available assembly, such as for example the PicoP™ display engine from Microvision, Inc. of Redmond, Wash.
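The part I find most interesting is the two Bragg polarization gratings (170 and 171) forming two separate fields of view off the same mirror. Here's a minimal sketch of how I read that (my own toy model with assumed angles, not numbers from the patent): each polarization state gets diffracted into its own angular sub-field, so the stitched field of view is roughly twice what the mirror's mechanical scan range alone would give.

```python
# Toy model (my own reading of the excerpt, assumed angles, not the patent's
# numbers): two polarization gratings steer s- and p-polarized light into
# adjacent angular sub-fields scanned by the same MEMS mirror. Zero-order
# (undiffracted) light is assumed blocked by an aperture, as described above.
MIRROR_HALF_SWEEP_DEG = 15.0   # assumed optical half-angle from the mirror alone

def route(target_angle_deg):
    """Pick a grating/polarization and mirror angle for a target field angle."""
    if target_angle_deg < 0:
        # left sub-field, centered at -15 deg (say, via grating 170)
        return "s-pol / grating 170", target_angle_deg + MIRROR_HALF_SWEEP_DEG
    # right sub-field, centered at +15 deg (say, via grating 171)
    return "p-pol / grating 171", target_angle_deg - MIRROR_HALF_SWEEP_DEG

for angle in (-25.0, -5.0, 5.0, 25.0):       # a ~60 degree stitched FOV
    grating, mirror_angle = route(angle)
    print(f"{angle:+5.1f} deg -> {grating}, mirror at {mirror_angle:+5.1f} deg")
```

That's the sense in which the FOV can grow without the mirror having to oscillate over a larger range.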

Tuesday, September 25, 2018

Sometimes

Sometimes you have to walk away from buying custom tailoring, take the money you save and buy more MVIS stock....

Great video about AR / Apple

MVIS has the best near-eye display.

At the end there's a fairly obnoxious embedded Audible ad, and that's when the interesting stuff is done.

Monday, September 24, 2018

Magic Leap, Hololens & the US Army


Often big tech advances (like the internet itself) begin as a military project, then go consumer.
Near-eye display is likely little different; MicroVision was working on this stuff a long time ago.



HindustanTimes


Magic Leap Inc. is pushing to land a contract with the U.S. Army to build augmented-reality devices for soldiers to use on combat missions, according to government documents and interviews with people familiar with the process. The contract, which could eventually lead to the military purchasing over 100,000 headsets as part of a program whose total cost could exceed $500 million, is intended to “increase lethality by enhancing the ability to detect, decide and engage before the enemy,” according to an Army description of the program. A large government contract could alter the course of the highest-profile startup working on augmented reality, at a time when prospects to produce a consumer device remain uncertain.

Building tools to make soldiers more deadly is a far cry from the nascent consumer market for augmented reality. But the army’s program has also drawn interest from Microsoft Corp., whose HoloLens is Magic Leap’s main rival. The commercial-grade versions of both devices still face significant technological hurdles, and it’s not clear the companies can fulfill the army’s technical requirements. If recent history is any guide, a large military contract is also sure to be controversial within the companies.

Magic Leap declined to comment. Microsoft confirmed it had attended a meeting in which officials from the Army met with potential bidders. The Army’s Contracting Command is currently reviewing proposals, said Ed Worley, a spokesman.

Friday, September 21, 2018

First Holographic call with 5G


Some of the other products that appear in the first video are really interesting.

The call isn't that cool, until you realize that the soccer player isn't actually there.


Facebook Portal

The timetable on this is interesting. I had expected it to be announced in the spring. Apparently they added some features.

The original reporting apparently comes from Cheddar... still watching.


Cheddar

Mashable

c|net

Thursday, September 20, 2018

Amazon Event Today -- UPDATES

No one knows anything yet, so it's ALL speculation (just FYI).

Live Blog Link (c|net)


Wired

Engadget

The Verge



Amazon's Alexa device event today: Here's what you need to know from CNBC


Alexa Hunches "Predictive AI.."

.... well, doesn't that sound recently familiar?

Alexa wall clock (shows timers)


Alexa "guard" devices will have "guard mode"
Introducing "Echo Guard." Say something like "Alea, I'm leaving," and she'll move your Echoes into Guard mode. You'll get a notification if they hear breaking glass or the sound of an alarm.

Will also turn smart lights on and off to make it look like you're there.

Launching "Smart Screen SDK" [software development kit] Can add echo show like functionality to anything with a display. (I'll be looking for this, this will be good.)





Lenovo is bringing Alexa to their tablets.

Autos: BMW, Ford, Toyota, Buick, Chevrolet, Infiniti, Hyundai, Nissan, GMC, SEAT, Cadillac, Dodge, Mercedes, Jeep, Ram, Genesis, Mitsubishi, Kia


Wednesday, September 19, 2018

Bezos says some very interesting things...

Very interesting things early on.

Particularly how he decided to sell books.

Look at how many screens there are around you.

Think about how many more there will be when they're tiny, interactive, energy efficient and inexpensive.


Then buy a little more MVIS.

Tuesday, September 18, 2018

Microsoft, Chevron, Mixed Reality

Microsoft betting a lot on the space.

Thanks Ben


Forbes

Chevron is testing two to three dozen use cases for HoloLens. “It signals a change in how we work, which to me is a pretty big deal,” says Bill Braun, Chevron's chief information officer. While Chevron anticipates saving a lot of money from the switch—each HoloLens costs less than a round-trip flight for an expert, both executives note—the numbers aren't significant enough for Chevron to update analysts about financial impact, Braun says. Moving away from having people in person at all times, however, represents a “pretty substantial change,” he argues, and one that will “build the base platform where the next features and enhancements we make will be incrementally less expensive.” It also puts edge computing devices in the hands of field workers in the form of HoloLens, a shift in how technology is used in the field, Nadella notes.
Chevron’s adoption of HoloLens in production wasn’t a direct consequence of its wider-ranging cloud partnership with Microsoft, the company says, but did help accelerate adoption through the sharing of engineering expertise and Chevron’s familiarity with Azure and Dynamics 365, both of which are used by HoloLens’ apps. Chevron was intrigued by HoloLens before the cloud partnership, says Moore. “There isn’t really any competition in what we consider true augmented reality,” he says.

Northland Capital Call....

Call with Northland Capital yesterday

Heard some excellent stuff here.

I'm liking the way they're talking about markets, upgrade cycles, making money for customers, price points.... and product volumes. (7 figure and 8 figure product volumes.)

That, coupled with acknowledging that there are a limited number of companies that do this... and we're working with all of them. (Specific mentions of Amazon, Google and Samsung's "Bixby.")


To make it easier to find....

What was mentioned | Who mentioned it | Time
Amazon (Echo) | PM | 3:23
Amazon, Google, Samsung (Bixby) | SS | 3:49
Amazon voice services (just the beginning) | SS | 4:46
Nvidia | SS | 12:50
Alexa Skills | SS | 14:00
Amazon business | SS | 14:30
"We're talking to all of them" | PM | 19:50
iRobot | ML | 22:15
iRobot (cloud mapping) | SS | 23:50
7 & 8 figure volumes | PM | 26:35

An edge computer. Before the computer can infer something, the computers / voice services need more input: visual and touch. A subset of machine learning lives in the modules, which saves sending lots of stuff up the connection.
Point Cloud
Predictive travel (relatively low power analytics at the device)
Automotive environment / Latency advantage
Lean, launchable, right price point

We can now detect motion, and focus the engine on the motion.

Is the platform we're creating scalable?

Seamless interaction between the user and projection.

If you're a busy person, can you order $100 in groceries in 4 minutes?
Can you order an Uber without seeing where it is?

Smart speakers 50 million installed / Benefits of display

Those companies have invested hugely in this technology; this is how they will monetize it.

Competing solutions, better with lower cost

Component costs: "Any components that are too high priced?"

Leverages standard processes and components, and is working with a world-class supplier.

7-figure and 8-figure volumes are how they define success.

Saturday, September 15, 2018

Google Show as Kitchen Assistant

The best part of this isn't the content so much as where it is: mainstream click-bait.

We're going to have a lot of fun when everyone starts talking about it. 

Pure Wow



Interactive Coasters



Thursday, September 13, 2018

Does new tech catch on?

When something is new, it's often difficult to know if it will catch on and stay caught on. 

You could "facetime" in 1964, but it didn't catch on. (for you young people out there, you still had to sit by the phone, because it was attached to the wall either directly or by a cord.) Sometimes technology will wait a long time before the timing is right. (Picture phones had a problem, because there had to be at least two of them before they made any sense... like Fax machines.)


Eighteen years ago (that long?) the Segway arrived to a lot of fanfare, with claims that it would completely change the way people travel. There was a lot of hype; when it was introduced it got popular and then faded quickly. It wasn't quite as good as people thought. It still gets some use, and can be really useful, but it's not the huge solution to the transport problem people thought it would be.

Voice is one of those things we'd be concerned about. It's an extremely recent addition, technologically speaking, and if it were going to fade it would be showing signs of fading by now.


But it's still increasing. That's good for us -- because screens are better for delivering information after using voice to ask for it. And people are talking to their phones, and asking them for information at an increasing rate.

As far as projection for screens? Any time you can make something smaller, more portable, and just as good, with additional features... you have no worries, and that's what laser projection is going to do.

When voice is combined with a screen, the number of things people will commonly ask for will increase, and that will be a significant increase in the functionality of voice.

Thanks Ron

CMO.com

Fad or fab? When it comes to voice devices and services, survey says: Fab.
Indeed, consumers’ use of voice services is on the rise, according to new research by Adobe Analytics, which surveyed over 1,000 U.S. consumers. The study found the most common voice activities are asking for music (70%) and the weather forecast (64%) via smart speakers. Other popular activities include asking fun questions (53%), online search (47%), checking the news (46%), basic research/confirming info (35%), and asking directions (34%). 
The study also uncovered some newer voice-based tasks. Thirty-six percent of consumers surveyed said they use voice to make a call, 31% do so for smart-home commands, 30% for shopping/ordering items, 17% for food delivery/takeout, and 16% for flight/hotel research.
“Technology trends come and go, but we think voice is here to stay,” said Colin Morris, director of product management for Adobe Analytics. (Adobe is CMO.com’s parent company.) “Consumers continue to embrace voice as a means to engage their devices and the Internet. It’s a trend that has fundamentally changed the face of computing.”
Smart speaker ownership, the study found, is driving voice usage. Thirty-two percent of consumers reported owning a smart speaker in August 2018, compared with 28% in January 2018—a 14% increase in just a few months. The growth is considerable given that 79% of smart speaker sales occur in Q4, Morris pointed out.
Additionally, use of voice assistants is up, with 76% of smart speaker owners citing an increase in the past year. Seventy-one percent of smart speaker owners reported using them at least daily, and 44% said “multiple times a day.” Only 8% of owners reported almost never using them.

Wednesday, September 12, 2018

Friday, September 7, 2018

Voice User Interface

A week ago I attended a meetup that featured an Amazon engineer (Echo Show focus... yes, it was fascinating). The presentation she gave was about how they program the device for speech recognition.

It's WAY more involved (tedious?) than programming for visuals. When you're writing a typical program you can easily lay out all of the options that you will allow for a user, and they must choose one of them. The way all of us speak varies a lot, and that makes programming for voice recognition particularly challenging.

The programming example she used was a "skill" for choosing a dog, and Alexa asks the user "what size dog do you want?"

With a visual program, you could handle this in a variety of ways that make getting a usable answer easy: you show options (small, medium, large), or you could even use a sliding scale from "tiny" to "huge."

The engineer described the difficulty of programming for voice... so they want a big dog... how do you process all the possible words for "big"? And if the user is doing a good job of "conversing," you're more likely to get something strange.

{big, large, a hundred pounds, colossal, enormous, gigantic, huge, immense, massive, substantial, vast, a whale of a, ginormous, hulking, humongous, jumbo, mammoth, the size of a baby elephant, super colossal}

... and if the system is to respond, each option must be programmed into the skill...

Which makes me quite sure that visuals need to be integrated into even the best voice programmed devices -- trying to program for all of the nuances of voice communication would be extremely difficult. 
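To make that concrete, here's a minimal sketch in Python (my own illustration, not how Amazon's tooling actually works; in a real skill these would be slot values and synonyms in the interaction model): every phrasing the developer can anticipate has to be mapped back to one canonical value before the skill can do anything with it, and anything outside the list forces a re-prompt.

```python
# Minimal sketch (not Amazon's actual tooling): every phrasing we anticipate
# has to be mapped back to one canonical slot value before the skill can act.
SIZE_SYNONYMS = {
    "big": ["big", "large", "a hundred pounds", "colossal", "enormous",
            "gigantic", "huge", "immense", "massive", "substantial", "vast",
            "a whale of a", "ginormous", "hulking", "humongous", "jumbo",
            "mammoth", "the size of a baby elephant", "super colossal"],
    "small": ["small", "little", "tiny", "pocket-sized", "teacup"],  # illustrative
}

def resolve_size(utterance: str):
    """Return the canonical size the user's phrasing maps to, if any."""
    text = utterance.lower()
    for canonical, phrases in SIZE_SYNONYMS.items():
        if any(phrase in text for phrase in phrases):
            return canonical
    return None   # unrecognized -> the skill has to re-prompt

print(resolve_size("I want a dog the size of a baby elephant"))   # -> "big"
print(resolve_size("something ginormous"))                        # -> "big"
print(resolve_size("a dog that fits in a kayak"))                 # -> None
```

A screen sidesteps most of that: show three buttons and the ambiguity is gone.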

We make requests, build relationships, and get some information through speech.

We get most of our information visually. I don't ask a smart speaker for the weekly weather report, because listening to the answer necessarily takes more time -- I would use speech to request the weekly forecast if I could then SEE it, because making the request by voice saves me time, and seeing the answer saves me time.

Voice plus display will save time, and be more efficient...


Medium.com

VUI allows for hands free, efficient interactions that are more ‘human’ in nature than any other form of user interface. “Speech is the fundamental means of human communication,” writes Clifford Nass, Stanford researcher and co-author of Wired for Speech, “…all cultures persuade, inform and build relationships primarily through speech.” In order to create VUI systems that work, developers need to fully understand the intricacies of human communication. Consumers expect a certain level of fluency in human idiosyncrasies, as well as a more conversational tone from the bots and virtual assistants they’re interacting with on a near-daily basis.


Natural Language Processing — we’re not currently capable of developing a VUI with an inbuilt, natural and complex understanding of human communication, not yet. Regional accents, slang, conversational nuance, sarcasm… some humans struggle with these aspects of communication, so at this point can we really expect much more from a machine?

Visual feedback — including an element of visual feedback helps to reduce the level of frustration and confusion in users who aren’t sure whether or not the device is listening to or understanding what they’re saying. Alexa’s blue light ring, for example, visually communicates the device’s current status e.g. when the device is connecting to the WIFI network, whether or not ‘do not disturb mode’ has been activated, and when Alexa is getting ready to respond to a question…etc.

Thanks Carmen O!

Thursday, September 6, 2018

Hololens -- Larger field of View..... and an excellent description of Microvision Tech

So, if you've been hanging around here for any length of time, this isn't that new. We've seen these patents before. This patent makes my confidence quite high that MicroVision's Tech WILL be part of it. (It isn't yet.)

If you haven't checked out HoloLens in person, do it. When you see it you will realize that it is going to significantly impact the world, and will be tremendously popular. Particularly look at what Thyssenkrupp is doing with it.






The patent is for a “MEMS laser scanner having enlarged FOV”, and describes how a MEMS laser scanner can be used for a near-eye display that increases the FOV. It describes how “using light of different polarizations, the MEMS laser scanner is able to expand the FOV without increasing the range over which the mirror of the scanner oscillates”.
One of the names attached to the patent is Sihui He. As MSPowerUser points out, she is an optical engineer at Microsoft.
While the existence of this patent doesn’t necessarily mean that the next generation of HoloLens will definitely have a wider field of view, we’d be very surprised if Microsoft wasn’t working hard on improving the new HoloLens’ performance in that area.
We’ve recently heard rumors that Microsoft is working on a new HoloLens, codenamed Sydney, which could be released in 2019. These rumors also suggest that the next HoloLens will also be significantly cheaper than the original, which was another major criticism.





Samsung Smart Speaker Patent

I don't know if this was posted last year as well, but I suspect so.

So, also of note: there have been some issues with Bixby, but recent news is that Samsung may collaborate with Google on it.


It's also a reminder of how many moving parts there are for systems like this, and how important timing is -- the timing is very good right now.


Thank you R!

Patently Apple
Patently Apple posted a report on Wednesday titled "Samsung's President of Mobile Communications has confirmed that a Smart Speaker is in fact coming to Market." We first reported on this possibility back in early July. Samsung had also been granted design patents for a possible future smart speaker that we covered in a previous report. Then yesterday a utility patent from Samsung came to light that revealed a new radical smart speaker system in detail. The system includes a master unit designed to communicate with a series of secondary units spread throughout the home. The interactive smart speaker includes a video projector that's in sync with its digital assistant called 'Bixby.' The system is designed in a certain way to resemble the head and face of a robot, which enables the device to physically move when spoken to as well as redirect video responses or the playing of video content onto a nearby wall with its built-in projector.

***

Where there is an optimal distance range or distance limit between the projector and the projection surface, the device may be able to move itself (e.g., using wheels or rollers attached to the base) to the optimal position to display the image.

Furthermore, by using the sensor, the device can find a flat, blank surface for the large-screen display mode. This will prevent visual content from being displayed on artwork or any other surface that would not be suitable.

Wednesday, September 5, 2018

Presentation beginning very soon.... Webcast link

Webcast Link

Smart Speaker Demo

New Investor Presentation Link

Morning Behavior...


Often, there's something going on when we get big moves like this morning's. I suspect we're seeing some result of this, but I have no idea if there's any substance to the rumors.

I do strongly suspect that there actually IS something going on between Amazon and MicroVision (for a variety of reasons), but I see no trigger for these particular rumors.

From Stocktwits

(Interesting that this article appears today.)




Tuesday, September 4, 2018

Microvision Interactive Smart Speaker Demo

I'm thinking we'll hear more in October.

I recognize that speaker's voice....

And.... don't forget... at the distributor.... WPG Holdings



Microvision

Today’s global smart speaker market is rapidly expanding, up 187% year over year in Q2 2018. Many companies are seeking ways to differentiate their offerings to gain market share and increase monetization of their platforms since most users use only a few simple skills that are not tied to monetization.

Some companies are starting to add physical screens to their smart speakers to increase the consumer use of the VDA. These displays make it easier for the users to interact with VDAs, especially when there are multiple choices or lists. However, the industrial design and form factor suffer as these devices start to look and act just like large tablets. In many cases, the addition of a physical screen is not an option due to space, size and cost limitations.

MicroVision’s advanced interactive display technology can produce a large, on-demand, cost-effective interactive display without growing the form factor of a typical smart speaker. Very importantly, MicroVision’s technology provides the user with a natural way to interact with the VDA and is designed to increase the monetization opportunities for platform providers. The video below highlights an example use-case with a touch interface enabled by MicroVision’s Prototype Interactive Display Engine.