On my way to watch "Ready Player One" yesterday, I walked through the mall and saw them advertising the heck out of a Sprint / Hulu connection. Very nice for a certain kind of new phone hardware.
Yes, Sprint’s Unlimited plan just got better! Only Sprint gives you the BEST price for Unlimited. That’s Unlimited streaming data, talk and text. And NOW it includes the TV you love, with the Hulu Limited Commercials plan. Your Sprint Unlimited plan also includes a 10GB mobile hotspot so you can watch on the go, where you go. You’ll be able to watch the most popular next-day TV, original series, thousands of hit shows, movies and more.
FB is one of my suspects for the BB project. There are others, but FB is a very interesting one.
Of course, the recent attention their use of data has attracted is probably not a positive, but I don't think it's much of a negative either. In the end, I think it will have little effect.
Just now, in my e-mail I received a notification from Microsoft. They are updating their agreements. (New Microsoft Services Agreement.) In there they say what you can do with their services, what you bought, what they owe you, what you agree not to do, that you can't reverse-engineer their software, etc. Lots of stuff to cover various backsides. This E-mail Arrived
As part of this services agreement -- right at the top -- is a Privacy Statement. Here's the FIRST section of that statement.
Microsoft collects data to operate effectively and provide you the best experiences with our products. You provide some of this data directly, such as when you create a Microsoft account, administer your organization’s licensing account, submit a search query to Bing, register for a Microsoft event, speak a voice command to Cortana, upload a document to OneDrive, purchase an MSDN subscription, sign up for Office 365, or contact us for support. We get some of it by recording how you interact with our products by, for example, using technologies like cookies, and receiving error reports or usage data from software running on your device.
We also obtain data from third parties. We protect data obtained from third parties according to the practices described in this statement, plus any additional restrictions imposed by the source of the data. These third-party sources vary over time, but have included:
Data brokers from which we purchase demographic data to supplement the data we collect.
Social networks when you grant permission to a Microsoft product to access your data on one or more networks.
Service providers that help us determine a location based on your IP address in order to customize certain products to your location.
Partners with which we offer co-branded services or engage in joint marketing activities, and
Publicly-available sources such as open government databases or other data in the public domain.
You have choices about the data we collect. When you are asked to provide personal data, you may decline. But if you choose not to provide data that is necessary to provide a product or feature, you may not be able to use that product or feature.
The data we collect depends on the context of your interactions with Microsoft, the choices you make, including your privacy settings, and the products and features you use. The data we collect can include the following:
Name and contact data. We collect your first and last name, email address, postal address, phone number, and other similar contact data.
Credentials. We collect passwords, password hints, and similar security information used for authentication and account access.
Demographic data. We collect data about you such as your age, gender, country, and preferred language.
Payment data. We collect data necessary to process your payment if you make purchases, such as your payment instrument number (such as a credit card number), and the security code associated with your payment instrument.
Device and Usage data. We collect data about your device and how you and your device interact with Microsoft and our products. For example, we collect:
Product use data. We collect data about the features you use, the items you purchase, and the web pages you visit. This data includes your voice and text search queries or commands to Bing, Cortana, and our chat bots. This also includes the settings you select and the software configurations you use most.
Device, connectivity and configuration data. We collect data about your device and the network you use to connect to our products. It includes data about the operating systems and other software installed on your device, including product keys. It also includes IP address, device identifiers (such as the IMEI number for phones), regional and language settings.
Error reports and performance data. We collect data about the performance of the products and any problems you experience with them. This data helps us to diagnose problems in the products you use, and to improve our products and provide solutions. Depending on your product and settings, error reports (sometimes called “crash dumps”) can include data such as the type or severity of the problem, details of the software or hardware related to an error, contents of files you were using when an error occurred, and data about other software on your device.
Troubleshooting and Help Data. When you engage Microsoft for troubleshooting and help, we collect data about you and your hardware, software, and other details related to the incident. Such data includes contact or authentication data, the content of your chats and other communications with Microsoft, data about the condition of the machine and the application when the fault occurred and during diagnostics, and system and registry data about software installations and hardware configurations.
Interests and favorites. We collect data about your interests and favorites, such as the teams you follow in a sports app, the programming languages you prefer, the stocks you track in a finance app, or the favorite cities you add to a weather app. In addition to those you explicitly provide, your interests and favorites may also be inferred or derived from other data we collect.
Contacts and relationships. We collect data about your contacts and relationships if you use a Microsoft product to manage contacts, for example Outlook.com, or to communicate or interact with other people or organizations, for example Visual Studio Team Services.
Location data. For products with location-enhanced features, we collect data about your location, which can be either precise or imprecise. Precise location data can be Global Navigation Satellite System (GNSS) data (e.g., GPS), as well as data identifying nearby cell towers and Wi-Fi hotspots, which we collect when you enable location-based products or features. Imprecise location data includes, for example, a location derived from your IP address or data that indicates where you are located with less precision, such as at a city or postal code level.
Content. We collect content of your files and communications when necessary to provide you with the products you use. For example, if you transmit a file using Skype to another Skype user, we need to collect the content of that file to display it to you and the other user as you direct. If you receive an email using Outlook.com, we need to collect the content of that email to deliver it to your inbox, display it to you, enable you to reply to it, and store it for you until you choose to delete it. Other data we collect to provide communication services to you include the:
subject line and body of an email,
text or other content of an instant message,
audio and video recording of a video message, and
audio recording and transcript of a voice message you receive or a text message you dictate.
Video. If you enter Microsoft Store locations or other facilities, or attend a Microsoft event, your image may be captured by our security cameras.
If you use Spend, at your direction, we may also collect payment card information, receipt data, or financial transaction data, to provide the service.
We also collect information you provide to us and the content of messages you send to us, such as feedback and product reviews you write, or questions and information you provide for customer support. When you contact us, such as for customer support, phone conversations or chat sessions with our representatives may be monitored and recorded.
Product-specific sections below describe data collection practices applicable to use of those products.
**********************************
Most of the data they collect is used for Advertising, a very broad category. This was released by Microsoft without a prompt, so they are trying to get ahead of the issue. We will likely see something similar from Facebook.
*********************************
Advertising. Microsoft does not use what you say in email, chat, video calls or voice mail, or your documents, photos or other personal files to target ads to you.
We use data we collect through our interactions with you, through some of our products and on third-party web properties, to show advertising. For example:
Microsoft may use your data to select and deliver some of the ads you see on Microsoft web properties, such as Microsoft.com, MSN and Bing.
When the advertising ID is enabled in Windows 10 as part of your privacy settings, Microsoft apps can access and use the advertising ID (much the same way that websites can access and use a unique identifier stored in a cookie) to select and deliver ads in such apps.
We may share data we collect with third parties, such as Oath, AppNexus, or Facebook (see below), so that they can select and deliver some of the ads you see in our products, their products, or other sites and apps serviced by these partners.
Advertisers may choose to place our web beacons on their sites in order to allow Microsoft to collect information on their sites such as activities, purchases and visits; we use this data on behalf of our advertising customers to help target their ads. We also share data directly with service providers, such as Oath, AppNexus, or Facebook, to permit them to provide services on our behalf or to partner with us in selecting and serving ads for our advertising partners.
The ads that you see may be selected based on data we process about you, such as your interests and favorites, your location, your transactions, how you use our products, your search queries, or the content you view. For example, if you view content on MSN about automobiles, we may show advertisements about cars; if you search “pizza places in Seattle” on Bing, you may see advertisements in your search results for restaurants in Seattle.
The ads that you see may also be selected based on other information learned about you over time using demographic data, location data, search queries, interests and favorites, usage data from our products and sites, as well as the sites and apps of our advertisers and partners. We refer to these ads as "interest-based advertising" in this statement. For example, if you view gaming content on xbox.com, you may see offers for games on MSN. To provide interest-based advertising, we combine cookies placed on your device using information that we collect (such as IP address) when your browser interacts with our websites. If you opt out of receiving interest-based advertising, data associated with these cookies will not be used.
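To make the interest-based advertising mechanics above a bit more concrete, here is a minimal sketch of how a cookie-keyed interest profile could drive ad selection. It is my own illustration, not Microsoft's implementation; the names and ad inventory are hypothetical, and a real system would layer in demographic, location, and partner data as the statement describes.

```python
# Minimal sketch (not Microsoft's implementation): map a cookie ID to inferred
# interests and use those interests to pick an ad. All names are hypothetical.

INTEREST_PROFILES = {}            # cookie_id -> set of interest tags
AD_INVENTORY = {
    "automotive": "New SUV lease offers",
    "gaming": "Offers for games on MSN",
    "dining": "Pizza places in Seattle",
}

def record_activity(cookie_id: str, interest_tag: str) -> None:
    """Attach an inferred interest (e.g. from viewing MSN auto content) to a cookie ID."""
    INTEREST_PROFILES.setdefault(cookie_id, set()).add(interest_tag)

def select_ad(cookie_id: str, opted_out: bool = False) -> str:
    """Pick an interest-based ad, or a generic one if the user opted out."""
    if opted_out or cookie_id not in INTEREST_PROFILES:
        return "Generic brand ad"
    for tag in INTEREST_PROFILES[cookie_id]:
        if tag in AD_INVENTORY:
            return AD_INVENTORY[tag]
    return "Generic brand ad"

record_activity("cookie-123", "automotive")     # user viewed car content on MSN
print(select_ad("cookie-123"))                  # -> "New SUV lease offers"
print(select_ad("cookie-123", opted_out=True))  # -> "Generic brand ad"
```

Note how the opt-out path falls back to a generic ad, which mirrors the statement's promise that data tied to those cookies won't be used once you opt out.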
I don't know enough about the specifics of the whole case to speak definitively, but Velodyne is probably right when they say it wasn't their LiDAR -- it was doing what it was supposed to do.
However, in a way, it was -- because the software can only process the data it collects, and it seems pretty clear that not enough data reached the software for it to make the decision to stop in time. The Velodyne LiDAR is expensive (too expensive to put multiple units on a car -- actually more expensive than the car). Having a number of inexpensive LiDAR units positioned around the vehicle, with dense and directable point clouds that can even recognize facial features... that would make things a lot safer, and that's what the MVIS LiDAR can do. Whatever the case, driverless cars are coming.
**************
To be totally fair, I've seen the video of that crash. I suspect the person who got hit was suicidal, and I'd be very surprised if any human driver (even one who was paying attention) would have been able to avoid hitting that pedestrian.
Velodyne LiDAR sample (pretty clearly enough to detect a moving bicycle, but not enough to identify the moving bicycle -- and maybe I'm wrong about that).
5.5 - 16.5 Million points per second. "Ability to resolve small features."
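As a rough back-of-envelope on why point-cloud density matters, here is a quick calculation of how many returns a spinning LiDAR with that points-per-second budget would put on a pedestrian at range. The scan parameters (rotation rate, channel count, field of view) are my own assumptions for illustration, not Velodyne or MVIS specs.

```python
import math

# Back-of-envelope only: how many LiDAR returns land on a pedestrian at range,
# given a points-per-second budget. Scan parameters below are assumed values
# for illustration, not manufacturer specifications.

points_per_second = 5.5e6    # low end of the figure quoted above
rotation_hz = 10.0           # assumed spin rate
vertical_channels = 64       # assumed number of laser channels
vert_fov_deg = 30.0          # assumed vertical field of view
ped_width_m, ped_height_m = 0.5, 1.7
range_m = 30.0

points_per_rev = points_per_second / rotation_hz
horiz_res_deg = 360.0 / (points_per_rev / vertical_channels)   # step between returns

# Angles the pedestrian subtends at this range
ped_width_deg = math.degrees(2 * math.atan(ped_width_m / (2 * range_m)))
ped_height_deg = math.degrees(2 * math.atan(ped_height_m / (2 * range_m)))

columns_on_ped = ped_width_deg / horiz_res_deg
rows_on_ped = min(vertical_channels, ped_height_deg / (vert_fov_deg / vertical_channels))

print(f"Horizontal resolution: {horiz_res_deg:.3f} deg")
print(f"Returns on a 0.5 x 1.7 m pedestrian at {range_m:.0f} m: "
      f"~{columns_on_ped * rows_on_ped:.0f}")
```

Coarsen the resolution or push the range out and the count falls off quickly, which is the point about wanting multiple dense, directable point clouds around the vehicle.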
From the recent MVIS CC:
"So the capabilities I'm describing in this perceptive element exist within the context of the 1-meter, 1.5-meter interactive display, the 10-meter display and the 30-meter display and perhaps the -- sorry, not display, LiDAR. Perhaps the best way to think of these, Henry, is to think of them as having sort of different levels of vocabulary. The things that the 1 meter display -- or the 1 meter LiDAR will have to recognize will be a relatively small number of things, gestures, point, touch, compression, squeezing the picture, flipping the page. And the 10-meter LiDAR, you can see how that number of things that would have to be recognized will increase for the device to be able to send a message that says it's your child walking towards an open door versus your dog running through a dog port would be an example of how those differences. Maybe it's you walking down the hall past the bookcase, so don't turn the lights on for the bookshelf, or your wife walking towards the bookcase to get a book, go turn the lights on there and illustrate it. So those -- you can see that the language or the vocabulary perhaps of the device would increase and then within the automotive space would increase again."
LiDAR this good can recognize a pedestrian and stop the car.
They're also still talking about expense in $4,000 - $75,000 increments. Microvision can deploy multiple units at those prices, with a significantly better point cloud.
Uber cuts back on safety sensors:
"In scaling back to a single lidar on the Volvo, Uber introduced a blind zone around the perimeter of the SUV that cannot fully detect pedestrians, according to interviews with former employees and Raj Rajkumar, the head of Carnegie Mellon University’s transportation center who has been working on self-driving technology for over a decade."
Out of the world’s estimated 7 billion people, 6 billion have access to mobile phones. Only 4.5 billion have access to working toilets
On the eve of World Water Day last week, the U.N. offered a sobering statistic: according to its recent study, more people on earth have access to cell phones than toilets.
Out of the world’s estimated 7 billion people, 6 billion have access to mobile phones. Far fewer — only 4.5 billion people — have access to working toilets. Of the 2.5 billion who don’t have proper sanitation, 1.1 billion defecate in the open, according to the study.
The primary value of augmented reality is that it brings components of the digital world into a person’s perception of the real world, and does so not as a simple display of data, but through the integration of immersive sensations that are perceived as natural parts of an environment.
Translated, that means “making the real world all that more cooler.”
U2 is always looking to top themselves when it comes to their tour spectacles. For the upcoming Songs of Experience roadtrip, head tour dude Willie Williams–the guy who has designed pretty much every U2 tour of any consequence–is apparently turning to AR.
Enter the 5th type of augmented reality tech: projection-based AR. This type of AR projects artificial light onto real world surfaces, forming digital images. It can be interactive or non-interactive.
Another interesting application of projection based AR utilises laser plasma technology to project a 3D interactive hologram into mid-air.
“What’s really astounding to me is a lot of the guys at Oculus VR and other companies who were creating VR tell me that Ready Player One is one of their primary inspirations in getting into virtual reality,” Cline says. “I’ve been to Oculus a few times to do book signings and things there and they tell me Ready Player One is like required reading for new employees.” Opens Wednesday, March 28. (Fortune)
From Ready Player One: "The wireless one-size-fits-all OASIS visor was slightly larger than a pair of sunglasses. It used harmless low-powered lasers to draw the stunningly real environment of the OASIS right onto the wearer's retinas, completely immersing their entire field of vision in the online world. The visor was light-years ahead of the clunky virtual-reality goggles available prior to that time...." Facebook AR Hub Oculus Careers
Excerpts: Apple's invention relates to a mixed reality system that may include a mixed reality device such as a headset, helmet, goggles, or glasses (referred to herein as a head-mounted display (HMD)) that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user.
The 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.).
The mixed reality system may include world-facing sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and user-facing sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.).
In some embodiments, the world sensors may include one or more "video see through" cameras (e.g., RGB (visible light) cameras) that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment.
In some embodiments, the world sensors may include one or more world mapping sensors (e.g., infrared (IR) cameras with an IR illumination source, or Light Detection and Ranging (LIDAR) emitters and receivers/detectors) that, for example, capture depth or range information for the user's environment. In some embodiments, the world sensors may include one or more "head pose" sensors (e.g., IR or RGB cameras) that may capture information about the user's position, orientation, and motion in the environment; this information may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD.
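As a quick aside on the depth sensors just mentioned: LiDAR-style range finding comes down to timing the light's round trip. A minimal sketch of that arithmetic, with an illustrative number that is not from the patent:

```python
# Time-of-flight range finding in one line of arithmetic: distance is half the
# emit-to-detect round-trip time multiplied by the speed of light.
# The 20 ns example below is illustrative, not a figure from the patent.

C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a surface given the round-trip time of a returned pulse."""
    return C * t_seconds / 2.0

print(f"{range_from_round_trip(20e-9):.2f} m")   # a 20 ns return is ~3 m away
```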
The HMD may implement any of various types of virtual reality projection technologies such as a near-eye VR system that projects left and right images on screens in front of the user's eyes that are viewed by a subject, such as DLP (digital light processing), LCD (liquid crystal display) and LCoS (liquid crystal on silicon) technology VR systems.
As another example, the HMD may be a direct retinal projector system that scans left and right images, pixel by pixel, to the subject's eyes. To scan the images, left and right projectors generate beams that are directed to left and right reflective components (e.g., ellipsoid mirrors) located in front of the user's eyes; the reflective components reflect the beams to the user's eyes.
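To give a feel for what "scans left and right images, pixel by pixel" implies for the projector, here is a minimal sketch of mapping pixel coordinates to beam deflection angles. This is my own simplification under assumed resolution and field-of-view values; it is not Apple's or Microvision's actual design.

```python
# Minimal sketch under assumed values: map each pixel of a frame to the
# horizontal/vertical deflection angles a scanning projector would steer the
# beam through to draw that pixel. Resolution and FOV below are assumptions.

FRAME_W, FRAME_H = 1280, 720        # assumed frame resolution
H_FOV_DEG, V_FOV_DEG = 40.0, 22.5   # assumed optical field of view

def pixel_to_scan_angles(x: int, y: int) -> tuple:
    """Return (horizontal, vertical) deflection in degrees for pixel (x, y),
    with (0, 0) at the top-left and the optical axis at the frame center."""
    h = (x / (FRAME_W - 1) - 0.5) * H_FOV_DEG
    v = (0.5 - y / (FRAME_H - 1)) * V_FOV_DEG
    return (h, v)

print(pixel_to_scan_angles(0, 0))       # top-left corner -> (-20.0, 11.25)
print(pixel_to_scan_angles(1279, 719))  # bottom-right corner -> (20.0, -11.25)
```

Drawing a frame then just means sweeping through those angles fast enough, per eye, that the retina integrates the scanned beam into a steady image.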
This is worth paying close attention to. Microvision is at the cutting edge of technology that makes this work. (I scraped the article both because it looked like a press release and because I didn't want to lose it.)
Augmented reality is so last year (and the year before).
Or is it?
Though the high-tech glasses that once littered San Francisco’s streets and mobile games that drove hordes to Central Park to “catch ‘em all” have seemingly fallen by the wayside, AR remains a viable part of emerging technology. Startups and tech giants are still vying to get some skin in the AR game. The problem is this: the largely coastal tech hubs seem disconnected from one of AR’s most promising applications.
Across the U.S., where blue collar work is more prominent, the state of AR isn’t a pipe dream—it’s already fully functional. Manual laborers, especially those who work in manufacturing and out in the field, are utilizing the technology and improving their company’s bottom line. For example, old, antiquated manuals often slow production, but with the help of AR, workers can increase efficiency, accuracy and safety.
“Your work instructions tend to be these PDFs that are hard to work through, plus they’re static documents, so they may be out of date,” Ash Eldritch, CEO and co-founder of augmented reality software company Vital Enterprises, told Engineering.com. “We take those instructions and make them glanceable in your field of view at all times, hands-free and voice-controlled.”
The software powering AR in blue collar fields like manufacturing and construction taps into a vast (and growing) network that provides workers with assets like updated schematics and the capability to communicate directly via AR hardware with a general contractor or manager who’s not even on site.
While AR advancements in consumer use are still being ironed out, augmented reality’s real-world application may help save industries that traditionally blame technology for their demise. In fact, Forrester Research predicts that by 2025, over 14 million workers in the U.S. will wear AR-powered smart glasses.
This looks very much like most other mass-market rollouts as the product is introduced in various places around the world. Why Logicom? Logicom Vendors
The next big trend in the supply chain industry may just be augmented reality (AR). International logistics company DHL is developing and implementing smart glasses in warehouses, creating their own augmented reality. Sound interesting? Yes, because of AR's endless possibilities in the supply chain. Augmented reality by DHL can be broken down into four stages that come together to provide the final output (there's a code sketch of these stages after the list below).
1. Scene Capture:
The reality that should be augmented is captured via a video-capture device (such as a camera) or a see-through device (such as a head-mounted display).
2. Scene Identification:
Captured reality is scanned to identify and define the exact position where the virtual content should be embedded.
This position could be identified either by markers (visual tags) or by tracking technologies such as GPS, sensors, infrared, or laser.
3. Scene Processing:
As the scene becomes clearly recognized and identified, the corresponding virtual content is requested, typically from the Internet or from any kind of database.
4. Scene Visualization:
Finally, the AR system produces a mixed image of the real space as well as the virtual content. The two are combined and come together as one.
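Here is the promised sketch of those four stages as a single pipeline. The function bodies are stand-ins with hypothetical names (no real computer-vision or DHL APIs); the point is only to show how each stage hands its output to the next.

```python
# Sketch of the four-stage AR pipeline described above. Bodies are stand-ins
# with hypothetical names; only the flow of data between stages is the point.

def capture_scene(camera):
    """1. Scene capture: grab a frame from a camera or see-through device."""
    return camera.read_frame()

def identify_anchor(frame):
    """2. Scene identification: find where virtual content belongs,
    e.g. via a visual marker or a GPS/sensor/laser fix."""
    return {"marker_id": "shelf-42", "x": 320, "y": 180}

def fetch_content(anchor):
    """3. Scene processing: request the virtual content for that anchor,
    typically from the Internet or a database."""
    return {"label": "Pick 3 units from bin B7", "anchor": anchor}

def render_overlay(frame, content):
    """4. Scene visualization: composite the virtual content onto the
    real-world frame and return the mixed image."""
    return (frame, content)   # placeholder for the composited image

def ar_pipeline(camera):
    frame = capture_scene(camera)
    anchor = identify_anchor(frame)
    content = fetch_content(anchor)
    return render_overlay(frame, content)
```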
“Together with Magna, we will accelerate the introduction of self-driving vehicles by sharing our technology with automotive OEMs (original equipment manufacturers) worldwide,” Lyft CEO Logan Green said in a statement. “This is an entirely new approach that will democratize access to this transformative technology."
Along with its partnership, Magna will invest $200 million in Lyft. The funding is part of a $1.7 billion round led by Alphabet’s investment arm CapitalG, according to The Wall Street Journal, and brings Lyft’s valuation to $11.7 billion.
“Lyft’s leadership in ridesharing and Magna’s automotive expertise makes this strategic partnership ideal to effect a positive change as a new transportation ecosystem unfolds,” Magna Chief Technology Officer Swamy Kotagiri said in a statement.
Bose announced today that it's getting into the augmented reality game. But Bose AR isn't about video; it's about audio. Through a "wafer-thin acoustics package," the company says it can introduce a useful and relevant layer of audio into your everyday activities. Bose says the small audio technology produces powerful and clear sound and can be incorporated into wearables like headphones, eyewear and helmets.
Steven Spielberg’s “Ready Player One” will have its world premiere at South by Southwest on Sunday night.
The Warner Bros. release was widely speculated to be this year’s “secret screening” at the Austin film festival, which made an official announcement about it in an email over the weekend.
“We are thrilled to be premiering Ready Player One at SXSW,” said Janet Pierson, director of film at SXSW in an email statement. “The film brings to the screen a story that has captivated millions of readers around the globe, written by Austin’s very own Ernest Cline. And in the hands of Steven Spielberg—inarguably one of our greatest directors—we know the film is going to be a special cinematic event for our attendees.”
“Ready Player One” is based on a 2011 novel by Cline that is set in a dystopian future, where the characters spend most of their lives on a virtual-reality system named OASIS.
The cast includes Tye Sheridan, Olivia Cooke, Ben Mendelsohn, T. J. Miller, Simon Pegg, and Mark Rylance.
In recent years, SXSW has continued to lure big studio projects, as a way to build buzz before opening in theaters. In 2015, the festival’s secret screening was “Furious 7.” Last year, SXSW launched “Baby Driver” and “Atomic Blonde.”
When you have the tech for the best near eye display, you'll have a place in most AR setups.
The companies working on it are working on displays, software, interface, computing hardware and CONTENT.
With the exception of Magic Leap, I've tried them all. The Microvision setup I tried was old, and it was still better than anything else anyone was showing. (Better than HoloLens, better than Meta, better than Vuzix.)
Magic Leap, Royal Shakespeare Company
All the cash Magic Leap is amassing is probably going a long way toward hardware development and manufacturing, but it's also becoming increasingly clear that a large portion of that cash will be devoted to content. The latest proof is a new partnership between Magic Leap and the UK's Royal Shakespeare Company (RSC).
Revealed on Wednesday, March 7, on RSC's official site, the pairing is being described as an effort to "look at how spatial computing can change the way audiences experience theatre." Specifically, the partnership will include a fellowship for recent RSC graduates and students, which will launch in the fall.
Gregory Doran, RSC Artistic Director, said: “We are very excited about our partnership with Magic Leap and the plans we have to collaborate together over the forthcoming years. We are committed to finding new ways for people to experience theatre and Shakespeare, placing the craft of theatre-making in new contexts and keeping live performance at the cutting edge of the 21st century. This partnership will build on our many years of innovation through performance and our pioneering digital work. I am delighted that this journey begins with an opportunity for young people to join us as we explore the vast opportunities that spatial computing will open up to us.”
Andy Lanning, Executive Creative Director, Magic Leap Studios, said: “At Magic Leap, we’ve brought together artists and engineers, dreamers and explorers, scientists and storytellers, who are just beginning to test the limits of a world where digital content is freed from behind screens and brought into your physical environment. We’re thrilled to embark on this journey with RSC and two visionary Fellows, who will define the medium and be encouraged to bring their existing knowledge and experience to take on the challenge of delivering the theatre experience on this next computing platform.”
And so far, the smart people I know are voting no. Qualcomm is a major electronics company in mobile and a driver of 5G, so this deserves to be watched.
Shares of Qualcomm Inc. QCOM, +0.56% surged 4.4% in premarket trade Monday, after Broadcom Ltd. AVGO, -1.56% boosted its bid to buy the semiconductor maker by 17% to its "best and final offer" of $82 a share. The bid consists of $60 a share in cash and the rest in Broadcom stock. Reuters had reported on Sunday that an increased bid was planned. Marketwatch
“Given well-known U.S. national security concerns about Huawei and other Chinese telecommunications companies, a shift to Chinese dominance in 5G would have substantial negative national security consequences for the United States,” a CFIUS official wrote in a March 5 letter to the companies’ lawyers that was reviewed by The Wall Street Journal.
Broadcom appeared to be laying the groundwork to get ahead of potential national-security concerns even before it launched its initial bid for Qualcomm. In a televised appearance last November with President Donald Trump at the White House, Broadcom’s Mr. Tan announced plans to redomicile his Singapore-based company to the U.S., a move that could have rendered CFIUS review inapplicable. Mr. Trump lavished praise on Mr. Tan, calling him “a great, great executive” and saying Broadcom was “one of the really great, great companies.”
******
Days later, Mr. Tan launched an unsolicited $105 billion bid for Qualcomm. Broadcom raised that to more than $120 billion at one point, then cut it back again to the current $117 billion offer after Qualcomm raised its separate bid for European giant NXP Semiconductors NV.
They don't like that it can't connect to Google services, but that little thing has zero to do with the projection.
TechRadar: You turn the projector on using a simple on-screen switch, and by default, the screen goes dark shortly afterwards to save power – but you can still interact with it and turn off the projector just as easily.
The projector displayed impressive and eerily bright pictures during our testing, although it’s best used in a dark environment and as close as possible to the targeted projection surface (a dedicated projector screen preferably, but a white wall will do).
A sample of movie trailers from YouTube played flawlessly with no visible ghosting or stuttering – this is a device that will feel equally at home projecting content on your bedroom wall, or on a whiteboard in the boardroom of a PLC.
To turn virtual transactions into real revenue, Payscout uses APIs in Oracle Financials Cloud to connect to banks and credit card networks on all types of transaction platforms. And with Oracle ERP Cloud, Payscout is able to streamline transaction flows from purchase to payment receipt within minutes, without manual processing. Payscout’s innovative technology is helping the company differentiate its brand and leapfrog competitors, whose apps are still limited to web and mobile commerce platforms.
The company hopes the new app will increase its share of the international ecommerce market, which is expected to reach $1 trillion in sales by 2020, increasing from today’s $375 billion, according to a recent study from Accenture and AliResearch.
5G will do more than turbocharge our smartphones (though it’ll certainly do that): it’ll enable the creation of whole new industries, have an economic impact measured in trillions*, and lay the foundation for technologies we’re only beginning to think about.
Far from a speculative vision of the future, the first mobile 5G NR trials will begin in just a few months, paving the way to commercial availability in 2019. Our Qualcomm Snapdragon X50 5G Modem, which was used to demonstrate the world’s first 5G data connection, is helping make that happen. And we’re working with leading OEMs around the world to develop Snapdragon X50-powered smartphones, mobile hotspots, Always Connected PCs, and immersive head-mounted VR displays in time for 5G’s arrival.
As we work to help build the global 5G infrastructure, we continue to evolve LTE. A pivotal part of this effort will be the just-announced Qualcomm Snapdragon X24 LTE modem, which makes its public debut at MWC. Snapdragon X24 is our third-generation Gigabit LTE modem that can deliver lightning-fast download speeds up to 2 Gbps. Snapdragon X24 and other Gigabit LTE modems will help ensure a smooth transition into 5G by complementing Snapdragon X50 to provide multimode 4G/5G capability on the same device.
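For a sense of scale on that quoted 2 Gbps peak, here is the quick arithmetic. The file sizes are my own illustrative picks, and real-world throughput will sit well below the theoretical peak.

```python
# Quick arithmetic on the quoted 2 Gbps peak rate. File sizes are illustrative,
# and real-world throughput will be lower than the theoretical peak.

peak_gbps = 2.0
bytes_per_second = peak_gbps * 1e9 / 8       # 250 MB/s at the theoretical peak

for label, size_gb in [("HD movie", 5.0), ("large game patch", 20.0)]:
    seconds = size_gb * 1e9 / bytes_per_second
    print(f"{label} ({size_gb:.0f} GB): ~{seconds:.0f} s at peak")
```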