Steve Thomas - IT Consultant

Using the timeworn and trusty narrative of Charles Dickens’ A Christmas Carol as its platform, MWM Immersive, a division of Madison Wells Media, is finally taking location-based virtual reality to its logical conclusion and merging it with an immersive theater experience.

Called Chained: A Victorian Nightmare, the new production combines live actors and an immersive setting with virtual reality to recreate Victorian-era London.

The new production premieres at experiential studio GreatCo on Friday and will run through January 6, 2019. It includes a full Victorian-era set in which one audience member at a time is fitted with a virtual reality headset by a professional actor. The individual audience experience is designed so that the viewer interacts with live actors and objects throughout.

Co-produced by MWM and the virtual reality studio Here Be Dragons, the Chained experience was created and directed by Justin Denton, who was also responsible for the immersive Legion experience that debuted at San Diego Comic-Con in 2017. The executive producer for the project was MWM Immersive’s Ethan Stearns, who also shepherded Carne y Arena, the Academy Award-winning virtual reality project from famed Mexican director Alejandro González Iñárritu.


“By combining the best of VR and immersive theater, Chained surpasses the limitations of each medium and lets the audience see, converse with, and even touch the impossible,” said Justin Denton, in a statement. “I grew up with Charles Dickens’ A Christmas Carol but in my mind’s eye I always imagined the spirits of Christmas Past, Present, and Future as much darker and more intense than most adaptations. Audiences will walk away from Chained as though they have just awoken from a dark and beautiful fever dream full of self discovery, fascination, fear, and wonder.”

Tickets, which cost $40, are available for purchase at Eventbrite.com and the company is planning to bring the experience to more cities after its initial run through January.

Combining immersive theater with virtual reality has long felt (at least to me) like the best use case for the technology. Unlike a cinematic experience, immersive theater benefits from movement in an established environment and interaction with a cast in real time. A production like “Sleep No More” would obviously lend itself to an enhanced experience in virtual reality, and if this is successful, the MWM Immersive experiment could become a road map for increasing and encouraging the technology’s adoption among a broader base of users.

It’s a great gateway to the best use cases for virtual reality and something that more companies will likely pursue.

As Roborace accelerates its plans to build an autonomous racing league, the company is finding that its toughest competition is still human drivers.

In this version of the John Henry story, the humans clearly are still winning, but the robots are catching up.

“We’re going to call it a singularity event when an autonomous racing car is faster than any racing driver,” says Lucas Di Grassi, Roborace’s chief executive and one of the world’s best Formula E racecar drivers. “We started the year 20% slower and we are now 6% slower.”

For the company’s long-term vision, the cars need to be better than any human, because part of the company’s pitch is to be the proving ground for autonomous technologies and a platform to put automakers’ best innovations through their paces in extreme conditions.

“We think when the car reaches a level that is better than any human this will create a layer of trust on the roads,” says Di Grassi. 

It’s a vision that has attracted the attention of some of the world’s biggest companies. Earlier this week, Amazon announced its own initiative for autonomous racing cars. And if Amazon is interested, you can be sure other large technology companies are also angling for a pole position in this proving ground for technology’s latest moonshot.

Amazon’s autonomous race cars are smaller than Roborace’s full-sized vehicles — and, at $399, far cheaper than the $1 million vehicles that Roborace plans to put on tracks.

Beyond the potential corporate competitors, the company’s human competition is more than just a technical obstacle for Roborace. It’s also a critical unknown when it comes to predicting whether anyone actually will want to watch the races.

When asked whether he thinks Roborace can find an audience for races that are divorced from any element of human risk or drama, Di Grassi says, “We don’t know.”

To integrate the two worlds of robot racing and human Formula One (or the increasingly popular Formula E series), Roborace has tweaked its competitive model. Earlier this year, the company unveiled a new model of its car that has room for a human driver behind the wheel.

Robocar

Roborace car at Disrupt Berlin 2018

That human driver is critical to Di Grassi’s new vision for how Roborace competition will now work. In the latest iteration of the company’s races, which will see their first flag waved in April or May of 2019, human drivers will play a larger role in the race.

“We are trying to combine humans and computers in a sport,” says Di Grassi. “The races next year will be a combination of drivers racing for the first part of the race and in a pit stop the driver jumps out and the autonomous vehicle will take over. We want to create this reality that the human and the machine are working together for a better outcome.”

Di Grassi hopes that this integration of the human element and autonomy will be enough to attract viewers, but there are other ways that the company plans to bring an audience to the wild world of autonomous robot racing.

“People want to interact,” says Di Grassi. And with the company’s planned robot races, there will be ways for audiences in the stands to shape the course of the race, potentially by throwing augmented reality obstacles onto the track for the autonomous cars to avoid — creating new challenges for technology to be put through its paces.

“We’re going to try and engage and we’re going to try and get different forms of engagement,” Di Grassi says. These include developing an open-source platform that would enable viewers to interact with simulated races in virtual reality — encouraging audience participation and competition in virtual racing leagues that could mirror the action among actual racing teams.

Like traditional Formula One racing, Roborace is serving two audiences. One is the company’s actual customers — the automakers and vendors building the software and hardware for electric and autonomous vehicles. The other is the audience that ideally will be around to see the fruit of all that labor.

Right now, no automakers have signed up as partners, in part, Di Grassi says, because they’re not confident in their technology. “The automakers are afraid because the software is not ready,” says Di Grassi. But the company’s chief executive is undeterred, because of the profusion of technologies required to make autonomous vehicles work. “Autonomous cars are a combination of a lot of different technology segments — sensors, electric motors, batteries. Our customers are sensor processing companies [and] companies like Nvidia, Qualcomm, Intel,” Di Grassi says.

However, at some point Roborace needs that audience so vendors can prove that their technology works, and people can become more comfortable with the safety and capabilities of autonomous vehicles.

“Nobody’s using high precision vehicle model like drifting and sliding and these situations will be very real. There is a whole different segment that we can develop faster in a controlled environment,” says Di Grassi. “The pitch is to compete against each other to develop technology faster and you develop trust among consumers… this will give trust to people to jump into autonomous taxi in the future.”

For the last year or so, Disney has been dabbling with massive virtual reality experiences that let players strap on a portable VR rig and run around in a warehouse-sized environment. In partnership with The VOID (part of Disney’s 2017 accelerator class) and Lucasfilm’s ILMxLab, it launched a Star Wars-themed experience, Secrets of the Empire, at both Downtown Disney (California) and Disney Springs (Florida) back in November of 2017.

The next Disney property getting the VR treatment? Wreck-It Ralph.

Here’s the trailer, released this morning:

Based on the upcoming movie sequel “Ralph Breaks the Internet,” this one will be called, aptly, Ralph Breaks VR. Like Secrets before it, the Ralph game will support four players running around a shared VR environment — but rather than dodging blaster fire and outsmarting stormtroopers, they’ll be having food fights with kittens and outrunning security drones.

While I’m mostly neutral on Ralph, I’m… pretty excited for this. Secrets of the Empire is one of the most ridiculous experiences I’ve ever had in virtual reality. It’s hard to say much without spoiling some of the moments, but my jaw was on the damned floor for half of the time. Alas, there wasn’t much time to speak of; the entire thing only lasts about 25 minutes — which, at $30 per person, felt way too short. Tickets for Ralph cost roughly the same; depending on location, it’ll be $30 or $33 per person.

A representative for Disney confirms that Secrets of the Empire is not going away. It’s an upside of the game taking place almost entirely in VR — retune the physical space to be a bit less Star Wars-y, schedule things just right, and you’re all set.

(Oh, and while details are light: after Ralph, they’re working on a Marvel-themed experience set to debut in 2019.)

Tickets for the Ralph experience are available starting next week. In addition to The VOID’s Disneyland and Disney World locations, it’ll also be running at their Glendale, Calif. and Las Vegas spots.

“Yeah! Well of course we’re working on it,” Facebook’s head of augmented reality Ficus Kirkpatrick told me when I asked him if Facebook was building AR glasses at TechCrunch’s AR/VR event in LA. “We are building hardware products. We’re going forward on this . . . We want to see those glasses come into reality, and I think we want to play our part in helping to bring them there.”

This is the clearest confirmation we’ve received yet from Facebook about its plans for AR glasses. The product could be Facebook’s opportunity to own a mainstream computing device on which its software could run after a decade of being beholden to smartphones built, controlled, and taxed by Apple and Google.

This month Facebook launched its first self-branded gadget out of its Building 8 lab, the Portal smart display, and now it’s revving up hardware efforts. For AR, Kirkpatrick told me “We have no product to announce right now. But we have a lot of very talented people doing really, really compelling cutting-edge research that we hope plays a part in the future of headsets.”

There’s a war brewing here. AR startups like Magic Leap and Thalmic Labs are starting to release their first headsets and glasses. Microsoft is considered a leader thanks to its early Hololens product, while Google Glass is still being developed for the enterprise. And Apple has acquired AR hardware developers like Akonia Holographics and Vrvana to accelerate development of its own headsets.

Mark Zuckerberg said AR glasses were 5 to 7 years away at F8 2017

Technological progress and competition seems to have sped up Facebook’s timetable. Back in April 2017, CEO Mark Zuckerberg said “We all know where we want this to get eventually, we want glasses”, but explained that “we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years”. He explained that “We can’t build the AR product that we want today, so building VR is the path to getting to those AR glasses.” The company’s Oculus division had talked extensively about the potential of AR glasses, yet similarly characterized them as far off.

But a few months later, a Facebook patent application for AR glasses was spotted by Business Insider that detailed using “waveguide display with two-dimensional scanner” to project media onto the lenses. Cheddar’s Alex Heath reports that Facebook is working on Project Sequoia that uses projectors to display AR experiences on top of physical objects like a chess board on a table or a person’s likeness on something for teleconferencing. These indicate Facebook was moving past AR research.

Facebook AR glasses patent application

Last month, The Information spotted four Facebook job listings seeking engineers with experience building custom AR computer chips to join the Facebook Reality Lab (formerly known as Oculus Research). And a week later, Oculus’ Chief Scientist Michael Abrash briefly mentioned during a half-hour technical keynote at the company’s VR conference that “No off the shelf display technology is good enough for AR, so we had no choice but to develop a new display system. And that system also has the potential to bring VR to a different level.”

But Kirkpatrick clarified that he sees Facebook’s AR efforts not just as a mixed reality feature of VR headsets. “I don’t think we converge to one single device . . . I don’t think we’re going to end up in a Ready Player One future where everyone is just hanging out in VR all the time” he tells me. “I think we’re still going to have the lives that we have today where you stay at home and you have maybe an escapist, immersive experience or you use VR to transport yourself somewhere else. But I think those things like the people you connect with, the things you’re doing, the state of your apps and everything needs to be carried and portable on-the-go with you as well, and I think that’s going to look more like how we think about AR.”

Oculus Chief Scientist Michael Abrash makes predictions about the future of AR and VR at the Oculus Connect 5 conference

Oculus virtual reality headsets and Facebook augmented reality glasses could share an underlying software layer, though, which might speed up engineering efforts while making the interface more familiar for users. “I think that all this stuff will converge in some way maybe at the software level” Kirkpatrick said.

The problem for Facebook AR is that it may run into the same privacy concerns that people had about putting a Portal camera inside their homes. While VR headsets generate a fictional world, AR must collect data about your real-world surroundings. That could raise fears about Facebook surveilling not just our homes but everything we do, and using that data to power ad targeting and content recommendations. This brand tax haunts Facebook’s every move.

Startups with a cleaner slate like Magic Leap and giants with a better track record on privacy like Apple could have an easier time getting users to put a camera on their heads. Facebook would likely need a best-in-class gadget that does much that others can’t in order to convince people it deserves to augment their reality.

You can watch our full interview with Facebook’s director of camera and head of augmented reality engineering Ficus Kirkpatrick from our TechCrunch Sessions — AR/VR event in LA:

Lyft, the transportation on demand company that is heading to a $15 billion IPO in 2019, is racing ahead with its autonomous vehicle plans. TechCrunch has learned that it is acquiring the London-based augmented reality startup Blue Vision Labs and unveiling its first test vehicle with Ford to advance its vision for self-driving cars.

The first Ford car from Lyft’s Level 5 self-driving initiative will be the Ford Fusion Hybrid. It’s the culmination of a yearlong partnership the two companies announced last September, and the vehicle will be hitting city streets “soon,” the company said.

The Ford Fusion (now with Lyft autonomy!)

While the integration of Lyft’s autonomous technologies and Ford’s hardware is impressive, perhaps more meaningful is the company’s acquisition of Blue Vision Labs, a startup out of London that has developed a way of ingesting street-level imagery and is using it to build collaborative, interactive augmented reality layers — all by way of basic smartphone cameras.

Blue Vision will sit within Lyft’s Level 5 autonomous car division headed up by Luc Vincent (who joined the company last year as VP of engineering after creating and running Google Street View).

The startup and its staff of 39 (everyone is joining Lyft) will also become the anchor for a new London R&D operation for the San Francisco-based company, focused on that autonomous driving effort. Level 5 is stepping up a gear in another way today, too: Lyft is unveiling a new vehicle that it will be using for testing.

Blue Vision has developed technology that provides both street level mapping and interactive augmented reality that lets two people see the same virtual objects. The company has already built highly detailed maps that developers can now use to develop collaborative AR experiences — it’s like the maps of these spaces become canvasses for virtual objects to be painted on. Over time, we may see various uses of it throughout the Lyft platform, but for now the main focus is Level 5.

“We are looking forward to focusing Blue Vision’s technology on building the best maps at scale to support our autonomous vehicles, and then localization to support our stacks,” Vincent said in an interview. “This is fundamental to our business. We need good maps and to understand where every passenger and vehicle is. To make our services more efficient and remove friction, we want their tech to drive improvements.”

People familiar with the acquisition tell us Blue Vision was acquired for around $72 million with $30 million on top of that based on hitting certain milestones. Lyft has declined to comment on the valuation. Blue Vision had raised $17 million and had only come out of stealth last March, after working quietly on the product for two years. Investors included GV, Accel, Horizons Ventures, SV Angel and more.

This deal is notable in part because this is the first acquisition that Lyft has made to expand its autonomous car operation, which now has 300 people working on it. At a time when many larger companies are snapping up startups that have developed interesting applications or technologies around areas like AR, mapping, and autonomous driving, there may be more to come. “We are always evaluating build versus buy,” Vincent said when asked about more acquisitions. But he also acknowledged that it is a very crowded field today, even when considering just the most promising companies.

“I don’t have a crystal ball but arguably there are quite a few players today, including big tech, startups, OEMs and car makers. There are well over 100 [strong] companies in the space and there is bound to be some consolidation.” Lyft earlier this year also inked an investment and partnership with Magna to integrate its self-driving car system into components it supplies to car makers.


But it also might face other pressures. The company counts Didi and GM among its investors, and both of these companies are making their own big strides in self-driving technology and each has inked deals to have more partners using that tech, in part to justify some of their own hefty investment.

Lyft, of course, will hope that acquisitions like Blue Vision will give it more leverage, and make it one of the consolidators, rather than the consolidated.

Blue Vision’s use of smartphones to ingest data to create its street-level imagery and mapping is crucial to Lyft’s quest for scale. In effect, every Lyft vehicle in operation today, with a smartphone on the dashboard, could be commandeered to become a “camera” watching, surveying and mapping the roads that those cars drive on, and how humans behave on them, using that to help Lyft’s autonomous vehicle (AV) platform learn more about driving overall.

In the race for data to “teach” these AI systems, having that wide network of cameras deployed and picking up data so quickly is “game changing,” said Peter Ondruska, the co-founder and CEO of Blue Vision.

“The amount of data you have affects how much you can rely on your system,” Ondruska said in an interview. “What our tech allows us to do is to utilise Lyft’s fleet to train the cars. That is really game changing. I was working on this for eight years and you have to have a lot of data to get to the right level of safety. That is hard and we can get there faster using our technology.”

Lyft up to now has really concentrated its business presence in North America, so this marks at least one way that it is expanding on the other side of the pond. It opened its first European office in Munich earlier this year, a sign that it’s looking to this part of the world at least for R&D, if not to expand its business footprint to consumers just yet. Vincent declined to comment on whether Lyft would get involved in autonomous trials in London, or whether it would expand its transportation service there.

Another key area worth noting is that Blue Vision’s “collaborative” AR, which lets two people look at the same spot in space and both see and create interactive, virtual figures in it, could be used by Lyft either to help drivers and would-be passengers communicate better, or even to help passengers discover more services during a journey or at their destination.

When Ondruska first spoke to TechCrunch earlier this year as the company emerged from stealth, ride hailing applications, in fact, were one of the use cases that we pointed out could be helped by its tech.

Peter Ondruska, the startup’s co-founder and CEO, [said] that Blue Vision’s tech can pinpoint people and other moving objects in a space to within centimeters of their actual location — far more accurate than typical GPS — meaning that it could give better results in apps that require two parties to find each other, such as in a ride-hailing app. (Hands up if you and your Uber driver have ever lost each other before you’ve even stepped foot in the vehicle.)

Blue Vision isn’t the only company working to develop these virtual maps for the world. Startups like 6d.ai, Blippar and the incredibly well capitalized and wildly successful AR technology developer Niantic Labs are also building out these virtual maps on which developers can create applications. Indeed, Niantic’s Pokemon Go game is the most successful augmented reality application to date.

Large media companies have also been investing in building content for these platforms, and investors have poured hundreds of millions of dollars into startups like 6d.ai, Niantic, Blue Vision, and others that are building both software and hardware to usher in this new age of how we will, apparently, all soon be seeing the world.

The development of these new platforms will go a long way toward ensuring that more useful applications are just around the corner, waiting for users to pick them up.

“One of the reasons why AR hasn’t really reached mass market adoption is because of the tech that is on the market,” Ondruska told us earlier this year. “Single-user experiences are limiting. We are allowing the next step, letting people see the right place, for example. None of that was possible before in AR because the backend didn’t exist. But by filling in this piece, we are creating new AR use cases, ones that are important and will be used on a daily basis.”

The deal marks Lyft’s tenth acquisition, according to CrunchBase. In 2015, Lyft acquired the disappearing-messaging company Leo to bring that company’s messaging expertise in house. Two years later, the ride-hailing company went on an acquisition tear, hoovering up FinitePaths, YesGraph, DataScore, and Kamcord. The first three seem like strategic acquisitions to bulk up mapping and marketing efforts internally, but Kamcord, a social media network for video sharing, seemed a little farther afield.

For more on Lyft’s bigger plans for AV, watch the video below of Vincent talking about the company’s roadmap (so to speak).

In another example of VR bleeding into real life, Cornell University food scientists found that cheese eaten in pleasant VR surroundings tasted better than the same cheese eaten in a drab sensory booth.

About 50 panelists who used virtual reality headsets as they ate were given three identical samples of blue cheese. Via custom-recorded 360-degree videos, the study participants were virtually placed in a standard sensory booth, on a pleasant park bench, and in the Cornell cow barn.

The panelists were unaware that the cheese samples were identical, and rated the pungency of the blue cheese significantly higher in the cow barn setting than in the sensory booth or the virtual park bench.

That’s right: cheese tastes better on a virtual farm versus inside a blank, empty cyberia.

“When we eat, we perceive not only just the taste and aroma of foods, we get sensory input from our surroundings – our eyes, ears, even our memories about surroundings,” said researcher Robin Dando.

To be clear, this research wasn’t designed to confirm whether VR could make food taste better, but whether VR could be used as a sort of taste testbed, allowing manufacturers to let people try foods in different places without, say, putting them on an airplane or inside a real cow barn. Because food tastes different in different surroundings, the ability to simulate those surroundings in VR is very useful.

“This research validates that virtual reality can be used, as it provides an immersive environment for testing,” said Dando. “Visually, virtual reality imparts qualities of the environment itself to the food being consumed – making this kind of testing cost-efficient.”

Analyst Gartner, best known for crunching device market-share data, charting technology hype cycles and churning out predictive listicles of emergent capabilities at software’s cutting edge, has now put businesses on watch: as well as dabbling in the usual crop of nascent technologies, organizations need to be thinking about wider impacts next year, on both individuals and society.

Call it a sign of the times, but digital ethics and privacy has been named one of Gartner’s top ten strategic technology trends for 2019. That, my friends, is progress of a sort. Albeit it also underlines how low certain tech industry practices have sunk that ethics and privacy are suddenly making a cutting-edge trend agenda, a couple of decades into the mainstream consumer Internet.

The analyst’s top picks do include plenty of techie stuff too, of course. Yes, blockchain is in there, alongside the usual string of caveats that the “technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations”.

So too, on the software development side, is AI-driven development — with the analyst sneaking a look beyond the immediate future to an un-date-stamped new age of the ‘non-techie techie’ (aka the “citizen application developer”) it sees coming down the pipe, when everyone will be a pro app dev thanks to AI-driven tools automatically generating the necessary models. But that’s definitely not happening in 2019.

See also: Augmented analytics eventually (em)powering “citizen data science”.

On the hardware front, Gartner uses the umbrella moniker of autonomous things to bundle the likes of drones, autonomous vehicles and robots in one big mechanical huddle — spying a trend of embodied AIs that “automate functions previously performed by humans” and work in swarming concert. Again, though, don’t expect too much of these bots quite yet — collectively, or, well, individually either.

It’s also bundling AR, VR and MR (aka the mixed reality of eyewear like Magic Leap One or Microsoft’s Hololens) into immersive experiences — in which “the spaces that surround us define ‘the computer’ rather than the individual devices. In effect, the environment is the computer” — so you can see what it’s spying there.

On the hardcore cutting edge of tech there’s quantum computing to continue to tantalize with its fantastically potent future potential. This tech, Gartner suggests, could be used to “model molecular interactions at atomic levels to accelerate time to market for new cancer-treating drugs” — albeit, once again, there’s absolutely no timeline suggested. And QC remains firmly lodged in an “emerging state”.

One nearer-term tech trend is dubbed the empowered edge, with Gartner noting that rising numbers of connected devices are driving processing back towards the end-user — to reduce latency and traffic. Distributed servers working as part of the cloud services mix is the idea, supported, over the longer term, by maturing 5G networks. Albeit, again, 5G hasn’t been deployed at any scale yet. Though some rollouts are scheduled for 2019.

Connected devices also feature in Gartner’s picks of smart spaces (aka sensor-laden places like smart cities, the ‘smart home’ or digital workplaces — where “people, processes, services and things” come together to create “a more immersive, interactive and automated experience”) and so-called digital twins, which aren’t as immediately bodysnatcherish as they first sound: the term refers to a “digital representation of a real-world entity or system”, driven by an estimated 20BN connected sensors/endpoints which Gartner reckons will be in the wild by 2020.

But what really stands out in Gartner’s list of developing and/or barely emergent strategic tech trends is digital ethics and privacy — given that the concept is not reliant on any particular technology underpinning it, yet is being (essentially) characterized as an emergent property of other already deployed (but unnamed) technologies. So it is actually in play, in a way that the others on the list aren’t yet (or aren’t at the same mass scale).

The analyst dubs digital ethics and privacy a “growing concern for individuals, organisations and governments”, writing: “People are increasingly concerned about how their personal information is being used by organisations in both the public and private sector, and the backlash will only increase for organisations that are not proactively addressing these concerns.”

Yes, people are increasingly concerned about privacy. Though ethics and privacy are hardly new concepts (or indeed new discussion topics). So the key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it.

Therefore, the key issue is about the abuse of trust that has been an inherent and seemingly foundational principle of the application of far too much cutting edge technology up to now. Especially, of course, in the adtech sphere.

And which, as Gartner now notes, is coming home to roost for the industry — via people’s “growing concern” about what’s being done to them via their data. (For “individuals, organisations and governments” you can really just substitute ‘society’ in general.)

Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

It didn’t have to be that way though. Unlike ‘blockchain’ or ‘digital twins’, ethics and privacy are not at all new concepts. They’ve been discussion topics for philosophers and moralists for scores of generations and, literally, thousands of years. Which makes engineering without consideration of human and societal impacts a very spectacular and stupid failure indeed.

And now Gartner is having to lecture organizations on the importance of building trust. Which is kind of incredible to see, set alongside bleeding edge science like quantum computing. Yet here we seemingly are in kindergarten…

It writes: “Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components. Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organisation’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing’.”

The other unique thing about digital ethics and privacy is that it cuts right across all other technology areas in this trend list.

You can, and rightly should, ask what blockchain means for privacy. Or quantum computing for ethics. How could the empowered edge be used to enhance privacy? And how might smart spaces erode it? How can we ensure ethics get baked into AI-driven development from the get-go? How could augmented analytics help society as a whole, and which individuals might it harm? And so the questions go on.

Or at least they should go on. You should never stop asking questions where ethics and privacy are concerned. Not asking questions was the great strategic fuck-up condensed into Facebook’s ‘move fast and break things’ anti-humanitarian manifesto of yore. Y’know, the motto it had to ditch after it realized that breaking all the things didn’t scale.

Because apparently no one at the company had thought to ask how breaking everyone’s stuff would help it engender trust. And so claiming compliance without trust, as Facebook now finds itself trying to, really is the archetypal Sisyphean struggle.

At our one-day TC Sessions: AR/VR event in LA on October 18, we’ll be joined by Walt Disney Imagineering’s R&D Studio Executive Jon Snoddy.

We’re going to talk about how Disney is using augmented and virtual reality in their parks and other projects and how they’re coupling those technologies with physical spaces and robotics in ways that no other company is attempting. Disney has shipped a bunch of ambitious projects lately like their robotic acrobats, a series of autonomous robots to add life to queues and attractions and a variety of different applications of AR.


Here’s some more info on Snoddy via Disney:

Jon Snoddy has lived on the leading edge of entertainment technology his entire career. Prior to leading Research & Development for Walt Disney Imagineering, Jon worked at NPR, Lucasfilm, started his own companies, and pulled a previous stint at Imagineering developing ride concepts such as Indiana Jones as well as founding the original Disney VR Studio. 

Jon’s work spans industries as well as continents. Starting off as a recording engineer for NPR, he went on to help launch the THX system at Lucasfilm, install Captain EO at Disneyland, and spearhead GameWorks LLC with DreamWorks, Sega, and Universal Studios. Additionally, he’s led redevelopment projects like Centum City in Busan, Korea; created movie theater games with TimePlay Entertainment; and enabled personalized video sharing with Big Stage Entertainment.

Jon Snoddy is currently the SVP of Disney Research and Walt Disney Imagineering Research & Development Studio Executive. He oversees a cross-disciplinary group of scientists, artists, and engineers inventing the future of entertainment. His teams work across robotics, AI, displays, visual computing, materials, and interactive storytelling to create the next generation of Disney characters, rides, experiences, and more.


Final tickets are now on sale — book yours here and you’ll save 35 percent on general admission tickets. Student tickets are $45.

What if you could peek behind what’s in your photos, like you’re moving your head to see what’s inside a window? That’s the futuristic promise of Facebook 3D photos. After announcing the feature at F8 in May, Facebook is now rolling out 3D photos to add make-believe depth to your iPhone portrait mode shots. Shoot one, tap the new 3D photos option in the status update composer, select a portrait mode photo, and share it to the News Feed, where it can be seen on desktop and mobile as well as in VR through Oculus Go’s browser or Firefox on Oculus Rift. Everyone can now view 3D photos, and the ability to create them will open to everyone in the coming weeks.

Facebook is constantly in search of ways to keep the News Feed interesting. What started with text and photos eventually expanded into videos and live broadcasts, and now to 360 photos and 3D photos. Facebook hopes that if it’s the exclusive social media home for these new kinds of content, you’ll come back to explore and rack up some ad views in the meantime.

So how exactly do 3D photos work? Our writer Devin Coldewey did a deep dive earlier this year into how Facebook uses AI to stitch together real layers of the photo with what it infers should be there if you tilted your perspective. Since portrait mode fires off both of a phone’s cameras simultaneously, parallax differences between the two shots can be used to recreate what’s behind the subject.
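The geometry behind that parallax trick is standard stereo triangulation: a nearby point shifts more between the two camera views than a distant one, so depth falls out of the pixel disparity. Here’s an illustrative sketch of that relationship (not Facebook’s actual pipeline; the focal length, baseline, and disparity numbers below are hypothetical):

```python
# Illustrative stereo-depth sketch (not Facebook's actual pipeline).
# With two cameras a known baseline apart, depth for a matched point is
#   depth = focal_length * baseline / disparity
# where disparity is how far the point shifts between the two views.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth in meters for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical dual-camera phone: 1500 px focal length, 1 cm between lenses.
# A 15 px disparity places the point 1 meter away; a 150 px disparity, 10 cm.
print(depth_from_disparity(1500.0, 0.01, 15.0))   # 1.0
print(depth_from_disparity(1500.0, 0.01, 150.0))  # 0.1
```

Note the inverse relationship: disparity shrinks toward zero for distant objects, which is why depth estimates get noisier the farther away the background is.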

To create the best 3D photos with your iPhone 7+, 8+, X or XS, Facebook recommends you keep your subject three to four feet away, and have things in the foreground and background. Distinct colors will make the layers separate better, and transparent or shiny objects like glass or plastic can throw off the AI.

Originally, the idea was to democratize the creation of VR content. But with headset penetration still relatively low, it’s the ability to display depth in the News Feed that will have the greatest impact for Facebook. In an era where Facebook’s cool is waning, hosting next-generation art forms could make it a must-visit property even as more of our socializing moves to Instagram.

Congressman Ro Khanna’s proposed Internet Bill of Rights pushes individual rights on the Internet forward in a positive manner. It provides guidelines for critical elements where the United States’ and the world’s current legislation is lacking, and it packages it in a way that speaks to all parties. The devil, as always, is in the details—and Congressman Khanna’s Internet Bill of Rights still leaves quite a bit to subjective interpretation.

But what should not be neglected is that we as individuals have not just rights but also moral obligations to this public good—the Internet. The web positively impacts our lives in a meaningful fashion, and we have a collective responsibility to nurture and keep it that way.

Speaking to the specific rights listed in the Bill, we can likely all agree that citizens should have control over information collected about them, and that we should not be discriminated against based on that personal data. We probably all concur that Internet Service Providers should not be permitted to block, throttle, or engage in paid prioritization that would negatively impact our ability to access the world’s information. And I’m sure we all want access to numerous affordable internet providers with clear and transparent pricing.

These are all elements included in Congressman Khanna’s proposal; all things that I wholeheartedly support.

As we’ve seen of late with Facebook, Google, and other large corporations, there is an absolute need to bring proper legislation into the digital age. Technological advancements have progressed far faster than regulatory changes, and drastic improvements are needed to protect users.

What we must understand, however, is that corporations, governments, and individuals all rely on the same Internet to prosper. Each group should have its own set of rights as well as responsibilities. And it’s those responsibilities that need more focus.

Take, for example, littering. There may be regulations in place that prevent people from discarding their trash by the side of the road. But regardless of these laws, there’s also a moral obligation we have to protect our environment and the world in which we live. For the most part, people abide by these obligations because it’s the right thing to do and because of social pressure to keep the place they live beautiful—not because they have a fear of being fined for littering.

We should approach the protection of the Internet in the same way.

We should hold individuals, corporations, and governments to a higher standard and delineate their responsibilities to the Internet. All three groups should accept and fulfill those responsibilities, not because we create laws and fines, but because it is in their best interests.

For individuals, the Internet has given them powers beyond their wildest dreams and it continues to connect us in amazing ways. For corporations, it has granted access to massively lucrative markets far and wide that would never have been accessible before. For governments, it has allowed them to provide better services to their citizens and has created never before seen levels of tax revenue from the creation of businesses both between and outside their physical borders.

Everyone — and I mean everyone — has gained (and will continue to gain) from protecting an open Internet, and we as a society need to recognize that and start imposing strong pressure against those who do not live up to their responsibilities.

We as people of the world should feel tremendously grateful to all the parties that contributed to the Internet we have today. If a short-sighted government decides it wants to restrict the Internet within its physical borders, this should not be permitted. It will not only hurt us, but it will hurt that very government by decreasing international trade and thus tax revenue, as well as decreasing the trust that the citizens of that country place in their government. Governments often act against their long-term interests in pursuit of short-term thinking, thus we have 2 billion people living in places with heavy restrictions on access to online information.

When an Internet Service Provider seeks full control over what content it provides over its part of the Internet, this, again, should not be allowed. It will, in the end, hurt that very Internet Service Provider’s revenue; a weaker, less diverse Internet will inevitably create less demand for the very service they are providing along with a loss of trust and loyalty from their customers.

Without the Internet, our world would come grinding to a halt. Any limitations on the open Internet will simply slow our progress and prosperity as a human race. And, poignantly, the perpetrators of those limitations stand to lose just as much as any of us.

We have a moral responsibility, then, to ensure the Internet remains aligned with its original purpose. Sure, none of us could have predicted the vast impact the World Wide Web would have back in 1989—probably not even Sir Tim Berners-Lee himself—but in a nutshell, it exists to connect people, WHEREVER they may be, to a wealth of online information, to other people, and to empower individuals to make their lives better.

This is only possible with an open and free Internet.

Over the next five years, billions of devices—such as our garage door openers, refrigerators, thermostats, and mattresses—will be connected to the web via the Internet of Things. Further, five billion users living in developing markets will join the Internet for the first time, moving from feature phones to smartphones. These two major shifts will create incredible opportunities for good, but also for exploiting our data—making us increasingly vulnerable as Internet users.

Now is the time to adequately provide Americans and people around the world with basic online protections, and it is encouraging to see people like Congressman Khanna advancing the conversation. We can only hope this Internet Bill of Rights remains bipartisan and real change occurs.

Regardless of the outcome, we must not neglect our moral obligations—whether individual Internet users, large corporations, or governments. We all shoulder a responsibility to maintain an open Internet. After all, it is perhaps the most significant and impactful creation in modern society.

Last year 30 leading venture investors told us about a fundamental shift from early stage North American VR investment to later stage Chinese computer vision/AR investment — but they didn’t anticipate its ferocity.

Digi-Capital’s AR/VR/XR Analytics Platform showed Chinese investments into computer vision and augmented reality technologies surging to $3.9 billion in the last 12 months, while North American augmented and virtual reality investment fell from nearly $1.5 billion in the fourth quarter of 2017 to less than $120 million in the third quarter of 2018. At the same time, VC sentiment on virtual reality softened significantly.

What a difference a year makes.

Dealflow (dollars)

What VCs said a year ago

When we spoke to venture capitalists last year, they had some pretty strong opinions.

Mobile augmented reality and Computer Vision/Machine Learning (“CV/ML”) are at opposite ends of the spectrum — one delivering new user experiences and user interfaces and the other powering a broad range of new applications (not just mobile augmented reality).

The market for mobile AR is very early stage, and could see $50 to $100 million exits in 2018/2019. Dominant companies will take time to emerge, and it will also take time for developers to learn what works and for consumers and businesses to adopt mobile AR at scale (note: Digi-Capital’s base case is mobile AR revenue won’t really take off until 2019, despite 900 million installed base by Q4 2018). Tech investors are most interested in native mobile AR with critical use cases, not ports from other platforms.

Computer vision and visual machine learning are more advanced than mobile AR, and could see dominant companies emerge in the near term. Here, investors love startups with real-world solutions that are challenging established industries and business practices, not research projects. Firms are investing in more than 20 different mobile augmented reality and computer vision/visual machine learning sectors, but there is the potential for overfunding during the earliest stages of the market.

What VCs did in the last 12 months

Perhaps the most crucial observation is the declining deal volumes over the last year.

Deal Volume (number of deals by category)

(Source: Digi-Capital AR/VR/XR Analytics Platform)

Deal volume (the number of deals) declined steadily by 10% per quarter over the last 12 months; by Q3 2018 it was around two-thirds of its Q4 2017 level. Most of the decline happened in the US and Europe, where VCs increasingly stayed on the sidelines by looking for short-term traction as a sign of long-term growth. (Note: data normalized excluding the HTC ViveX accelerator in Q4 2017, which skews the data.)

Deal Volume (number of deals by stage)

The biggest casualties of this short-termist approach have been early stage startups raising seed (deal volume down by more than half) and some series A (deal volume down by a quarter) rounds. This trend has been strongest in North America and Europe, but even Asia has not been entirely immune from some early stage deal volume decline.

Deal Value (dollars)

(Source: Digi-Capital AR/VR/XR Analytics Platform)

While deal volume is a great indicator of early-stage investment market trends, deal value (dollars invested) gives a clearer picture of where the big money has been going over the last 12 months. (Note: investment means new VC money into startups, not internal corporate investment – which is a cost). Global investment hit its previous quarterly record over $2 billion in Q4 2017, driven by a few very large deals. It then dropped back to around $1 billion in the first quarter of this year. Since then deal value has steadily climbed quarter-on-quarter, to reach a new record high well over $2 billion in Q3 2018.

Over $4 billion of the total $7.2 billion in the last 12 months was invested in computer vision/AR tech, with well over $1 billion going into smartglasses (the bulk of that into Magic Leap). The next largest sectors were games at around $400 million and advertising/marketing at a quarter of a billion dollars. The remaining 22 industry sectors raised from the low hundreds of millions of dollars down to single-digit millions in the last 12 months.

A tale of two markets

Deals by Country and Category (dollars)

American and Chinese investment had an inverse relationship in the last 12 months. American investors increasingly chose to stay on the sidelines, while Chinese investor confidence grew, backing clear vision with long-term investments. The differences in the data couldn’t be more stark.

North American Deals (dollars)

North American investment was almost triple Asian investment in Q4 2017, with a record high of nearly $1.5 billion dollars for the quarter. Despite 2018 being a transitional year for the market (Digi-Capital forecast that market revenue was unlikely to accelerate until 2019), North American quarterly investment fell over 90% to less than $120 million in Q3 2018. American VCs appear to have taken a long-term solution to a short-term problem.

China Deals (dollars)

Meanwhile, Chinese VCs have been focused on the long-term potential of the intersection between computer vision and augmented reality, with later-stage Series C and Series D rounds raising hundreds of millions of dollars a time. This trend increased dramatically in the last 12 months, with SenseTime Group raising over $2 billion in multiple rounds and Megvii close behind at over $1 billion (also multiple rounds).

Smaller investments (by Chinese standards) in the hundreds of millions have gone into companies Westerners might not know, including Beijing Moviebook Technology, Kujiale and more. All this saw Chinese quarterly investment grow 3x in the last 12 months. (Note: some recent Western opinions about market investment trends were based on incomplete data)

Where to from here?

Our team’s investment banking experience shows that forecasting venture capital investment is a fool’s errand. Yet it is equally foolish to ignore hard data, and ongoing discussions with leading investors along Sand Hill Road and in China point to some trends to watch.

American tech investors might continue to wait for market traction before providing the fuel needed for that traction (even if that seems counterintuitive). While this could pose an existential threat to some early stage startups in North America, it’s also an opportunity for smart money with longer time horizons.

Conversely, Chinese VCs continue to back domestic companies which could dominate the future of computer vision/augmented reality. The next 6 months will determine if this is a long-term trend, but it is the current mental model.

If mobile AR revenue accelerates in 2019 as critical use cases and apps emerge (as in Digi-Capital’s base case), this could become a catalyst for renewed investment by American VCs. The big unknown is whether Apple enters the smartphone tethered smartglasses market in late 2020 (as Digi-Capital has forecast for the last few years). This could be the tipping point for the market as a whole (not just investment). However, Apple timing is hard to predict (because Apple), with any potential launch date known only to Tim Cook and his immediate circle.

Steve Jobs said, “You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something – your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.”

Chinese investors embraced a Jobsian approach over the last 12 months, with Western VCs increasingly dot-connecting (or not). It will be interesting to see how this plays out for computer vision/AR investment over the next 12 months, so watch this space.

Oculus showed off the future of working in VR today. Rather than just splaying out 2D software on an infinite desktop, the new Oculus Hybrid Apps system lets you see both 2D screens and 3D models in VR at the same time. That means you could use a traditional image editing suite to change the look of a piece of a 3D object while also being able to rotate, move, and look around that object.

Hybrid Apps were announced at the Oculus Connect 5 conference today in San Jose, where Facebook also revealed the new Oculus Quest headset, forward compatibility for Oculus content, a mobile app for discovering and remotely installing software on the Rift, and the debut of the YouTube VR app on Oculus Go.
