
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook is losing its last Oculus co-founder

Nate Mitchell, the final Oculus co-founder remaining at Facebook, announced in an internal memo that he’s leaving the company and “taking time to travel, be with family, and recharge.” His role within the company has shifted several times since Oculus was acquired, but his current title is head of product management for virtual reality.

This follows the departures of former Oculus CEO Brendan Iribe and co-founder Palmer Luckey.

2. Twitter tests ways for users to follow and snooze specific topics

The company isn’t getting rid of the ability to follow other users, but it announced yesterday that it will start pushing users to follow topics as well, which will feature highly engaged tweets from a variety of accounts.

3. WeWork’s S-1 misses these three key points

WeWork just released its S-1 ahead of going public, but Danny Crichton argues we still don’t know the health of the core of the company’s business model or fully understand the risks it is undertaking. (Extra Crunch membership required.)

4. CBS and Viacom are merging into a combined company called ViacomCBS

The move is, in some ways, a concession to a turbulent media environment driving large-scale M&A, with AT&T buying Time Warner and Disney acquiring most of Fox — both deals are seen as consolidation in preparation for a streaming-centric future.

5. Nvidia breaks records in training and inference for real-time conversational AI

Nvidia’s GPU-powered platform for developing and running conversational AI that understands and responds to natural-language requests has achieved some key milestones and broken some records, with big implications for anyone building on its tech.

6. Corporate carpooling startup Scoop raises $60 million

Scoop, which launched back in 2015, is a corporate carpooling service that works with the likes of LinkedIn, Workday, T-Mobile and more than 50 other companies to help their employees get to and from work.

7. Domino’s launches e-bike delivery to compete with UberEats, DoorDash

Domino’s will start using custom electric bikes for pizza delivery through a partnership with Rad Power Bikes.

A recent validation study conducted by the David Geffen School of Medicine indicates that virtual reality could provide significant benefits to surgical training.

The study, which was financed by the virtual reality surgical training startup Osso VR, indicates that participants who used the company’s training methods improved their overall surgical performance by 230 percent.

That’s a huge number, but research into the efficacy of virtual reality training is still early.

Still, the “Randomized, Controlled Trial of a Virtual Reality Tool to Teach Surgical Technique for Tibial Shaft Fracture Intramedullary Nailing” takes the first step at providing evidence to back up the long-held assertions that learning in virtual reality has benefits that accrue in real world scenarios.

The promise of virtual reality training is its “anytime, anywhere” applicability, according to Osso VR, and the results of this study indicate that in certain controlled scenarios, the company may be right.

UCLA performed its test to see whether the Osso VR technology was worth bringing into the school for additional testing, validation and potential rollout.

In the UCLA study, 20 participants were divided between a traditionally trained group and a group that underwent VR training to a specified level of proficiency. Each participant then performed a tibial intramedullary nailing on a sawbones simulation, graded by an observer who did not know which participant had been in which group.

Students who’d had the VR training completed the procedure 20 percent faster and completed more steps correctly according to the procedure-specific checklist that participants were scored against.

“As an orthopaedic surgeon, it’s critical to me that our technology is evidence-based. As we roll out a completely new way to train, we want our users and customers to continue to see this platform as effective and reliable,” said Justin Barad, MD, CEO and co-founder of Osso VR, in a statement. “These study results are just the beginning as we tackle one of the biggest challenges facing the healthcare industry today. Our goal is to unlock the value our providers and industry are working to bring to patients around the world.”

Here’s what I knew when I visited the Museum of Future Experiences: The startup is part of the current batch of companies at Y Combinator, and it’s doing work with virtual reality. Beyond that, I had no idea what to expect.

The MOFE is currently located in New York’s SoHo neighborhood. To reach it, I walked through an unmarked door, up a dimly lit flight of stairs and into a waiting room — where I was greeted by founder and CEO David Askarayan, and then by two men in shiny lab coats, who explained that they would be my guides.

Along with two other guests, I was led downstairs, where our guides quizzed us about our hopes and fears. We were told that our answers would reveal the current state of our subconscious, which in turn would shape the content that we were about to see.

So my MOFE experience probably won’t match yours, but I’ll try to describe it anyway: In the next room, after I put on a VR headset, I found myself flying around a stark, beautiful, black-and-white lake while a voiceover discussed the meaning of death. When that segment ended, I was surrounded by the outlines of enormous, ghostly dancers.

Then the headset came off, and I assumed my visit was over, but instead I was led into yet another room, where — after a brief pause — we were told that our next experience would be more communal, based on the group’s collective answers. This one turned out to be slightly more explicable, with visions of a nuclear holocaust and post-apocalyptic landscape.


As you can tell, the experience isn’t easy to describe. Afterwards, as I walked back out into the bright, muggy New York evening, I felt equal parts amused, excited and unsettled, and I knew this wasn’t like any other VR I’d seen.

A few days later, I met with MOFE founder and CEO David Askarayan to get more details about what, exactly, he’s trying to do. Askarayan has a background as a product manager at Bridgewater Associates, as well as an MBA from Harvard Business School, but he told me he’s also “been involved in the creative and art communities for the past eight years as an artist and friend of the community.”

Askarayan said that towards the end of his time at Bridgewater, he was running an experimental virtual lab, where he became convinced that most VR startups were struggling with a fundamental problem — they’re “dependent on durable consumer infrastructure, which simply wasn’t there yet.” Put more simply, “People just don’t have VR headsets at home.”

So he became interested in creating an out-of-home VR experience, but wasn’t inspired by the existing VR arcades, which he said are “essentially commoditized — it’s shooting a zombie.” (I’d argue that some of these game-like experiences can be pretty fun, but it’s true that they’re a far cry from the more “story-driven, experiential” approach that Askarayan is going for.)

He explained that the VR on display at the MOFE was created by an artist named Flatsitter, and that the startup currently has enough content that you could visit “up to four times” without any repeats.


But it’s not just about the VR — the design of the space and the interaction with the guides is part of what made my visit so memorable. Askarayan said he wanted to “incorporate elements of immersive theater,” while also creating a “white glove” experience, where staff members are helping you at every step: “I want it to be magical and really special … That’s separate from a cool technology demo.”

As for the quiz, Askarayan explained that it’s a “simple recommendation engine” that determines which VR content each visitor sees.

“You’re surrendering to an experience,” he said. “By employing the questionnaire device, I take optionality off the table. I get people to get introspective about themselves through these strange-but-deep questions that map to the different experiences in my inventory.”
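Askarayan doesn’t say how the engine works under the hood; as a purely illustrative sketch (the answer tags and experience names below are invented, not MOFE’s), a questionnaire-driven recommender can be as simple as picking the experience whose themes best overlap a visitor’s answers:

```python
# Toy sketch of a questionnaire-driven recommendation engine.
# Experience names and answer tags are hypothetical, for illustration only.
EXPERIENCES = {
    "mortality_lake": {"death", "solitude", "nature"},
    "ghost_dancers": {"connection", "movement", "awe"},
    "apocalypse": {"fear", "technology", "loss"},
}

def recommend(answer_tags):
    """Pick the experience whose theme set best overlaps the visitor's answers."""
    def overlap(item):
        _, themes = item
        return len(themes & set(answer_tags))
    name, _ = max(EXPERIENCES.items(), key=overlap)
    return name

print(recommend(["death", "nature", "fear"]))  # → mortality_lake
```

A production version would presumably weight questions and track content across repeat visits, but the core mapping idea is the same.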

If you want to check it out for yourself, the MOFE is currently operating as a pop-up until August 26, and for $49, you can reserve a one-hour slot online. Askarayan said consumer response to the pop-up will determine the startup’s next steps — whether it focuses on establishing “a permanent institution” in New York, or expanding to other cities with more pop-up locations.

The John S. and James L. Knight Foundation is looking for pitches on how to enhance and augment traditional creative arts through immersive technologies.

Through a partnership with Microsoft, the foundation is offering a share of a $750,000 pool of cash and the option of technical support from Microsoft, including mentoring in mixed-reality technologies and access to the company’s suite of mixed reality technologies.

“We’ve seen how immersive technologies can reach new audiences and engage existing audiences in new ways,” said Chris Barr, director for arts and technology innovation at Knight Foundation, in a statement. “But arts institutions need more knowledge to move beyond just experimenting with these technologies to becoming proficient in leveraging their full potential.”

Specifically, the foundation is looking for projects that will help engage new audiences; build new service models; expand access beyond the walls of arts institutions; and provide means to distribute immersive experiences to multiple locations, the foundation said in a statement.

“When done right, life-changing experiences can happen at the intersection of arts and technology,” said Victoria Rogers, Knight Foundation vice president for arts. “Our goal through this call is to help cultural institutions develop informed and refined practices for using new technologies, equipping them to better navigate and thrive in the digital age.”

Launched at the Gray Area Festival in San Francisco, the new initiative is part of the Foundation’s art and technology focus, which the organization said is designed to help arts institutions better meet changing audience expectations. Last year, the foundation invested $600,000 in twelve projects focused on using technology to help people engage with the arts.

“We’re incredibly excited to support this open call for ways in which technology can help art institutions engage new audiences,” says Mira Lane, Partner Director of Ethics & Society at Microsoft. “We strongly believe that immersive technology can enhance the ability for richer experiences, deeper storytelling, and broader engagement.”

Here are the winners from the first $600,000 pool:

  • ArtsESP – Adrienne Arsht Center for the Performing Arts

Project lead: Nicole Keating | Miami | @ArshtCenter

Developing forecasting software that enables cultural institutions to make data-centered decisions in planning their seasons and events.

  • Exploring the Gallery Through Voice – Alley Interactive

Project lead: Tim Schwartz | New York | @alleyco, @cooperhewitt, @SinaBahram

Exploring how conversational interfaces, like Amazon Alexa, can provide remote audiences with access to an exhibition experience at Cooper Hewitt, Smithsonian Design Museum.

  • The Bass in VR – The Bass

Project lead: T.J. Black | Miami Beach | @TheBassMoA

Using 360-degree photography technology to capture and share the exhibit experience in an engaging, virtual way for remote audiences.

  • AR Enhanced Audio Tour – Crystal Bridges Museum of American Art

Project lead: Shane Richey | Bentonville, Arkansas | @crystalbridges

Developing mobile software to deliver immersive audio-only stories that museum visitors would experience when walking up to art for a closer look.

  • Smart Label Initiative – Eli and Edythe Broad Art Museum at Michigan State University

Project lead: Brian Kirschensteiner | East Lansing, Michigan | @msubroad

Creating a system of smart labels that combine ultra-thin touch displays and microcomputers to deliver interactive informational content about artwork to audiences.

  • Improving Arts Accessibility through Augmented Reality Technology – Institute on Disabilities at Temple University, in collaboration with People’s Light

Project lead: Lisa Sonnenborn | Philadelphia | @TempleUniv, @IODTempleU, @peopleslight

Making theater and performance art more accessible for the deaf, hard of hearing and non-English speaking communities by integrating augmented reality smart glasses with an open access smart captioning system to accompany live works.

  • ConcertCue – Massachusetts Institute of Technology (MIT); MIT Center for Art, Science & Technology

Project lead: Eran Egozy | Cambridge, Massachusetts | @EEgozy, @MIT, @ArtsatMIT, @MIT_SHASS

Developing a mobile app for classical music audiences that receives real-time program notes at precisely-timed moments of a live musical performance.

  • Civic Portal – Monument Lab

Project lead: Paul Farber and Ken Lum | Philadelphia | @monument_lab, @PennDesign, @SachsArtsPhilly, @paul_farber

Encouraging public input on new forms of historical monuments through a digital tool that allows users to identify locations, topics and create designs for potential public art and monuments in our cities.

  • Who’s Coming? – The Museum of Art and History at the McPherson Center

Project lead: Nina Simon | Santa Cruz, California | @santacruzmah, @OFBYFOR_ALL

Prototyping a tool in the form of a smartphone/tablet app for cultural institutions to capture visitor demographic data, increasing knowledge on who is and who is not participating in programs.

  • Feedback Loop – Newport Art Museum, in collaboration with Work-Shop Design Studio

Project lead: Norah Diedrich | Newport, Rhode Island | @NewportArtMuse

Enabling audiences to share immediate feedback and reflections on art by designing hardware and software to test recording and sharing of audience thoughts.

  • The Traveling Stanzas Listening Wall – Wick Poetry Center at Kent State University Foundation

Project lead: David Hassler | Kent, Ohio | @DavidWickPoetry, @WickPoetry, @KentState, @travelingstanza

Producing touchscreen installations in public locations that allow users to create and share poetry by reflecting on and responding to historical documents, oral histories, and multimedia stories about current events and community issues.

  • Wiki Art Depiction Explorer – Wikimedia District of Columbia, in collaboration with the Smithsonian Institution

Project lead: Andrew Lih | Washington, District of Columbia | @wikimedia, @fuzheado

Using crowdsourcing methods to improve Wikipedia descriptions of artworks in major collections so people can better access and understand art virtually.

The Void, a developer of immersive virtual reality entertainment centers, is partnering with the multi-national, multi-hyphenate mall developer Unibail-Rodamco-Westfield to build twenty-five new locations around the world.

Location-based virtual reality has become the default gateway to the consumer market for virtual reality, given that adoption of consumer headsets hasn’t been all that robust.

Utah-based The Void has some big intellectual property behind its immersive experiences, including ‘Star Wars: Secrets of the Empire’ from Lucasfilm; Walt Disney Animation’s ‘Ralph Breaks the Internet’; and ‘Ghostbusters: Dimension’.

Through the partnership with Westfield in the U.S., the company intends to launch pop-ups at the Westfield World Trade Center in New York, the Westfield San Francisco Centre, Westfield Santa Anita on the outskirts of Pasadena, and Westfield UTC in San Diego. The Void notes that all of those locations will become permanent going forward.

The companies also intend to take the show on the road with openings planned for Paris, London, Amsterdam, Chicago, Copenhagen, Oberhausen, San Jose, Calif., Stockholm, and Vienna.

This partnership between the two companies reflects some harsh realities for both businesses. For virtual reality it’s the limited home adoption of headset entertainment and for shopping malls, it’s the rise of ecommerce and the conversion of these public spaces from shopping destinations to broader entertainment hubs.

It’s a fact that Unibail-Rodamco-Westfield chief executive Christophe Cuvillier acknowledged in a statement about the partnership. “Over the past years, our industry has evolved dramatically. In a connected world, shopping is not enough anymore,” Cuvillier said. “Today, our customers expect to be entertained and brought together to share memorable, engaging sensory experiences.”

It’s the 50th anniversary of the 1969 Apollo 11 Moon landing, and Nvidia is using the occasion to show off the power of its current GPU technology, specifically RTX real-time ray tracing, which was the topic of the day at its recent GTC conference.

Nvidia employed its latest tech to make big improvements to the moon-landing demo it created five years ago and refined last year to demonstrate its Turing GPU architecture. The resulting simulation is a fully interactive graphic demo that models sunlight in real-time, providing a cinematic and realistic depiction of the Moon landing complete with accurate shadows, visor and metal surface reflections, and more.

Already, Nvidia had put a lot of work into this simulation, which runs on some of its most advanced graphics hardware. When the team began constructing the virtual environment, they studied the lander, the actual reflectivity of the astronauts’ space suits and the properties of the Moon’s surface dust and terrain. With real-time ray tracing, they can now scrub the sun’s relative position back and forth and have every surface reflect light the way it actually would.
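Nvidia hasn’t published the demo’s shading code, but the idea of scrubbing the sun and recomputing reflected light can be illustrated with the simplest possible model, a Lambertian diffuse term (all numbers here, including the albedo, are illustrative assumptions, not taken from the demo):

```python
import numpy as np

def sun_direction(angle_deg):
    """Unit vector toward the sun, parameterized by elevation above the horizon."""
    a = np.radians(angle_deg)
    return np.array([np.cos(a), np.sin(a), 0.0])

def diffuse(normal, sun_angle_deg, albedo=0.12):
    """Lambertian diffuse term for a surface with the given normal.
    0.12 is roughly lunar-regolith albedo; purely illustrative here."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return albedo * max(0.0, float(n @ sun_direction(sun_angle_deg)))

# A surface facing straight up brightens as the sun climbs:
print(diffuse([0, 1, 0], 10.0) < diffuse([0, 1, 0], 60.0))  # → True
```

Real-time ray tracing goes far beyond this single term (shadows, bounced light, specular visor reflections), but the per-surface recomputation as the light moves is the same basic operation.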


Idiot conspiracy theorists may still falsely argue that the original was a stage show, but Nvidia’s recreation is the real wizardry, potentially providing a ‘more real than archival’ look at something only a dozen people have actually experienced.

Just shy of three years ago, Pokémon GO took over the world. Players filled the sidewalks, and crowds of trainers flooded parks and landmarks. Anywhere you looked, people were throwing Pokéballs and chasing Snorlax.

As the game grew, so did the company behind it. Niantic had started its life as an experimental “lab” within Google — an effort on Google’s part to keep the team’s founder, John Hanke, from parting ways to start his own thing. In the months surrounding GO’s launch, Niantic’s team shrank dramatically, spun out of Google, and then rapidly expanded… all while trying to keep GO’s servers from buckling under demand and to keep this massive influx of players happy. Want to know more about the company’s story so far? Check out the Niantic EC-1 on ExtraCrunch here.

Now Niantic is back with its next title, Harry Potter: Wizards Unite. Built in collaboration with WB Games, it’s a reimagining of Pokémon GO’s real-world, location-based gaming concept through the lens of JK Rowling’s Harry Potter universe.

I got a chance to catch up with John Hanke for a few minutes earlier this week — just ahead of the game’s US/UK launch this morning. We talked about how they prepared for this game’s launch, how it’s built upon a platform they’ve been developing across their other titles for years, and how Niantic’s partnership with WB Games works creatively and financially. Here’s the transcript:

Greg Kumparak: Can you tell me a bit about how all this came to be?

John Hanke: Yeah, you know… we did Ingress first, and we were thinking about other projects we could build. Pokémon was one that came up early, so we jumped on that — but the other one that was always there from the beginning, of the projects we wanted to do, was Harry Potter. I mean, it’s universally beloved. My kids love the books and movies, so it’s something I always wanted to do.

Like Pokémon, it was an IP we felt was a great fit for [augmented reality]. That line between the “muggle” world and the “magic” world was paper thin in the fiction, so imagining breaking through that fourth wall and experiencing that magic through AR seemed like a great way to use the technology to fulfill an awesome fan fantasy.

Carnegie Mellon researchers working with peers from the University of Minnesota have made a big breakthrough in brain-computer interface (BCI) and robotic technology: They’ve developed a way for a person to control a robot arm with their mind – with no surgery or invasive procedures required to make it possible.

The mind-controlled robot in this experiment also showed a high degree of motor control, as it was able to track a computer cursor as it moved across a screen. This is a huge step forward for the field, since it proves the viability of controlling computers with the brain more generally, which could have all kinds of applications, not least giving people with paralysis or other movement-affecting disorders an alternative way to operate computerized devices.

To date, successful, highly precise demonstrations of BCI tech in people have depended on systems that incorporate brain implants, which pick up signals from inside the user. Implanting these devices is not only dangerous but also expensive, and their long-term impact is not fully understood. As a result, they are not widely used, and only a few people have been able to benefit from them.

The CMU and University of Minnesota team’s breakthrough is a system that can deal with the lower signal quality of noninvasive sensors applied to the skin rather than implanted in the body. The researchers employed a combination of new sensing and machine-learning techniques to capture signals originating deep within the brain, but without the kind of ‘noise’ that typically comes with noninvasive methods.
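The article doesn’t detail the team’s sensing or learning pipeline, but the decoding problem itself can be sketched: learn a mapping from noisy, multichannel scalp signals to an intended cursor movement. Below is a toy version using synthetic data and ridge regression; it is illustrative only and not the researchers’ method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scalp EEG": a hidden 2-D cursor intent, linearly mixed into
# 16 channels and buried in heavy sensor noise.
n_samples, n_channels = 2000, 16
intent = rng.standard_normal((n_samples, 2))          # true cursor velocity
mixing = rng.standard_normal((2, n_channels))
eeg = intent @ mixing + 3.0 * rng.standard_normal((n_samples, n_channels))

# Ridge-regression decoder: noisy channels -> intended cursor velocity.
lam = 10.0
X, Y = eeg[:1500], intent[:1500]                      # training split
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# On held-out data, the decoded trace tracks the hidden intent despite the noise.
pred = eeg[1500:] @ W
corr = np.corrcoef(pred[:, 0], intent[1500:, 0])[0, 1]
print(round(corr, 2))
```

The hard part the researchers solved is precisely what this toy glosses over: real scalp signals are far noisier and nonlinearly related to intent, which is why their sensing and machine-learning advances matter.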

This groundbreaking discovery might not even be that far away from changing the lives of actual patients – the research team intends to start clinical trials soon.

For the past nineteen years, Ioannis Tarnanas, the founder and chief scientific officer at Altoida, has been developing virtual and augmented reality tools to offer predictions about the onset of mental illness in older patients.

The company, whose tools have been approved by the Food and Drug Administration for predicting Alzheimer’s, claims that it can determine whether someone will present with the disease six to ten years before the onset of mild cognitive impairment symptoms, with 94% accuracy.

In 2019 alone, Alzheimer’s and other dementias will cost the U.S. nearly $290 billion, and that figure could rise as high as $1.1 trillion by 2050, according to Altoida.

The number of people living with Alzheimer’s disease is rapidly growing, but Altoida says much of this cost could be avoided if the disease is caught early enough.

Altoida uses an iPad or tablet’s accelerometer, gyroscope, and touchscreen sensors to detect what the company calls “micro-errors” as patients complete a series of AR and VR challenges. It’s basically a game of hide-and-seek in which patients place virtual objects in different physical spaces in a clinical environment and then try to collect them.
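Altoida doesn’t publish how “micro-errors” are scored. Purely as a hypothetical illustration, one could imagine flagging samples where a patient’s motion deviates sharply from their own session baseline:

```python
# Hypothetical sketch: flag "micro-errors" as motion samples that deviate
# strongly from the session's baseline. Threshold and data are invented.
def micro_errors(accel_magnitudes, z_threshold=3.0):
    """Return indices of samples more than z_threshold std devs from the mean."""
    n = len(accel_magnitudes)
    mean = sum(accel_magnitudes) / n
    var = sum((x - mean) ** 2 for x in accel_magnitudes) / n
    std = var ** 0.5 or 1.0  # avoid division by zero on a flat trace
    return [i for i, x in enumerate(accel_magnitudes)
            if abs(x - mean) / std > z_threshold]

trace = [1.0] * 50 + [9.0] + [1.0] * 49   # one sharp jerk mid-task
print(micro_errors(trace))  # → [50]
```

Altoida’s actual scoring presumably fuses all three sensor streams and compares against clinical norms rather than a single-session baseline; this only illustrates the general anomaly-flagging idea.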

Right now, the company’s technology is only available as a clinically supervised test in a doctor’s office, but the company is beginning to look at bringing its diagnostic tools into the home.

“In this field there are two major waves. Passive digital biomarkers and active digital biomarkers. With passive biomarkers you collect data from sensors,” says Tarnanas. “To give you an example of what this means in real life. [With passive digital biomarkers] you wind up collecting huge amounts of data and you see spikes and associate that with more everyday function or not… you are never sure whether this is due to day to day activity.”

Tarnanas started conducting longitudinal clinical trials around cognitive testing in the early 2000s while he was working on his master’s at the University of Sussex. He then moved to San Diego and worked in the Virtual Reality Medical Center before moving on to Bern, Switzerland, to conduct additional research. Tarnanas finally settled in Houston, where Altoida is now based.

“Developing enhanced methods to objectively evaluate cognitive function is a critical component of the next generation digital medicine — a component that is required to not only advance the basic research in neurodegenerative disease, but also one that is required for the development of improved clinical interventions,” said Dr. Walter Greenleaf, PhD, a neuroscientist and Distinguished Visiting Scholar working at the Stanford University Virtual Human Interaction Lab, in a statement. “Understanding neurodegenerative biotypes will dramatically improve our ability to conduct a differential diagnosis at the primary care level.  Improved diagnostics will provide healthcare professionals with the key information necessary to precisely adapt clinical interventions to personalize the patient’s cognitive care. This will ultimately lead to improved outcomes of care and to reduced healthcare costs.”

Some influential healthcare investors are already on board. Altoida has raised $6.3 million in a new round of financing from investors led by M Ventures, the corporate investment arm of the pharmaceutical company Merck, with additional participation from Grey Sky Venture Partners, VI Partners AG, Alpana Ventures, and FYRFLY Venture Partners.

“The beauty of active digital biomarkers is that they can actually expand to more conditions,” says Tarnanas. The company is looking at expanding its prognostic toolkit to determine lasting impacts from traumatic brain injuries and post-operative cognitive disorder, he says.

“As the world’s effort to introduce meaningful therapies for Alzheimer’s disease inches closer and closer to success, it is clear that the greatest benefit will come to those whose disease is detected at a very early stage,” said Jonathan L. Liss, MD, Director at Columbus Memory Center and Founder of Columbus Memory Project, who has been using Altoida’s technology since September 2018. “The Altoida Neuro-Motor Index (NMI) device offers an ingenious way in which to detect early disease and track progression without prolonged cognitive testing, tissue sampling, or radiologic intervention. The Altoida NMI device is a welcome advancement to the field of cognitive health.”

Altoida isn’t alone in trying to find a way to diagnose Alzheimer’s earlier. Recently, MyndYou, a New York-based company, announced a partnership with Mizuho to bring its passive prognostic toolkit to Japan. That company recently secured roughly $2 million to build out its own solution.

 

For the past nineteen years, Ioannis Tarnanas, the founder and chief scientific officer at Altoida, has been developing virtual and augmented reality tools to offer predictions about the onset of mental illness in older patients.

The company, whose tools have been approved by the Food and Drug Administration for predicting Alzheimer’s, claims that it can determine whether someone will present with the disease six-to-ten years before the onset of mild cognitive impairment symptoms with a 94% accuracy.

In 2019, Alzheimer’s and other dementias will cost the U.S. nearly $290 billion and that figure could rise as high as $1.1 trillion by 2050, according to Altoida.

The number of people living with Alzheimer’s disease is rapidly growing. In 2019 alone, Alzheimer’s disease and other dementias will cost the nation $290 billion. By 2050, these costs could rise as high as $1.1 trillion, but Altoida says that these costs can be prevented if the disease is caught early enough.

Altoida uses an iPad or a tablet accelerometer, a gyroscope, and touch screen sensors to detect what the company calls “micro-errors” as patients complete a series of AR and VR challenges. It’s basically a game of hide-and-seek where patients put virtual objects in different physical spaces in a clinical environment and then try to collect them.

Right now, the company’s technology is only available as a clinically supervised test in a doctor’s office, but the company is beginning to look at bringing its diagnostic tools into the home.

“In this field there are two major waves. Passive digital biomarkers and active digital biomarkers. With passive biomarkers you collect data from sensors,” says Tarnanas. “To give you an example of what this means in real life. [With passive digital biomarkers] you wind up collecting huge amounts of data and you see spikes and associate that with more everyday function or not… you are never sure whether this is due to day to day activity.”

Tarnanas started conducting longitudinal clinical trials around cognitive testing in the early 2000s while he was working on his Masters at the University of Sussex. He then moved to San Diego and worked in the Virtual Reality Medical Center before moving on to Bern Switzerland to conduct additional research. Tarnanas finally settled in Houston, where Altoida is now based.

“Developing enhanced methods to objectively evaluate cognitive function is a critical component of the next generation digital medicine — a component that is required to not only advance the basic research in neurodegenerative disease, but also one that is required for the development of improved clinical interventions,” said Dr. Walter Greenleaf, PhD, a neuroscientist and Distinguished Visiting Scholar working at the Stanford University Virtual Human Interaction Lab, in a statement. “Understanding neurodegenerative biotypes will dramatically improve our ability to conduct a differential diagnosis at the primary care level.  Improved diagnostics will provide healthcare professionals with the key information necessary to precisely adapt clinical interventions to personalize the patient’s cognitive care. This will ultimately lead to improved outcomes of care and to reduced healthcare costs.”

Some influential healthcare investors are already on board. Altoida has raised $6.3 million in a new round of financing from investors led by M Ventures, the corporate investment arm of the pharmaceutical company Merck, with additional participation from Grey Sky Venture Partners, VI Partners AG, Alpana Ventures, and FYRFLY Venture Partners.

“The beauty of active digital biomarkers is that they can actually expand to more conditions,” says Tarnanas. The company is looking at expanding its prognostic toolkits to determining lasting impacts from traumatic brain injuries, and post-operative cognitive disorder, he says.

“As the world’s effort to introduce meaningful therapies for Alzheimer’s disease inches closer and closer to success, it is clear that the greatest benefit will come to those whose disease is detected at a very early stage,” said Jonathan L. Liss, MD, Director at Columbus Memory Center and Founder of Columbus Memory Project, who has been using Altoida’s technology since September 2018. “The Altoida Neuro-Motor Index (NMI) device offers an ingenious way in which to detect early disease and track progression without prolonged cognitive testing, tissue sampling, or radiologic intervention. The Altoida NMI device is a welcome advancement to the field of cognitive health.”

Altoida isn’t alone in trying to find a way to diagnose Alzheimer’s earlier. Recently, MyndYou, a New York-based company announced a partnership with Mizuho to bring its passive prognostic toolkit to Japan. That company recently secured roughly $2 million to build out its own solution.

For over 100 years, entrepreneurs have come to Hollywood to try their luck in the dream factory and build an empire in the business of storytelling.

Propelled by new technologies, would-be moguls have been landing in Los Angeles since the invention of the nickelodeon, each hoping to create a studio that would dominate popular entertainment. Over the past five years, virtual reality has been the latest new thing to make or break fortunes, and the founding team behind the Korean company AmazeVR are the latest would-be dream-makers to take their turn spinning the wheel.

Despite billions of dollars in investment, and a sustained marketing push from some of the biggest names in the technology industry, virtual reality still doesn’t register with most regular consumers.

But technology companies keep pushing it, driven in part by a belief that maybe this time the next advancement in hardware and services will convince consumers to strap a headset onto their face and stay for a while in a virtual world.

There are significant economic reasons for companies to persist. Sales of headsets in the fourth quarter of 2018 topped 1 million for the first time, and new, low-cost all-in-one models may further move the needle on adoption. Hardware makers have invested billions to improve the technology, and they’d like that money not to go to waste. At the same time, networking companies are spending billions to roll out new, high-speed data networks, and they need data-hungry features (like virtual reality) to make a compelling case for consumers to upgrade to newer, more expensive plans.

Sitting at the intersection of these two market forces are companies like AmazeVR, which is hoping to beat the odds.

Founded by a team of ace Korean technologists who won fame and fortune as early executives of the multi-billion-dollar messaging service Kakao (the Korean equivalent of WhatsApp or WeChat), AmazeVR is hoping it can succeed in a marketplace littered with production studios like Baobab Studios, Here Be Dragons, The Virtual Reality Company, and others.

The company was formed and financed with $6.3 million from its founding team: JB Lee, Kakao’s co-founder and co-chief executive, who serves as Amaze’s chief product officer; Steve Lee, Kakao’s former head of strategy and now AmazeVR’s chief executive; Jeremy Nam, a former senior software engineer at Kakao and now AmazeVR’s chief technology officer; and Steve Koo, who led KakaoTalk’s messaging team and is now AmazeVR’s head of engineering.

“What we saw as the problem is the content creation itself,” says Lee.

Encouraged by the potential uptake of the Oculus Go and spurred on by $7 million in funding led by Mirae Asset Group with participation from strategic investors including LG Technology Ventures, Timewise Investment, and Smilegate Investment, AmazeVR is looking to plant a flag in Hollywood to encourage producers and content creators to use its platform and get a significant library of content up and running. 

For LG, it’s strategically important to get applications running on its newly launched 5G network back in Korea, and AmazeVR is already rolling out new content for its VR platform.

In fact, AmazeVR has already partnered with LG U+, the telecommunications network arm of LG, to produce virtual reality content. LG U+ will host AmazeVR content on its service and use the company’s proprietary content generation tools to make VR production easier as it looks to roll out 1,500 new virtual reality “experiences.”

AmazeVR sells its content as a $7-per-month subscription, with three-month bundles for $18 and six-month bundles for $24. So far, it has more than 1,000 subscribers and expects to add more as consumers start opening their wallets to pick up more devices. The company already has 20 different interactive virtual reality experiences available and is in Los Angeles to connect with top talent for additional productions, the company said.
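Those bundles imply a steep discount over paying month to month; a quick back-of-the-envelope check using only the prices above:

```python
# Effective monthly rates implied by AmazeVR's published pricing:
# $7/month, $18 for three months, $24 for six months.
MONTHLY_PRICE = 7
bundles = {3: 18, 6: 24}  # months -> bundle price

effective = {months: price / months for months, price in bundles.items()}
print(effective)  # {3: 6.0, 6: 4.0} -- i.e. $6/month and $4/month
```

In other words, the six-month bundle works out to barely more than half the standalone monthly rate.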

“We believe cloud-based VR is the future, and AmazeVR has developed elegant technology that enables users to create and share interactive content very easily,” said Dong-Su Kim, CEO of LG Technology Ventures, in a statement. “We are incredibly excited about how the AmazeVR platform will enable innovative, quality content to be generated at unprecedented scale and speed.”

AmazeVR uses a proprietary backend to stitch 360-degree video and provide editing and production tools for content creators in addition to building its own cameras for video capture, the company said.

As it builds out its library, AmazeVR is giving video creators a cut of the sales from the company’s subscriptions and individual downloads of their virtual reality experiences.

“We see no reason that VR content shouldn’t be compelling enough to support a Netflix model. To get there, we must devise mechanisms to inspire, assist, and reward content creators,” said Steve Lee, CEO of AmazeVR. “Our approach, commitment to quality, industry-leading technology, and strategic investors provide a path forward to make VR/AR the next great frontier for entertainment and personal displays.”

Consumer VR might not have taken off in the mainstream, but it’s still fun to use, and even more fun to use in groups. Something of an arcade renaissance is underway for VR right now, alongside location-based multi-user VR experiences.

That’s the premise behind Munich-based HolodeckVR, which uses proprietary tech blending radio frequency, IR tracking, and on-device IMUs to bring multi-user, positionally tracked VR to mobile headsets.
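HolodeckVR hasn’t published how it blends those signals, but hybrid tracking systems commonly fuse an absolute (accurate but slower or noisier) position fix — here standing in for the RF/IR beacons — with fast IMU dead-reckoning that drifts over time. A purely illustrative sketch of that idea, with all names and numbers hypothetical rather than HolodeckVR’s actual code:

```python
# Toy complementary filter: blend an absolute position fix (e.g. from
# RF/IR beacons) with an IMU dead-reckoning estimate that drifts.
# Illustrative only -- not HolodeckVR's actual algorithm.

def fuse_position(absolute_fix, imu_estimate, alpha=0.1):
    """Per-axis blend: alpha weights the absolute fix, (1 - alpha) the
    IMU estimate, correcting drift without a visible position jump."""
    return tuple(alpha * a + (1.0 - alpha) * i
                 for a, i in zip(absolute_fix, imu_estimate))

# Example: IMU drift has carried the estimate to (1.2, 0.0) metres while
# the beacon fix says (1.0, 0.0); the blend nudges it back toward truth.
corrected = fuse_position((1.0, 0.0), (1.2, 0.0), alpha=0.25)
print(corrected)  # (1.15, 0.0)
```

The appeal of this kind of fusion for location-based VR is that the cheap, high-rate IMU keeps motion smooth between (comparatively slow) beacon updates.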

How would you like to do VR in a big group, and on fairground dodgems/bumper cars? That’s the kind of thing this startup is cooking up.

A spin-off from the prestigious Fraunhofer Institute for Integrated Circuits IIS, the startup uses its own technology to let visitors experience virtual reality in groups of up to 20 people, moving around an empty 10x20m space wearing nothing but VR goggles.

Holodeck says its setup works for different types of events (entertainment, birthday parties and corporate team building) and can handle several thousand guests per day.

It’s now raised €3 million from strategic partner ProSiebenSat.1, the leading German entertainment player. This will allow Holodeck to expand its open content platform and extend its network of locations.

The Munich-based media company owns a potential distribution channel for scaling Holodeck VR locations at leisure and activity parks, while other synergies with ProSiebenSat.1 include live broadcasting and VR content generation.

Through 7Sports, the sports business unit of ProSiebenSat.1, Holodeck VR also plans eSports events leveraging its platform.

Holodeck VR’s Jonathan Nowak Delgado says: “With this investment, we’ll aim to become the VR touchpoint for the next generation by offering exciting new experiences that are simple, social, and fun.”

Holodeck VR’s experiences combine the real world and digital world so that you can take a ride in bumper cars or on a rollercoaster.

I hope they will have plenty of sick bags at the ready.