
Oculus doesn’t want to deter developers by making their games and content obsolete when the next version of Rift comes out. So today at the Oculus Connect 5 conference, Facebook CEO Mark Zuckerberg announced that “Future versions of our product are going to be compatible with the old ones. All of the content that works for Rifts is going to work on the next version.”

That next version of the Oculus headset will be called Quest and will come out in the spring. It's wireless, ships with Touch controllers, and will have more than 50 titles available at launch.

The compatibility strategy ties into Zuckerberg's prediction that the VR industry needs about 10 million users on any given hardware platform in order to drive enough sales for content creation to be sustainable. Discussing the slow adoption of VR, Zuckerberg noted that last year Oculus set forth a goal of getting 1 billion people into VR. He joked that this journey is "one percent finished", or "maybe less than one percent".

Right now, most VR titles are built by scrappy indie studios and are funded by initiatives like Oculus' push to invest $3 billion in VR over the next decade. That's because there aren't nearly enough headset owners buying content to make the business profitable. There are 1,100 Rift titles already available, but they risked becoming unplayable as the hardware advanced.

Of course, if Oculus really were all-in on compatibility, it'd try to work with PlayStation VR and HTC Vive to make experiences easier to port across platforms. But for now, just knowing they won't have to re-code their content each year could make developers more confident about building for the immersive medium.


HTC continues to bet big on VR, today announcing the launch of pre-orders for the Vive Wireless Adapter. The adapter allows Vive and Vive Pro owners to cut the cord, so to speak, and tether wirelessly to their PC.

The base adapter works with both the Vive and Vive Pro, though the Vive Pro requires an extra $60 compatibility pack that includes a connection cable, foam padding and an attachment device for the Vive Pro.

The Vive Wireless Adapter itself retails for $299.

According to the blog post, installation works like this:

Installation of the Vive Wireless Adapter occurs in minutes by installing a PCI-e card and attaching a sensor from the PC that broadcasts to and from the newly wireless Vive headset. The adapter has a broadcast range of 6 meters with a 150 degree field of view from the sensor and runs in the interference-free 60GHz band using Intel's WiGig specification, which, combined with DisplayLink's XR codec, means low latency and high performance with hours of battery life.
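For a rough sense of why that codec matters, here is a back-of-envelope sketch (our arithmetic, not HTC's; the panel specs and the practical WiGig throughput are assumptions based on the original Vive and 802.11ad):

```python
# Back-of-envelope check: raw Vive video bandwidth vs. a 60GHz WiGig link.
# All figures are assumptions, not HTC specs for this product.
width, height = 2160, 1200   # combined resolution of the original Vive's panels
refresh_hz = 90              # Vive refresh rate
bits_per_pixel = 24          # 8 bits per RGB channel

uncompressed_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Uncompressed feed: {uncompressed_gbps:.1f} Gbit/s")   # ~5.6 Gbit/s

# 802.11ad (WiGig) manages roughly 4.6 Gbit/s in practice, so even a mild
# compression ratio from a codec like DisplayLink's XR leaves headroom.
wigig_gbps = 4.6
print(f"Minimum compression needed: {uncompressed_gbps / wigig_gbps:.2f}x")
```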

The Adapter is powered by the HTC QC 3.0 PowerBank, which doubles as a portable charger for a smartphone and is included in the Adapter's price.

This isn't the only wireless adapter for the HTC Vive. TPCast unveiled an adapter in 2016 for $220, as well as an enterprise version of the adapter that delivers 2K content to several HTC Vive units with sub-2ms latency.

Pre-orders for HTC’s own Adapter will begin on September 5 from retailers like Amazon, Best Buy, Microsoft, NewEgg, and Vive.com.

New fifth-generation “5G” network technology will equip the United States with a superior wireless platform unlocking transformative economic potential. However, 5G’s success is contingent on modernizing outdated policy frameworks that dictate infrastructure overhauls and establishing the proper balance of public-private partnerships to encourage investment and deployment.

Most people have heard by now of the coming 5G revolution. Compared to 4G, this next-generation technology will deliver near-instantaneous connection speed, significantly lower latency – meaning near-zero buffer times – and increased connectivity capacity to allow billions of devices and applications to come online and communicate simultaneously and seamlessly.

While 5G is often discussed in future tense, the reality is it's already here. Its capabilities were displayed earlier this year at the Olympics in Pyeongchang, South Korea, where Samsung and Intel showcased a 5G-enabled virtual reality (VR) broadcasting experience to event goers. In addition, multiple U.S. carriers including Verizon, AT&T and Sprint have announced commercial deployments in select markets by the end of 2018, while chipmaker Qualcomm unveiled last month its new 5G millimeter-wave module that outfits smartphones with 5G compatibility.

[Photo: Qualcomm 5G technology on display at Mobile World Congress 2018, hosted in Barcelona from February 26 to March 1. Ramon Costa/SOPA Images/LightRocket via Getty Images]

While this commitment from 5G commercial developers is promising, long-term success of 5G is ultimately dependent on addressing two key issues.

The first step is ensuring the right policies are established at the federal, state and municipal levels in the U.S. that will allow the buildout of needed infrastructure, namely “small cells”. This equipment is designed to fit on streetlights, lampposts and buildings. You may not even notice them as you walk by, but they are critical to adding capacity to the network and transmitting wireless activity quickly and reliably. 

In many communities across the U.S., 20th century infrastructure policies are slowing the work of bringing next-generation networks and technologies online. Issues including costs per small cell attachment, permitting around public rights-of-way and deadlines on application reviews are all less-than-exciting topics of conversation, but they act as real threats to timely 5G implementation, according to recent research from Accenture and the 5G Americas organization.

Policymakers can mitigate these setbacks by taking inventory of their own policy frameworks and, where needed, streamlining and modernizing processes. For instance, current small cell permit applications can take upwards of 18 to 24 months to advance through the approval process as a result of needed buy-in from many local commissions, city councils, etc. That’s an incredible amount of time for a community to wait around and ultimately fall behind on next-generation access. As a result, policymakers are beginning to act. 

Thirteen states, including Florida, Ohio and Texas, have already passed bills alleviating some of the local infrastructure hurdles accompanying increased broadband network deployment, including delays and pricing. Additionally, this year the Federal Communications Commission (FCC) has moved on multiple orders that look to remedy current 5G roadblocks, including opening up commercial access to more of the needed high-, mid- and low-band spectrum.

The second step is identifying areas in which public and private entities can partner to drive needed capital and resources towards 5G initiatives. These types of collaborations were first made popular in Europe, where we continue to see significant advancement of infrastructure initiatives through combined public-private planning including the European Commission and European ICT industry’s 5G Infrastructure Public Private Partnership (5G PPP).

The U.S. is increasing its own public-private levels of planning. In 2015, the Obama Administration’s Department of Transportation launched its successful “Smart City Challenge” encouraging planning and funding in U.S. cities around advanced connectivity. More recently, the National Science Foundation (NSF) awarded New York City a $22.5 million grant through its Platforms for Advanced Wireless Research (PAWR) initiative to create and deploy the first of a series of wireless research hubs focused on 5G-related breakthroughs including high-bandwidth and low-latency data transmission, millimeter wave spectrum, next-generation mobile network architecture, and edge cloud computing integration.

While these efforts should be applauded, it's important to remember they are merely initial steps. A recent study conducted by CTIA, a leading trade association for the wireless industry, found that the United States remains behind both China and South Korea in 5G development. If other countries beat the U.S. to the punch, which some anticipate is already happening, companies and sectors that require ubiquitous, fast and seamless connection – like autonomous transportation, for example – could migrate, develop and evolve abroad, with a lasting negative impact on U.S. innovation.

The potential economic gains are also significant. A 2017 Accenture report predicts an additional $275 billion in infrastructure investments from the private sector, resulting in up to 3 million new jobs and a gross domestic product (GDP) increase of $500 billion. That’s just on the infrastructure side alone. On the global scale, we could see as much as $12 trillion in additional economic activity according to discussion at the World Economic Forum Annual Meeting in January.

President John F. Kennedy once said, "Conformity is the jailer of freedom and the enemy of growth." When it comes to America's technology evolution, this quote holds especially true. Our nation has led the digital revolution for decades. Now with 5G, we have the opportunity to unlock an entirely new level of innovation that will make our communities safer, more inclusive and more prosperous for all.

While the potential for entertainment in virtual and augmented reality has grabbed the most headlines, these new platforms promise radical transformations across industries and the very way that people interact with their world.

And no company is doing more to develop the toolkit for how to build applications for these new interactions than 6D.AI.

At our inaugural TC Sessions: AR/VR event on UCLA's world-famous campus on October 18, join 6D.AI co-founder and chief executive Matt Miesnieks and head of developer relations Bruce Wooden as they discuss 6D's big vision of using smartphone cameras to build a cloud-based map of the world's three-dimensional data.

The company’s goal is nothing short of supercharging augmented reality content in a way that could actually make it useful to people.

Miesnieks certainly knows about the need for applications to drive adoption in a new ecosystem. After a career in the trenches developing mobile software infrastructure for companies like Samsung and Layar, Miesnieks made the jump to AR software infrastructure in 2009.

A founding partner of the firm Super Ventures, which exclusively invests in augmented reality startups, Miesnieks was drawn to 6D and its vision as soon as he saw it demonstrated in the labs at Oxford University.

Wooden, 6D’s head of developer relations, has his own storied career in the world of augmented reality. He was a co-founder of Altspace (which was sold to Microsoft) and SVVR, the world’s largest virtual reality community.

“We want to be a platform that informs AR app developers of the real world without the real world — the structure of the real world, what’s going on in the real world, who else is in the real world — and let them build intelligent apps on top of that,” Miesnieks has said of his company’s mission.

TC Sessions: AR/VR on October 18 at UCLA is a single-day event designed to facilitate in-depth conversations, hands-on demos and networking opportunities with the industry leaders, content creators and game changers bringing innovation to the masses.

Purchase your Early Bird tickets here for just $99 and you’ll save $100 before prices go up!

Students get a special rate of just $45 when they book here.

In recent days, word about Nvidia’s new Turing architecture started leaking out of the Santa Clara-based company’s headquarters. So it didn’t come as a major surprise that the company today announced during its Siggraph keynote the launch of this new architecture and three new pro-oriented workstation graphics cards in its Quadro family.

Nvidia describes the new Turing architecture as "the greatest leap since the invention of the CUDA GPU in 2006." That's a high bar to clear, but there may be a kernel of truth here. These new Quadro RTX chips are the first to feature the company's new RT Cores. "RT" here stands for ray tracing, a rendering method that basically traces the path of light as it interacts with the objects in a scene. This technique has been around for a very long time (remember POV-Ray on the Amiga?). Traditionally, it was very computationally intensive, though the results tend to look far more realistic. In recent years, ray tracing got a new boost thanks to faster GPUs and support from the likes of Microsoft, which recently added ray tracing support to DirectX.
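To make the idea concrete, here is a toy sketch of that core loop: cast one ray per pixel, test it against a single sphere and shade the hit point by how directly it faces a light. It is purely illustrative (a few dozen lines of naive Python, nothing like Nvidia's RT Core pipeline):

```python
import math

# Toy ray tracer: one primary ray per pixel, ray-sphere intersection,
# simple diffuse shading, rendered as ASCII art. Illustrative only.
WIDTH, HEIGHT = 48, 24
CENTER = (0.0, 0.0, 3.0)          # sphere center, 3 units in front of camera
RADIUS = 1.0
LIGHT = (-0.577, 0.577, -0.577)   # unit vector pointing toward the light

def trace(dx, dy, dz):
    """Distance along a ray from the camera (at the origin) to the sphere,
    or None on a miss. Solves |t*d - CENTER|^2 = RADIUS^2 for positive t."""
    ocx, ocy, ocz = -CENTER[0], -CENTER[1], -CENTER[2]
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ocx*dx + ocy*dy + ocz*dz)
    c = ocx*ocx + ocy*ocy + ocz*ocz - RADIUS*RADIUS
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Map pixel (i, j) to a ray direction through a pinhole camera.
        x, y = 2*i/WIDTH - 1, 1 - 2*j/HEIGHT
        t = trace(x, y, 1.0)
        if t is None:
            row += " "                       # ray escapes: background
        else:
            px, py, pz = x*t, y*t, t         # hit point on the sphere
            nx = (px - CENTER[0]) / RADIUS   # surface normal at the hit
            ny = (py - CENTER[1]) / RADIUS
            nz = (pz - CENTER[2]) / RADIUS
            # Brightness: how directly the surface faces the light.
            lum = max(0.0, nx*LIGHT[0] + ny*LIGHT[1] + nz*LIGHT[2])
            row += ".:-=+*#%@"[min(8, int(lum * 9))]
    print(row)
```

Real renderers recursively spawn reflection, refraction and shadow rays from each hit, which is exactly the per-ray workload the new RT Cores are built to accelerate.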

“Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences,” said Nvidia CEO Jensen Huang. “The arrival of real-time ray tracing is the Holy Grail of our industry.”

The new RT Cores can accelerate ray tracing by up to 25 times compared to Nvidia's Pascal architecture, and Nvidia claims 10 GigaRays per second at maximum performance.
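To put that figure in context, here is a quick, hypothetical calculation (ours, not Nvidia's) of what a 10 GigaRays-per-second budget buys at 4K and 60 frames per second:

```python
# What does 10 GigaRays/s mean per pixel? Assumed target: 4K at 60fps.
rays_per_second = 10e9
pixels_per_frame = 3840 * 2160        # 4K resolution
frames_per_second = 60

rays_per_pixel = rays_per_second / (pixels_per_frame * frames_per_second)
print(f"~{rays_per_pixel:.0f} rays per pixel per frame")   # ~20
```

Roughly 20 rays per pixel per frame, which helps explain why Huang talks about hybrid rendering rather than fully ray-traced scenes.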

Unsurprisingly, the three new Turing-based Quadro GPUs will also feature the company's AI-centric Tensor Cores, as well as 4,608 CUDA cores that can deliver up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second. The chips feature GDDR6 memory to expedite things, and support Nvidia's NVLink technology to scale memory capacity up to 96GB with 100GB/s of bandwidth.

The AI part here is more important than it may seem at first. With NGX, Nvidia today also launched a new platform that aims to bring AI into the graphics pipelines. “NGX technology brings capabilities such as taking a standard camera feed and creating super slow motion like you’d get from a $100,000+ specialized camera,” the company explains, and also notes that filmmakers could use this technology to easily remove wires from photographs or replace missing pixels with the right background.

On the software side, Nvidia also today announced that it is open sourcing its Material Definition Language (MDL).

Companies ranging from Adobe (for Dimension CC) to Pixar, Siemens, Black Magic, Weta Digital, Epic Games and Autodesk have already signed up to support the new Turing architecture.

All of this power comes at a price, of course. The new Quadro RTX line starts at $2,300 for a 16GB version, while stepping up to 24GB will set you back $6,300. Double that memory to 48GB and Nvidia expects that you’ll pay about $10,000 for this high-end card.

It’s been a long and trip-filled wait but mixed reality headgear maker Magic Leap will finally, finally be shipping its first piece of hardware this summer.

We were still waiting on the price tag — but it's just been officially revealed: The developer-focused Magic Leap One 'creator edition' headset will set you back at least $2,295.

So a considerable chunk of change. This bit of kit is not intended as a mass market consumer device (although Magic Leap's founder frothed about it being "at the border of practical for everybody" in an interview with The Verge), but rather as an AR headset for developers to create content that could excite future consumers.

A 'Pro' version of the kit — with an extra hub cable and some kind of rapid replacement service if the kit breaks — costs an additional $495, according to CNET, while certain (possibly necessary) extras such as prescription lenses also cost more. So it's pushing towards 3x iPhone Xes at that point.

The augmented reality startup, which has raised at least $2.3 billion, according to Crunchbase, attracting a string of high profile investors including Google, Alibaba, Andreessen Horowitz and others, is only offering its first piece of reality bending eyewear to “creators in cities across the contiguous U.S.”.

Potential buyers are asked to input their zip code via its website to check if it will agree to take their money but it adds that “the list is growing daily”.

We tried the TC SF office zip and — unsurprisingly — got an affirmative of delivery there. But any folks in, for example, Hawaii wanting to spend big to space out are out of luck for now…

CNET reports that the headset is only available in six U.S. cities at this stage: Chicago, Los Angeles, Miami, New York, San Francisco (Bay Area), and Seattle — with Magic Leap saying that “many” more will be added in fall.

The company specifies it will “hand deliver” the package to buyers — and “personally get you set up”. So evidently it wants to try to make sure its first flush of expensive hardware doesn’t get sucked down the toilet of dashed developer expectations.

It describes the computing paradigm it’s seeking to shift, i.e. with the help of enthused developers and content creators, as “spatial computing” — but it really needs a whole crowd of technically and creatively minded people to step with it if it’s going to successfully deliver that.

Gather around, campers, and hear a tale as old as time.

Remember the HTC Dream? The Evo 4G? The Google Nexus One? What about the Touch Diamond? All amazing devices. The HTC of 2018 is not the HTC that made these industry-leading devices. That company is gone.

It seems HTC is getting ready to lay off nearly a quarter of its workforce by cutting 1,500 jobs in its manufacturing unit in Taiwan. After the cuts, HTC’s employee count will be less than 5,000 people worldwide. Five years ago, in 2013, HTC employed 19,000 people.

HTC started as a white label device maker, giving carriers an option to sell devices branded with their name. The company also had a line of HTC-branded connected PDAs that competed in the nascent smartphone market. BlackBerry, or Research in Motion as it was called until 2013, ruled this phone segment, but starting around 2007 HTC began making inroads thanks to innovative touch devices that ran Windows Mobile 6.0.

In 2008 HTC introduced the Touch line with the Touch Diamond, Touch Pro, Touch 3G and Touch HD. These were stunning devices for the time. They were fast and loaded with big, user-swappable batteries and microSD card slots. The Touch Pro even had a front-facing camera for video calls.

HTC overlaid a custom skin on top of Windows Mobile, making it a bit more palatable for the general user. At that time, Windows Mobile was competing with BlackBerry's operating system and Nokia's Symbian. None were fantastic, but Windows Mobile was by far the most daunting for new users. HTC did the best thing it could do and developed a smart skin that gave the phone a lot of features that would still be considered modern.

Later in 2008, HTC released the first Android device with Google. Called the HTC Dream or G1, the device was far from perfect. But the same could be said about the iPhone. This first Android phone set the stage for future wins from HTC, too. The company quickly followed up with the Hero, Droid Incredible, Evo 4G and, in 2010, the amazing Google Nexus One.

After the G1, HTC started skinning Android in the same fashion as it did Windows Mobile. It cannot be overstated how important this was for the adoption of Android. HTC’s user interface made Android usable and attractive. HTC helped make Android a serious competitor to Apple’s iOS.

In 2010 and 2011, Google turned to Samsung to make the second and third flagship Nexus phones. It was around this time Samsung started cranking out Android phones, and HTC couldn't keep up. That's not to say HTC didn't make a go of it. The company kept releasing top-tier phones: the One X in 2012, the One Max in 2013 and the One (M8) in 2014. But it didn't matter. Samsung had taken up the Android standard and was charging forward, leaving HTC, Sony and LG to pick from the scraps.

At the end of 2010, HTC was the leading smartphone vendor in the United States. In 2014 it trailed Apple, Samsung and LG with around a 6% market share in the U.S. In 2017 HTC captured 2.3% of smartphone subscribers, and now in 2018 some reports peg HTC at less than half a percent of the smartphone market.

Google purchased a large chunk of HTC's smartphone design talent in 2017 for $1.1 billion. The deal moved more than 2,000 employees under Google's roof, where they will likely be charged with working on Google's line of Pixel devices. It's a smart move. This HTC team was responsible for releasing amazing devices that no one bought. But that's not entirely their fault. Outside forces are to blame. HTC never stopped making top-tier devices.

The HTC of today is primarily focused on the Vive product line. And that’s a smart play. The HTC Vive is one of the best virtual reality platforms available. But HTC has been here before. Hopefully, it learned something from its mistakes in smartphones.

Researchers at the University of Maryland have found that people remember information better if it is presented in VR than on a two-dimensional personal computer. This means VR education could be an improvement on tablet- or device-based learning.

“This data is exciting in that it suggests that immersive environments could offer new pathways for improved outcomes in education and high-proficiency training,” said Amitabh Varshney, dean of the College of Computer, Mathematical, and Natural Sciences at UMD.

The study was quite complex and looked at recall in forty subjects who were comfortable with computers and VR. The researchers saw an 8.8 percent improvement in recall using VR over the desktop display.

To test the system they created a “memory palace” where they placed various images. This sort of “spatial mnemonic encoding” is a common memory trick that allows for better recall.

“Humans have always used visual-based methods to help them remember information, whether it’s cave drawings, clay tablets, printed text and images, or video,” said lead researcher Eric Krokos. “We wanted to see if virtual reality might be the next logical step in this progression.”

From the study:

Both groups received printouts of well-known faces–including Abraham Lincoln, the Dalai Lama, Arnold Schwarzenegger and Marilyn Monroe–and familiarized themselves with the images. Next, the researchers showed the participants the faces using the memory palace format with two imaginary locations: an interior room of an ornate palace and an external view of a medieval town. Both of the study groups navigated each memory palace for five minutes. Desktop participants used a mouse to change their viewpoint, while VR users turned their heads from side to side and looked up and down.

Next, Krokos asked the users to memorize the location of each of the faces shown. Half the faces were positioned in different locations within the interior setting–Oprah Winfrey appeared at the top of a grand staircase; Stephen Hawking was a few steps down, followed by Shrek. On the ground floor, Napoleon Bonaparte's face sat above a majestic wooden table, while the Rev. Martin Luther King Jr. was positioned in the center of the room.

Similarly, for the medieval town setting, users viewed images that included Hillary Clinton’s face on the left side of a building, with Mickey Mouse and Batman placed at varying heights on nearby structures.

Then, the scene went blank, and after a two-minute break, each memory palace reappeared with numbered boxes where the faces had been. The research participants were then asked to recall which face had been in each location where a number was now displayed.
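To make the protocol concrete, here is a minimal sketch of that recall test (a hypothetical re-implementation, not the researchers' code; the location labels are simplified from the study's description):

```python
import random

# Faces pinned to locations in the "memory palace". After the scene goes
# blank, numbered boxes appear at those locations and the participant must
# name the face that had been at each numbered spot.
placements = {
    "top of the grand staircase": "Oprah Winfrey",
    "a few steps down": "Stephen Hawking",
    "above the wooden table": "Napoleon Bonaparte",
    "center of the room": "Martin Luther King Jr.",
}

# Replace each face with a numbered box, in shuffled order.
locations = list(placements)
random.shuffle(locations)
numbered_boxes = {i + 1: loc for i, loc in enumerate(locations)}

def score(answers):
    """answers maps box number -> recalled face; returns fraction correct."""
    correct = sum(1 for num, face in answers.items()
                  if placements[numbered_boxes[num]] == face)
    return correct / len(numbered_boxes)

# A participant who remembers every placement scores 1.0.
perfect = {num: placements[loc] for num, loc in numbered_boxes.items()}
print(score(perfect))
```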

The key, say the researchers, was for participants to identify each face by its physical location and its relation to surrounding structures and faces–and also the location of the image relative to the user’s own body.

Desktop users could perform the feat, but VR users performed it statistically better, a fascinating twist on the traditional role of VR in education. The researchers believe that VR adds a layer of reality to the experience that lets the brain build a true "memory palace" in 3D space.

“Many of the participants said the immersive ‘presence’ while using VR allowed them to focus better. This was reflected in the research results: 40 percent of the participants scored at least 10 percent higher in recall ability using VR over the desktop display,” wrote the researchers.

“This leads to the possibility that a spatial virtual memory palace–experienced in an immersive virtual environment–could enhance learning and recall by leveraging a person’s overall sense of body position, movement and acceleration,” said researcher Catherine Plaisant.

Apple's VR ambitions continue: according to a new report from the Financial Times, Apple has acquired an augmented reality startup called Flyby Media, which developed technology that allows mobile phones to "see" the world around them. The company, notably, had worked with Google in the past, as it was the first consumer-facing application to use the image recognition…

Art doesn't have to be an end product. Thanks to Oculus' new internal creation tool, Quill, illustrators can draw in virtual reality and let audiences see their creations come to life stroke by stroke around them. Quill works much like Tilt Brush, the VR painting app Google acquired. Using Oculus' Touch controllers and motion cameras, Quill users can select different brushes…

How did Facebook go from 1 billion to 8 billion video views per day in 18 months without the whole server farm catching fire? It's called SVE, short for streaming video engine. SVE lets Facebook slice videos into little chunks, cutting the delay from upload to viewing by 10X. And to ensure the next generation of 360 and virtual reality videos load fast too, it's invented and…

So about this time of year my inbox gets flooded with junk from Sundance. Reps, PR, invites to parties. Raise a chalice with Matt Damon and Stella Artois at Sundance! [Sure, why not]. Raise a glass with Canon [sense a trend?]. See the Extraordinary Difference Vaseline is Making [what?]. You get the picture. But for the last couple of years, the big inbox trend for Sundance has been VR. In…