
Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TOPS for floating-point operations. The T4 sits on a standard low-profile 75-watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.
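Nvidia hasn't detailed the T4's internals here, but the basic idea behind those INT8/INT4 throughput numbers is quantization: trading a little numeric precision for much cheaper integer arithmetic. Here is a minimal, hypothetical NumPy sketch of symmetric post-training quantization (not Nvidia's method, just the general technique):

```python
# Illustrative sketch (not Nvidia's implementation): symmetric
# post-training quantization, the basic idea behind running
# inference in INT8/INT4 rather than FP32.
import numpy as np

def quantize(weights: np.ndarray, bits: int = 8):
    """Map float weights onto signed integers of the given width."""
    qmax = 2 ** (bits - 1) - 1          # 127 for INT8, 7 for INT4
    scale = np.abs(weights).max() / qmax
    q = np.round(weights / scale).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize(w, bits=8)
error = np.abs(w - dequantize(q, scale)).max()
print(f"max round-trip error at INT8: {error:.4f}")
```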

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.

Researchers at the University of Maryland are adapting the techniques used by birds and bugs to teach drones how to fly through small holes at high speeds. The system needs only a few sensing shots to define the opening, letting the drone fly through an irregularly shaped hole with no training.

Nitin J. Sanket, Chahat Deep Singh, Kanishka Ganguly, Cornelia Fermüller, and Yiannis Aloimonos created the project, called GapFlyt, to teach drones using only simple, insect-like eyes.

The technique they used, called optical flow, recovers depth structure using a very simple monocular camera. By tracking features across successive pictures, the drone can tell the shape and depth of a hole from what changed between photos: things closer to the drone appear to move more than things farther away, which lets the drone separate the foreground from the background.
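As a rough illustration of that parallax cue (this is not the GapFlyt code, and the two input frames are hypothetical), a minimal OpenCV sketch could track sparse features between frames and split them by flow magnitude:

```python
# Minimal sketch of the parallax idea behind GapFlyt-style gap
# detection: track sparse features across two monocular frames;
# features with larger apparent motion are closer to the camera.
import cv2
import numpy as np

# Hypothetical consecutive frames from the drone's camera.
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

pts = cv2.goodFeaturesToTrack(prev, maxCorners=400,
                              qualityLevel=0.01, minDistance=7)
nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

good_old = pts[status.flatten() == 1].reshape(-1, 2)
good_new = nxt[status.flatten() == 1].reshape(-1, 2)
flow_mag = np.linalg.norm(good_new - good_old, axis=1)

# Larger motion => nearer surface (the wall around the gap);
# smaller motion => farther surface, visible through the opening.
threshold = np.median(flow_mag)
foreground = good_new[flow_mag >= threshold]
background = good_new[flow_mag < threshold]
print(f"{len(foreground)} near points, {len(background)} far points")
```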

The researchers created a very messy environment in which to test their system, as you can see in their demo video. The Bebop 2 drone, with an NVIDIA Jetson TX2 on board, flits around the hole like a bee and then buzzes right through at 2 meters per second, a solid speed. Further, the researchers confused the environment by making the far wall similar to the closer wall, proving that the technique can work in novel and messy situations.

The team at the University of Maryland’s Perception and Robotics Group reported that the drone was 85 percent accurate as it flew through various openings. It’s not quite as fast as Luke skirting Beggar’s Canyon back on Tatooine, but it’s an impressive start.

Lockheed Martin and the Drone Racing League are working together to make pilotless drones much, much smarter. The project, aimed at bringing AI to commercial drone flyers, is “challenging teams to develop artificial intelligence (AI) technology that will enable an autonomous drone to race a pilot-operated drone – and win.”

The racers can win up to $2 million in prizes. Lockheed Martin Chief Technology Officer Keoki Jackson announced the challenge at TechCrunch Disrupt in San Francisco today.

“At Lockheed Martin, we are working to pioneer state-of-the-art, AI-enabled technologies that can help solve some of the world’s most complex challenges – from fighting wildfires and saving lives during natural disasters to exploring the farthest reaches of deep space,” said Jackson. “Now, we are inviting the next generation of AI innovators to join us with our AlphaPilot Innovation Challenge. Competitors will have an opportunity to define the future of autonomy and AI and help our world leverage these promising technologies to build a brighter future.”

Contestants will use NVIDIA’s Jetson platform to fly drones “without any pre-programming or human intervention” through a multi-dimensional race course. Contestants can win an extra $250,000 for creating an AI that outperforms a DRL human-piloted drone, a sort of drone Turing test that could mean smarter drones for both amateur flyers and Lockheed’s own extensive drone programs.

Lockheed Martin is working with the Drone Racing League to bring the competitors in human-controlled drone racing into the AI future. The goal is to create a drone that flies as well as – or better than – a human pilot.

You can learn more here; the challenge opens in November.

As fields of research, machine learning and artificial intelligence both date back to the 1950s. More than half a century later, the disciplines have graduated from the theoretical to practical, real-world applications. We’ll have some of the top minds in both fields on stage at Disrupt San Francisco in early September to discuss the latest advances and the future of AI and ML.

For the first time, Disrupt SF will be held in San Francisco’s Moscone Center. It’s a huge space, which meant we could dramatically increase the amount of programming offered to attendees. And we did. Here’s the agenda. Tickets are still available even though the show is less than two weeks away. Grab one here.

The show features the major themes currently facing the technology world, including artificial intelligence and machine learning. Some of the top minds in AI and ML are speaking on several stages, and some are taking audience questions. We’re thrilled to be joined by Dr. Kai-Fu Lee, former president of Google China and current CEO of Sinovation Ventures; Colin Angle, co-founder and CEO of iRobot; Claire Delaunay, Nvidia VP of Engineering; and, among others, Dario Gil, IBM VP of AI.

Dr. Kai-Fu Lee is the CEO and chairman of Sinovation, a venture firm based in the U.S. and China, and he has emerged as one of the world’s top prognosticators on artificial intelligence and how the technology will disrupt just about everything. Dr. Lee wrote in The New York Times last year that AI is “poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.” Dr. Lee will also be on our Q&A stage (after his interview on the Main Stage) to take questions from attendees.

Colin Angle co-founded iRobot with fellow MIT grads Rod Brooks and Helen Greiner in 1990. Early on, the company provided robots for military applications, and then in 2002 it introduced the consumer-focused Roomba. Angle has plenty to talk about. As the CEO and chairman of iRobot, he led the company through the sale of its military branch in 2016 so the company could focus on robots for the home. If there’s anyone who knows how to both work with the military and manage consumers’ expectations around household robots, it’s Colin Angle. We’re excited to have him speaking at the event, where he will also take questions from the audience on the Q&A stage.

Claire Delaunay is vice president of engineering at Nvidia, where she is responsible for the Isaac robotics initiative and leads a team bringing Isaac to market for roboticists and developers around the world. Prior to joining Nvidia, Delaunay was the director of engineering at Uber, which she joined when it acquired Otto, the startup she co-founded. She was also the robotics program lead at Google and founded two companies, Botiful and Robotics Valley. Delaunay will also be on our Q&A stage (after her interview on the Main Stage) to take questions from attendees.

Dario Gil, the head of IBM’s AI research efforts and quantum computing program, is coming to Disrupt SF to talk about the current state of quantum computing. We may even see a demo or two of what’s possible today and use that to separate hype from reality. Among the large tech firms, IBM — and specifically the IBM Q lab — has long been at the forefront of the quantum revolution. Last year, the company showed off its 50-qubit quantum computer, and you can already start building software for it using the company’s developer kit.

Sam Liang is the CEO and co-founder of AISense Inc., based in Silicon Valley. The company is funded by Horizons Ventures (an early backer of DeepMind, Waze, Zoom and Facebook), Tim Draper and Stanford’s David Cheriton (the first investor in Google), among others. AISense has created Ambient Voice Intelligence™ technologies that use deep learning to understand human-to-human conversations. Its Otter.ai product digitizes voice meetings and video conferences, makes every conversation searchable and provides speech analytics and insights. Otter.ai is the exclusive provider of automatic meeting transcription for Zoom Video Communications.

Laura Major is the Vice President of Engineering at CyPhy Works, where she leads R&D, product design and development, and manages the multi-disciplinary engineering team. Prior to joining CyPhy Works, she worked at Draper Laboratory as a division lead, where she developed the lab’s first human-centered engineering capability and expanded it to include machine intelligence and AI. Laura also grew multiple programs and engineering teams that contributed to the development and expansion of ATAK, which is now in wide use across the military.

Dr. Jason Mars founded and runs Clinc to try to close the gap in conversational AI by emulating human intelligence to interpret unstructured, unconstrained speech. AI has the potential to change everything, but there is a fundamental disconnect between what AI is capable of and how we interface with it. Clinc is currently targeting the financial market, letting users converse with their bank account using natural language without any pre-defined templates or hierarchical voice menus. At Disrupt SF, Mars is set to debut other ways that Clinc’s conversational AI can be applied. Without ruining the surprise, let me just say that this is going to be a demo you won’t want to miss. After the demo, he will take questions on the Q&A stage.

Chad Rigetti, the namesake founder of Rigetti Computing, will join us at Disrupt SF 2018 to explain Rigetti’s approach to quantum computing. It’s two-fold: on one front, the company is working on the design and fabrication of its own quantum chips; on the other, the company is opening up access to its early quantum computers for researchers and developers by way of its cloud computing platform, Forest. Rigetti Computing has raised nearly $70 million to date according to Crunchbase, with investment from some of the biggest names around. Meanwhile, labs around the country are already using Forest to explore the possibilities ahead.

Kyle Vogt co-founded Cruise Automation and sold it to General Motors in 2016. He stuck around after the sale and still leads the company today. Since the acquisition, Cruise has scaled rapidly while seemingly maintaining a scrappy startup feel, even as a division of a massive corporation. The company had 30 self-driving test cars on the road in 2016 and later rolled out a high-definition mapping system. In 2017 the company started running an autonomous ride-hailing service for its employees in San Francisco, later announcing its self-driving cars would hit New York City. Recently, SoftBank’s Vision Fund invested $2.25 billion in GM Cruise Holdings LLC, and when the deal closes, GM will invest an additional $1.1 billion. The investments are expected to inject enough capital into Cruise for the unit to reach commercialization at scale beginning in 2019.

Nvidia is taking advantage of Gamescom in Germany to hold a press conference about its future graphics processing units. The conference will start at 6 PM in Germany (12 PM in New York, 9 AM in San Francisco).

Just a week after the company unveiled its new Turing architecture, Nvidia could share more details about the configurations and prices of its upcoming products — the RTX 2080, RTX 2080 Ti, etc.

The name of the conference, #BeForeTheGame, suggests that Nvidia is going to focus on consumer products, and in particular GPUs for gamers. While the GeForce GTX 1080 still does fine when it comes to playing demanding games, the company is always working on new generations to push the graphical boundaries of your computer.

According to Next INpact, you can expect two different products this afternoon. The GeForce RTX 2080 is going to feature 2,944 CUDA cores with 8GB of GDDR6. The GeForce RTX 2080 Ti could feature as many as 4,352 CUDA cores with 11GB of GDDR6.

Nvidia already unveiled Quadro RTX models for professional workstations last week. The company is expecting significant performance improvements with this new generation as those GPUs are optimized for ray tracing — the “RT” in RTX stands for ray tracing.

While ray tracing isn’t new, it’s hard to process images using this method with current hardware. The RTX GPUs will have dedicated hardware units for this task in particular.

And maybe it’s going to become easier to buy GPUs now that the cryptocurrency mining craze is slowly fading away.

In recent days, word about Nvidia’s new Turing architecture started leaking out of the Santa Clara-based company’s headquarters. So it didn’t come as a major surprise that the company today announced during its Siggraph keynote the launch of this new architecture and three new pro-oriented workstation graphics cards in its Quadro family.

Nvidia describes the new Turing architecture as “the greatest leap since the invention of the CUDA GPU in 2006.” That’s a high bar to clear, but there may be a kernel of truth here. These new Quadro RTX chips are the first to feature the company’s new RT Cores. “RT” here stands for ray tracing, a rendering method that traces the path of light as it interacts with the objects in a scene. The technique has been around for a very long time (remember POV-Ray on the Amiga?), and its results tend to look far more realistic than rasterized graphics, but it has traditionally been very computationally intensive. In recent years, ray tracing got a new boost thanks to faster GPUs and support from the likes of Microsoft, which recently added ray-tracing support to DirectX.
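For a sense of the arithmetic RT Cores accelerate, here is a toy ray-sphere intersection in plain Python/NumPy. It bears no relation to Nvidia's actual hardware design, but it is the primitive operation a ray tracer evaluates billions of times per frame:

```python
# Toy illustration of the core operation ray-tracing hardware
# accelerates: intersecting a ray with scene geometry (one sphere).
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None on a miss.
    `direction` must be a unit vector, so the quadratic's a = 1."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                     # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])   # looking straight down +z
hit = ray_sphere(origin, direction,
                 center=np.array([0.0, 0.0, 5.0]), radius=1.0)
print(f"hit distance: {hit}")           # 4.0: the sphere's front face
```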

“Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences,” said Nvidia CEO Jensen Huang. “The arrival of real-time ray tracing is the Holy Grail of our industry.”

The new RT Cores can accelerate ray tracing by up to 25 times compared to Nvidia’s Pascal architecture, and Nvidia claims a maximum performance of 10 GigaRays per second.

Unsurprisingly, the three new Turing-based Quadro GPUs will also feature the company’s AI-centric Tensor Cores, as well as 4,608 CUDA cores that can deliver up to 16 trillion floating-point operations in parallel with 16 trillion integer operations per second. The chips feature speedy GDDR6 memory and support Nvidia’s NVLink technology to scale memory capacity up to 96GB with 100GB/s of bandwidth.

The AI part here is more important than it may seem at first. With NGX, Nvidia today also launched a new platform that aims to bring AI into the graphics pipelines. “NGX technology brings capabilities such as taking a standard camera feed and creating super slow motion like you’d get from a $100,000+ specialized camera,” the company explains, and also notes that filmmakers could use this technology to easily remove wires from photographs or replace missing pixels with the right background.

On the software side, Nvidia also today announced that it is open sourcing its Material Definition Language (MDL).

Companies ranging from Adobe (for Dimension CC) to Pixar, Siemens, Blackmagic Design, Weta Digital, Epic Games and Autodesk have already signed up to support the new Turing architecture.

All of this power comes at a price, of course. The new Quadro RTX line starts at $2,300 for a 16GB version, while stepping up to 24GB will set you back $6,300. Double that memory to 48GB and Nvidia expects that you’ll pay about $10,000 for this high-end card.

During a gold rush, Silicon Valley’s line is to always invest in picks and shovels instead of mining. Sometimes it pays just to do both.

TechCrunch has learned through a company fundraise overview that Beijing-based mining equipment seller Bitmain hit a quarterly revenue of approximately $2 billion in Q1 of this year. Despite a slump in bitcoin prices since the beginning of the year, the company is on track to become the first blockchain-focused company to achieve $10 billion in annual revenue, assuming that the cryptocurrency market doesn’t drop further.

Fortune has previously reported that the company had $1.1 billion in profits in the same quarter, a number in line with these revenue numbers, given a net margin of around 50%.

That growth is extraordinary. From the same source seen by TechCrunch, Bitmain’s revenues last year were $2.5 billion, and around $300 million just the year before that. The company reportedly raised a major venture round of $300-400 million from investors including Sequoia China, at a valuation of $12 billion.

For comparison, popular cryptocurrency wallet Coinbase made $1 billion in revenue in 2017. In addition, Nvidia, a company based out of California that also makes computer chips, generated revenues of $9.7 billion in its 2018 fiscal year (2017 calendar year). Nvidia’s revenues were $3.21 billion in Q1 fiscal year 2019 (Feb-April 2018), and historical revenue figures show a general seasonal uptrend in revenue from Q1 through Q4.

The same overview also shows that Bitmain is exploring an IPO with a valuation between $40-50 billion. That would represent a significant uptick from its most recent valuation, and is almost certainly dependent on the vitality of the broader blockchain ecosystem.

Several of Bitmain’s competitors have filed for IPOs since the beginning of 2018, but most are significantly smaller. For example, Hong Kong-based Canaan Creative filed for an IPO in May; the latest reports had it aiming to raise $1 billion to $2 billion, against 2017 revenue of $204 million.

When contacted for this story, Bitmain declined to comment on the specific numbers TechCrunch has acquired.

A Brief Overview of Bitmain

Bitmain is the world’s dominant producer of cryptocurrency mining chips known as ASICs, or application-specific integrated circuits. It was founded by Jihan Wu and Micree Zhang in 2013 and is currently headquartered in Beijing.

As the story goes, back in 2011, when Wu read Satoshi Nakamoto’s whitepaper on Bitcoin, he emptied his bank account to buy bitcoin. Back then, one bitcoin could be purchased for under a dollar. By 2013, Wu and Zhang had decided to build an ASIC chip specifically for bitcoin mining and founded Bitmain. Wu was just 28 at the time.

Cryptocurrency mining is the process of verifying new transactions and adding them to bitcoin’s immutable ledger, called the blockchain. The blockchain is formed from digital blocks in which transactions are recorded. The act of mining is essentially a brute-force search for a cryptographic hash, a unique signature if you will, that validates each new block.
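A heavily simplified sketch of that search follows. Real Bitcoin mining double-SHA-256es an 80-byte block header against a numeric target; this toy version just hashes a string plus a nonce and looks for a hex-zero prefix:

```python
# Simplified proof-of-work loop. The real protocol hashes a block
# header against a numeric difficulty target; the search idea is
# the same: try nonces until the hash satisfies the condition.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block with transactions", difficulty=4)
print(f"found nonce {nonce}")   # each extra zero makes this ~16x harder
```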

The general mining process requires massive processing power and incurs hefty energy costs. In exchange for those expenses, miners are rewarded with a number of bitcoins for each block they add to the blockchain. Currently, in the case of Bitcoin, the reward for every block discovered is 12.5 bitcoins. At a trailing average bitcoin price of approximately $6,500, that’s $81,250 up for grabs roughly every 10 minutes, or $11.7 million a day.
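Working those figures through explicitly:

```python
# The article's arithmetic: 12.5 BTC per block at ~$6,500,
# with a block found roughly every 10 minutes.
btc_price = 6_500
reward_btc = 12.5
blocks_per_day = 24 * 60 // 10           # ~144 blocks a day

per_block = reward_btc * btc_price        # $81,250 per block
per_day = per_block * blocks_per_day      # ~$11.7 million a day
print(f"${per_block:,.0f} per block, ${per_day:,.0f} per day")
```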

Bitmain has several business segments. The first and primary one is selling mining machines outfitted with Bitmain’s chips, which usually cost a few hundred to a few thousand dollars each. For example, the latest Antminer S9 model is listed at $3,319. Second, you can rent Bitmain’s mining machines to mine cryptocurrencies.

Third, you can mine bitcoin as part of one of Bitmain’s mining pools. A mining pool is a group of cryptocurrency miners who combine their computational resources over a network and split the rewards. Bitmain’s two pools, AntPool and BTC.com, collectively control more than 38 percent of the world’s Bitcoin mining power at the moment, per BTC.com.
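As a toy example of how a pool might split a block reward in proportion to contributed work (real pools use schemes such as PPS or PPLNS, and the miners and share counts here are hypothetical):

```python
# Toy proportional payout: each miner's cut of the 12.5 BTC block
# reward matches the share of work they contributed to the pool.
reward_btc = 12.5
shares = {"miner_a": 700, "miner_b": 250, "miner_c": 50}  # hypothetical

total = sum(shares.values())
payouts = {m: reward_btc * s / total for m, s in shares.items()}
for miner, btc in payouts.items():
    print(f"{miner}: {btc:.4f} BTC")
```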

The Future of Bitmain Is Closely Tied to the Crypto Market

Bitcoin mining is a massive business with influence over energy prices across the world. (LARS HAGBERG/AFP/Getty Images)

Despite its rapid rise to success, Bitmain is ultimately dependent on the price of cryptocurrencies and on overall crypto market fluctuations. In a bull crypto market, investors will give the company a different valuation multiple than in a bear market, when margins shrink both for the company and for its customers, as the economics of mining cryptocurrencies become less compelling. For example, in early 2014, Mt. Gox, the most famous Bitcoin exchange at the time, was hacked, spurring a crash in cryptocurrency prices.

Subsequently, Bitmain went through a drought as Bitcoin prices hit low points and its ASIC chips saw little demand. It was not appealing for miners to pay expensive electricity bills to mine a digital currency that was falling in value. Fast forward to now: the market has since gone through several bull and bear cycles. According to Frost & Sullivan, in 2017 Bitmain held an estimated ~67% of the market for bitcoin mining hardware and generated 60% of the network’s computing power.

Canaan Creative IPO filing; Company A is Bitmain.

One of the fundamental challenges facing any cryptocurrency mining manufacturer such as Bitmain is that the company’s valuation is largely tied to the price of cryptocurrencies. The market in the first half of 2018 has shown that no one really knows when bitcoin prices and the cryptocurrency market will start picking up again. Additionally, according to Frost & Sullivan, the ASIC-based blockchain hardware market, the segment that includes Bitmain and Canaan, will see its compound annual growth rate (CAGR) slow to around 57.7% between 2017 and 2020, down from 247.6% between 2013 and 2017.
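To put those growth rates in perspective, a quick calculation of what each CAGR compounds to over its period:

```python
# What the Frost & Sullivan CAGR figures imply: even the "slowdown"
# to 57.7% a year still roughly quadruples the market over 2017-2020.
past = (1 + 2.476) ** (2017 - 2013)      # 247.6% CAGR, 2013-2017
future = (1 + 0.577) ** (2020 - 2017)    # 57.7% CAGR, 2017-2020
print(f"2013-2017 growth: {past:.0f}x; 2017-2020 growth: {future:.1f}x")
```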

Nonetheless, it seems that Bitmain has planned well ahead to prepare for these macro risks and exposures. The company has raised significant private funding and has been expanding its business into mining new coins and creating new chips outside of cryptocurrency applications.

First, with its existing rig designs, Bitmain can broaden into every coin its chips’ hashing algorithms cover: Bitcoin and Bitcoin Cash are both SHA-256 coins, and Bitmain also builds Scrypt rigs for Litecoin. The limitation here is largely how fast it can build up more mining equipment and mining centers. The company has broadened its geographic reach by developing new mining centers. Most recently, Bitmain revealed that it will build a $500 million blockchain data center and mining facility in Texas as part of its expansion into the U.S. market, aiming for operations to begin by early 2019.

Second, Bitmain is also looking to launch its own AI chips by the end of 2018. Interestingly, the AI chips are called Sophons, after the key alien technology in Liu Cixin’s famous trilogy, The Three-Body Problem. If things go as planned, Bitmain’s Sophon units could be training neural networks in data centers around the world. Bitmain CEO Wu once said that in five years, 40% of revenues could come from AI chips.

Lastly, Bitmain has been equipping itself with cash. Lots of it, from some of the largest investors in Asia. Two months ago, China Money Network reported that Bitmain had raised a $400 million Series B round led by Sequoia Capital China, with participation from DST, GIC and Coatue, valuing the company at $12 billion. Just last week, Chinese tech conglomerate Tencent and Japan’s SoftBank, another tech giant, whose 15% stake in Uber makes it the ride-hailing app’s largest shareholder, also joined the investor base.

For Bitmain, there are many reasons to stay private, including keeping its quarterly financials out of public view while weathering the fluctuations, volatility and uncertainty of the cryptocurrency world. The con is that early employees may not get liquidity on their stock options until much later.

Wu has said that a Bitmain IPO would be a “landmark” for both the company and the cryptocurrency space. However, with private crypto financing currently so rich, it’s not a bad idea to keep raising private money and stay out of the public eye. Once Bitmain’s financials become more diversified and cryptocurrency is more widely adopted worldwide, the world may then be ready for this $10 billion-revenue blockchain company.

These days, no cloud platform is complete without GPU support; after all, there’s no other way to run modern high-performance computing and machine learning workloads. Often the focus of these offerings is on building machine learning models, but today Google is launching support for the Nvidia P4 accelerator, which focuses specifically on inferencing, to help developers run their existing models faster.

In addition to these machine learning workloads, Google Cloud users can also use the GPUs to run remote display applications that need a fast graphics card. To do this, the GPUs support Nvidia GRID, the company’s system for making server-side graphics more responsive for users who log in to remote desktops.

Because the P4s come with 8GB of GDDR5 memory and can handle up to 22 tera-operations per second for integer operations, these cards can handle pretty much anything you throw at them. And because buying one will set you back at least $2,200, if not more, renting them by the hour may not be the worst idea.

On the Google Cloud, the P4 will cost $0.60 per hour with standard pricing and $0.21 per hour if you’re comfortable with running a preemptible GPU. That’s significantly lower than Google’s prices for the P100 and V100 GPUs, though we’re talking about different use cases here, too.
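A quick back-of-envelope on that rent-versus-buy claim, using the prices quoted above (ignoring the server, power and cooling you'd also need to run a card you own):

```python
# Break-even between buying a P4 outright and renting one on
# Google Cloud at the quoted hourly rates.
card_price = 2_200                 # at least this much to buy one
on_demand = 0.60                   # $/hour, standard pricing
preemptible = 0.21                 # $/hour, preemptible pricing

print(f"vs on-demand:   {card_price / on_demand:,.0f} hours "
      f"(~{card_price / on_demand / 24:.0f} days of continuous use)")
print(f"vs preemptible: {card_price / preemptible:,.0f} hours "
      f"(~{card_price / preemptible / 24:.0f} days)")
```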

The new GPUs are now available in us-central1 (Iowa), us-east4 (Northern Virginia), northamerica-northeast1 (Montreal) and europe-west4 (Netherlands), with more regions coming soon.

“We’ve been in semi-stealth mode on this basically for the last 2-3 years,” said Elon Musk on an earnings call today. “I think it’s probably time to let the cat out of the bag…”

The cat in question: the Tesla computer. Otherwise known as “Hardware 3”, it’s a Tesla-built piece of hardware meant to be swapped into the Model S, X, and 3 to do all the number crunching required to advance those cars’ self-driving capabilities.

Tesla has thus far relied on Nvidia’s Drive platform. So why switch now?

By building things in-house, Tesla says it’s able to focus on its own needs for the sake of efficiency.

“We had the benefit […] of knowing what our neural networks look like, and what they’ll look like in the future,” said Pete Bannon, director of the Hardware 3 project. Bannon also noted that the hardware upgrade should start rolling out next year.

“The key,” adds Elon, “is to be able to run the neural network at a fundamental, bare metal level. You have to do these calculations in the circuit itself, not in some sort of emulation mode, which is how a GPU or CPU would operate. You want to do a massive amount of [calculations] with the memory right there.”

The final outcome, according to Elon, is pretty dramatic: he says that whereas Tesla’s computer vision software running on Nvidia’s hardware was handling about 200 frames per second, its specialized chip is able to crunch out 2,000 frames per second “with full redundancy and failover.”

Plus, as AI analyst James Wang points out, it gives Tesla more control over its own future:

By having its own silicon, Tesla can build for its own needs at its own pace. If they suddenly recognize something the hardware is lacking, they’re not waiting on someone else to build it. It’s by no means a trivial task — but if they can pull it off without breaking the bank (and Elon says it costs them “the same as the current hardware”), it could end up being a significant strength.

As for how they’ll get the chips into existing Teslas, Elon says: “We made it easy to switch out the computer, and that’s all that needs to be done. You take out one computer, and plug in the next. All the connectors are compatible.”

It was revealed at E3 last month that Microsoft is building a cloud gaming system. A report today calls that system Scarlett Cloud, and it’s only part of Microsoft’s next-gen Xbox strategy. It makes a lot of sense, too.

According to Thurrott.com, a noted site for all things Microsoft, the next Xbox will come in two flavors. One will be a traditional gaming console where games are processed locally. You know, like how game systems work right now. The other will be a lower-powered system that streams games from the cloud, most likely Microsoft’s Azure cloud.

This streaming system will still have some local processing power, in part to counter the latency traditionally associated with streaming games. Apparently, part of the game will run locally while the rest is streamed to the system.

The streaming Xbox will likely be available at a much lower cost than the traditional Xbox. And why not? Microsoft has long sold Xbox systems at slim profit margins, relying on sales of games and online services to make up the difference. A streaming service like the one Thurrott describes would take that model further while tapping into Microsoft’s deep understanding of cloud computing.

A few companies have tried streaming full video games. OnLive was one of the first; though successful for a time, it eventually went through a dramatic round of layoffs before a surprise sale for $4.8 million in 2012. Sony offers an extensive library of PS2, PS3 and PS4 games for streaming through its PlayStation Now service. Nvidia got into the streaming game this year with a small selection of titles on GeForce Now. But these are all side projects for those companies.

Sony and Nintendo do not have the global cloud computing platform of Microsoft, and if Microsoft’s streaming service hits, it could change the landscape and force competitors to reevaluate everything.

Nvidia will power artificial intelligence technology built into Volkswagen’s future vehicles, including the new I.D. Buzz, the automaker’s all-electric retro-inspired camper van concept. The partnership between the two companies also extends to future vehicles, and will initially focus on so-called “Intelligent Co-Pilot” features, including using sensor data to make driving easier, safer and…
Nvidia revealed a lot of news about its Xavier autonomous machine intelligence processors at this year’s CES show in Las Vegas. The first production samples of Xavier, unveiled last year, are now shipping out to customers, and Nvidia also announced three new variants of its DRIVE AI platform, which are based around Xavier SoCs. The new DRIVE AI offerings include one focused…