
Nvidia is bringing Fortnite back to iPhones and iPads, according to a report from the BBC.

The British news service is reporting that Nvidia has developed a version of its GeForce cloud gaming service that runs on Safari.

The development means that Fortnite gamers can play the Epic Games title from servers run by Nvidia. What’s not clear is whether the cloud gaming service will mean significant lag times for players that could affect their gameplay.

Apple customers have been unable to download new versions of Epic Games’ marquee title after the North Carolina-based company circumvented Apple’s rules around in-game payments.

Revenues and rules are at the center of the conflict between Epic and Apple. Epic had developed an in-game marketplace where transactions were not subject to the 30% commission that Apple charges on transactions conducted through its platform.

The maneuver was a clear violation of Apple’s terms of service, but Epic is arguing that the rules themselves are unfair and an example of Apple’s monopolistic hold over distribution of applications on its platform.

The ongoing legal dispute won’t even see the inside of a courtroom until May and it could be years before the lawsuit is resolved.

That’s going to create a lot of hassles for the nearly 116 million iOS Fortnite players, especially for the 73 million who only use Apple products to access the game, according to the BBC report.

Unlike Android, Apple does not allow games or other apps to be loaded onto its phones or tablets via app stores other than its own.

Nvidia already offers its GeForce gaming service for Mac, Windows, Android and Chromebook computers, but the new version will be available on Apple mobile devices as well, according to the BBC report.

If it moves ahead, Nvidia’s cloud gaming service would be the only one on the market to support iOS users. Neither Amazon’s Luna cloud-gaming platform nor Google’s Stadia service carries Fortnite.

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4, these new instances arrive a decade after AWS launched its first Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of NVIDIA’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.


For now, there is only one size available: the p4d.24xlarge, in AWS parlance. Its eight A100 GPUs are connected over NVIDIA’s NVLink communication interface and support the company’s GPUDirect interface as well.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s perhaps no surprise that the on-demand price is $32.77 per hour (though that price goes down to less than $20/hour for 1-year reserved instances and to $11.57/hour for three-year reserved ones).
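If you want to try one, a minimal boto3 sketch for launching an On-Demand p4d.24xlarge looks something like the following; the AMI ID and key pair name are placeholders you’d replace with your own (a Deep Learning AMI in your region is a common choice), and your account needs sufficient quota for P4 instances.

```python
import boto3

# Minimal sketch: launch a single p4d.24xlarge On-Demand instance.
# The AMI ID and key pair below are placeholders; substitute your own.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="p4d.24xlarge",      # 8x A100, 96 vCPUs, 1.1 TB RAM
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```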


On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads on what is essentially a supercomputer-scale machine. Given the price, you’re not likely to spin up one of these clusters to train a model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

Nvidia is going to power the world’s fastest AI supercomputer, a new system dubbed ‘Leonardo’ that’s being built by CINECA, the Italian multi-university consortium and global supercomputing leader. The Leonardo system will offer as much as 10 exaflops of FP16 AI performance, and will be made up of more than 14,000 Nvidia Ampere-based GPUs once completed.

Leonardo will be one of four new supercomputers supported by a cross-European effort to advance high-performance computing capabilities in the region, which will eventually offer advanced AI capabilities for processing applications across both science and industry. Nvidia will also be supplying its Mellanox HDR InfiniBand networks to the project to link the clusters with low-latency, high-bandwidth connections.

The other computers in the group include MeluXina in Luxembourg and Vega in Slovenia, as well as a new supercomputer coming online in the Czech Republic. The pan-European consortium also plans four more supercomputers for Bulgaria, Finland, Portugal and Spain, though those will follow later and specifics around their performance and locations aren’t yet available.

Applications planned for Leonardo and the other supercomputers include analyzing genomes and discovering new therapeutic pathways; tackling data from multiple different sources for space exploration and extraterrestrial planetary research; and modeling weather patterns, including extreme weather events.

Nvidia is in the process of acquiring chip designer Arm for $40 billion. Coincidentally, both companies are also holding their respective developer conferences this week. After he finished his keynote at the Arm DevSummit, I sat down with Arm CEO Simon Segars to talk about the acquisition and what it means for the company.

Segars noted that the two companies started talking in earnest around May 2020, though at first, only a small group of executives was involved. Nvidia, he said, was really the first suitor to make a real play for the company — with the exception of SoftBank, of course, which took Arm private back in 2016 — and combining the two companies, he believes, simply makes a lot of sense at this point in time.

“They’ve had a meteoric rise. They’ve been building up to that,” Segars said. “So it just made a lot of sense with where they are at, where we are at and thinking about the future of AI and how it’s going to go everywhere and how that necessitates much more sophisticated hardware — and a much more sophisticated software environment on which developers can build products. The combination of the two makes a lot of sense in this moment.”

The data center market, where Nvidia, too, is already a major player, is also an area where Arm has heavily focused in recent years. And while it goes up against the likes of Intel, Segars is optimistic. “We’re not in it to be a bit player,” he said. “Our goal is to get a material market share and I think the proof to the pudding is there.”

He also expects that in a few years, we’ll see Arm-powered servers available on all of the major clouds. Right now, AWS is ahead in this game with its custom-built Graviton processors. Microsoft and Google do not currently offer Arm-based servers.

“With each passing day, more and more of the software infrastructure that’s required for the cloud is getting ported over and optimized for Arm. So it becomes a more and more compelling proposition for sure,” he said, and cited both performance and energy efficiency as reasons for cloud providers to use Arm chips.

Another interesting aspect of the deal is that we may just see Arm sell some of Nvidia’s IP as well. That would be a big change — and a first — for Nvidia, but Segars believes it makes a lot of sense to do so.

“It may be that there is something in the portfolio of Nvidia that they currently sell as a chip that we may look at and go, ‘you know, what if we package that up as an IP product, without modifying it? There’s a market for that.’ Or it may be that there’s a thing in here where if we take that and combine it with something else that we were doing, we can make a better product or expand the market for the technology. I think it’s going to be more of the latter than it is the former because we design all our products to be delivered as IP.”

And while he acknowledged that Nvidia and Arm still face some regulatory hurdles, he believes the deal will be pro-competitive in the end — and that the regulators will see it the same way.

He does not believe, by the way, that the company will face any issues with Chinese companies not being able to license Arm’s designs because of export restrictions, something a lot of people were worried about when the deal was first announced.

“Export control of a product is all about where was it designed and who designed it,” he said. “And of course, just because your parent company changes, doesn’t change those fundamental properties of the underlying product. So we analyze all our products and look at how much U.S. content is in there, to what extent are our products subject to U.S. export control, U.K. export control, other export control regimes? It’s a full-time piece of work to make sure we stay on top of that.”

Here are some excerpts from our 30-minute conversation:

TechCrunch: Walk me through how that deal came about, actually, kind of what was the timeline for you?

Simon Segars: I think probably around May, June time was when it really kicked off. We started having some early discussions. And then, as these things progress, you suddenly kind of hit the ‘Okay, now let’s go.’ We signed a sort of first agreement to actually go into due diligence and then it really took off. It went from a few meetings, a bit of negotiation, to suddenly heads down and a broader set of people — but still a relatively small number of people involved, answering questions. We started doing due diligence documents, just the mountain of stuff that you go through and you end up with a document. [Segars shows a print-out of the contract, which is about the size of two phone books.]

You must have had suitors before this. What made you decide to go ahead with this deal this time around?

Well, to be honest, in Arm’s history, there’s been a lot of rumors about people wanting to acquire Arm, but really until SoftBank in 2016, nobody ever got serious. I can’t think of a case where somebody actually said, ‘come on, we want to try and negotiate a deal here.’ And so it’s been four years under SoftBank’s ownership and that’s been really good because we’ve been able to do what we said we were going to do around investing much more aggressively in the technology. We’ve had a relationship with Nvidia for a long time. [Rene Haas, Arm’s president of its Intellectual Property Group, who previously worked at Nvidia] has had a relationship with [Nvidia CEO Jensen Huang] for a long time. They’ve had a meteoric rise. They’ve been building up to that. So it just made a lot of sense with where they are at, where we are at and thinking about the future of AI and how it’s going to go everywhere and how that necessitates much more sophisticated hardware — and a much more sophisticated software environment on which developers can build products. The combination of the two makes a lot of sense in this moment.

How does it change the trajectory you were on before for Arm?

As Nvidia continues to work through its deal to acquire ARM for $40 billion from SoftBank, the computing giant is making another big move to lay out its commitment to investing in UK technology. Today the company announced plans to develop Cambridge-1, a new AI supercomputer that will be used for research in the country’s health industry. It is the first supercomputer built by Nvidia specifically for external research access, the company said.

Nvidia said it is already working with GSK, AstraZeneca, London hospitals Guy’s and St Thomas’ NHS Foundation Trust, King’s College London and Oxford Nanopore to use the Cambridge-1. The supercomputer is due to come online by the end of the year and will be the company’s second supercomputer in the country. The first is already in development at the company’s AI Center of Excellence in Cambridge, and the plan is to add more supercomputers over time.

The growing role of AI has underscored an interesting crossroads in medical research. On one hand, leading researchers all acknowledge the role it will play in their work. On the other, none of them, nor their institutions, have the resources to meet that demand on their own. That’s driving them all to get involved much more deeply with big tech companies like Google, Microsoft and, in this case, Nvidia, to carry out work.

Alongside the supercomputer news, Nvidia is making a second announcement in the area of healthcare in the UK: it has inked a partnership with GSK, which has established an AI hub in London, to build AI-based computational processes that will be used in drug and vaccine discovery — an especially timely piece of news, given that we are in a global pandemic and all drug makers and researchers are on the hunt to understand more about, and build vaccines for, Covid-19.

The news is coinciding with Nvidia’s industry event, the GPU Technology Conference.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” Jensen Huang, founder and CEO of NVIDIA, will say in his keynote at the event. “The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

The company plans to dedicate Cambridge-1 resources in four areas, it said: industry research, in particular joint research on projects that exceed the resources of any single institution; university-granted compute time; health-focused AI startups; and education for future AI practitioners. It’s already building specific applications in areas, like the drug discovery work it’s doing with GSK, that will be run on the machine.

The Cambridge-1 will be built on Nvidia’s DGX SuperPOD system, which delivers 400 petaflops of AI performance and 8 petaflops of Linpack performance. Nvidia said this will rank it as the 29th fastest supercomputer in the world.
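Nvidia’s announcement doesn’t spell out the configuration behind those numbers here, but the headline AI figure is roughly consistent with the widely reported build-out of 80 DGX A100 nodes. A back-of-the-envelope sketch, assuming that node count and Nvidia’s peak FP16 tensor throughput of 624 TFLOPS per A100 with structured sparsity:

```python
# Back-of-the-envelope check of Cambridge-1's "400 petaflops of AI performance".
# Assumptions: 80 DGX A100 nodes (widely reported, not stated above) with
# 8 GPUs each, and Nvidia's peak sparse-FP16 figure of 624 TFLOPS per A100.
nodes = 80
gpus_per_node = 8
tflops_per_gpu = 624

total_gpus = nodes * gpus_per_node                    # 640 GPUs
total_petaflops = total_gpus * tflops_per_gpu / 1000
print(total_petaflops)  # 399.36, i.e. roughly the 400 PF headline figure
```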

“Number 29” doesn’t sound very groundbreaking, but there are other reasons why the announcement is significant.

For starters, it underscores how the supercomputing market — while still not a mass-market enterprise — is increasingly developing more focus around specific areas of research and industries. In this case, it underscores how health research has become more complex, and how applications of artificial intelligence have spurred that complexity but, paired with stronger computing power, also provide a better route — some might say one of the only viable routes in the most complex of cases — to medical breakthroughs and discoveries.

It’s also notable that the effort is being forged in the UK. Nvidia’s deal to buy ARM has seen some resistance in the market — with one group leading a campaign to stop the sale and take ARM independent — but this latest announcement underscores that the company is already involved pretty deeply in the UK market, bolstering Nvidia’s case to double down even further. (Yes, chip reference designs and building supercomputers are different enterprises, but the argument for Nvidia is one of commitment and presence.)

“AI and machine learning are like a new microscope that will help scientists to see things that they couldn’t see otherwise,” said Dr. Hal Barron, Chief Scientific Officer and President, R&D, GSK, in a statement. “NVIDIA’s investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry’s greatest challenges and help us continue to deliver transformational medicines and vaccines to patients. Together with GSK’s new AI lab in London, I am delighted that these advanced technologies will now be available to help the U.K.’s outstanding scientists.”

“The use of big data, supercomputing and artificial intelligence have the potential to transform research and development; from target identification through clinical research and all the way to the launch of new medicines,” added James Weatherall, PhD, Head of Data Science and AI, AstraZeneca, in his statement.

“Recent advances in AI have seen increasingly powerful models being used for complex tasks such as image recognition and natural language understanding,” said Sebastien Ourselin, Head, School of Biomedical Engineering & Imaging Sciences at King’s College London. “These models have achieved previously unimaginable performance by using an unprecedented scale of computational power, amassing millions of GPU hours per model. Through this partnership, for the first time, such a scale of computational power will be available to healthcare research – it will be truly transformational for patient health and treatment pathways.”

Dr. Ian Abbs, Chief Executive Officer & Chief Medical Director of Guy’s and St Thomas’ NHS Foundation Trust, said: “If AI is to be deployed at scale for patient care, then accuracy, robustness and safety are of paramount importance. We need to ensure AI researchers have access to the largest and most comprehensive datasets that the NHS has to offer, our clinical expertise, and the required computational infrastructure to make sense of the data. This approach is not only necessary, but also the only ethical way to deliver AI in healthcare – more advanced AI means better care for our patients.”

“Compact AI has enabled real-time sequencing in the palm of your hand, and AI supercomputers are enabling new scientific discoveries in large-scale genomic datasets,” added Gordon Sanghera, CEO, Oxford Nanopore Technologies. “These complementary innovations in data analysis support a wealth of impactful science in the UK, and critically, support our goal of bringing genomic analysis to anyone, anywhere.”


ARM Holdings, the UK semiconductor company, made history for the second time today, becoming the country’s biggest tech exit when Nvidia announced over the weekend that it would buy it from SoftBank for $40 billion in an all-stock deal. (ARM’s first appearance in the record books? When SoftBank announced in 2016 that it would acquire the company for $32 billion.)

But before you can say advanced reduced instruction set computing machine, the deal has hit a minor hitch. One of ARM’s co-founders has started a campaign to get the UK government to intervene in the deal, or else see it called off in favor of a government-backed public listing.

Hermann Hauser, who started the company in 1990 along with a host of others as a spin-out of Acorn Computers, has penned an open letter to the UK’s Prime Minister Boris Johnson in which he says that he is “extremely concerned” about the deal and how it will impact jobs in the country, ARM’s business model, and the future of the country’s economic sovereignty independent of the US and US interests.

Hauser has also created a site to gather public support — savearm.co.uk — and to that end has also started to collect signatures from business figures and others.

He’s calling on the government to intervene, or at least to create legally binding provisions tied to approving the deal: guaranteeing jobs, ensuring that Nvidia does not get preferential treatment over other licensees, and securing an exemption from CFIUS regulation “so that UK companies are guaranteed unfettered access to our own microprocessor technology.”

The letter and the general wave of backlash coming out in the wake of last night’s acquisition news underscore interesting — and, you might argue, in the long term bigger — themes about technology in the UK, or even more generally about building technology giants outside of the US or China.

In short, the questions being raised are why ARM can’t continue to build itself as an independent company, why it opted for the SoftBank acquisition in the first place, and why the UK doesn’t do more to support the building of its own homegrown tech giants.

Those questions are more high-level. More immediately, Hauser’s position is that if the company is acquired by a US entity, any future sales it makes will also be subject to US export regulations — a key point since so many of its dealings are with Chinese companies and companies that in turn do business with China, all of whom would need to comply with CFIUS regulations, he notes.

“This puts Britain in the invidious position that the decision about who ARM is allowed to sell to will be made in the White House and not in Downing Street,” he writes. “Sovereignty used to be mainly a geographic issue, but now economic sovereignty is equally important. Surrendering UK’s most powerful trade weapon to the US is making Britain a US vassal state.” (Bonus points for the Nvidia/invidious pun, Hermann.)

No doubt prepared for critics to slam the deal, Nvidia CEO and co-founder Jensen Huang and Arm CEO Simon Segars held a press conference earlier today in which they both laid out, in many words, a commitment to keeping ARM’s business model and independence intact.

“This will drive innovation for customers of both companies,” said Huang at one point, adding that Nvidia “will maintain ARM’s open licensing model and customer neutrality…We love ARM’s business model. In fact, we intend to expand ARM’s licensing portfolio with access to Nvidia’s technology. Both our ecosystems will be enriched by this combination.”

Hauser’s response? “Do not believe any statements which are not legally binding.”

On the employment side, Hauser’s letter notes that ARM employs thousands of people and its ecosystem of partners stretches across Cambridge (where it is headquartered), Manchester, Belfast, Glasgow, Sheffield and Warwick. “When the headquarters move to the US this will inevitably lead to the loss of jobs and influence in the UK as we have seen with the Cadbury takeover by Kraft,” he writes.

ARM’s business model, meanwhile, has been built on the concept of the company being a “Switzerland” of the semiconductor industry, supplying reference designs to a host of licensees, many of whom might compete against each other, and who also compete against Nvidia. His belief is that giving Nvidia control of the company will inevitably make those business relationships unsustainable.

But back to the biggest issue of all, at least as it is framed to appeal to the UK government: it is ARM’s position as a company independent of US interests that is of the highest concern.

ARM, he points out, is the only UK technology company with a dominant position in mobile phones, its microprocessors found in a vast array of devices and commanding some 95% market share. That helps the company stand distinct from the likes of the “FAANG” group of giant companies Facebook, Apple, Amazon, Netflix and Google, which dominate in their own respective areas (ARM does not compete against any of them, nor necessarily work with them all).

“As the American president has weaponised technology dominance in his trade war with China, the UK will become collateral damage unless it has its own trade weapons to bargain with. ARM powers the smartphones of Apple, Samsung, Sony, Huawei and practically every other brand in the world and therefore can exert influence on all of them.”

Hauser’s is not the first response from a founder critical of how ARM’s business has been passed first to one buyer, and then another.

Back in August, when the rumors of Nvidia’s interest first began to surface in the wake of SoftBank’s disastrous financial results, another co-founder and the ex-president of the company, Tudor Brown, spoke out against SoftBank’s handling of the company, and the inherent problems of having Nvidia buy it as a “solution” to that.

As we wrote at the time of SoftBank’s deal, SoftBank wanted to use the acquisition to spearhead a big move into Internet of Things technology — essentially use ARM’s business model and relationships with hardware makers to secure a new wave of investment in IP around semiconductors for connected devices, rather than doubling down on the areas that have become “hot” in processors like AI and implementations in autonomous systems.

That turned out to be a disastrous move, since IoT has not been nearly as big of a business opportunity as everyone thought it would be — or at least, the IoT business has not developed in anything like the timescales or trajectories people had predicted it would.

Brown’s take on Nvidia is much like Hauser’s: selling to a company that essentially competes against your customers will make it very tough, if not impossible, to maintain independence and assure everyone that they’re getting equal access to your products.

Of course, you could argue that Nvidia wouldn’t have acquired the company for $40 billion just to run it into the ground. But with that deal in stock, and Nvidia playing the long game, perhaps it wins either way in the end?

We’ve asked Nvidia for a response to the Save Arm initiative and will update as we learn more.

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast where we unpack the numbers behind the headlines.

This is Equity Monday, our weekly kickoff that tracks the latest big news, chats about the coming week, digs into some recent funding rounds and mulls over a larger theme or narrative from the private markets. You can follow the show on Twitter here and myself here — and don’t forget to check out last Friday’s episode.

What a weekend behind us, and what a week ahead. Disrupt kicks off today, so the TechCrunch crew is busy as heck getting all the final touches put on. Snag a ticket here and we will see you soon.

On the podcast this morning:

Ok, that’s all we have time for today. See you at Disrupt in a few hours!

Equity drops every Monday at 7:00 a.m. PT and Thursday afternoon as fast as we can get it out, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

After weeks of on-and-off speculation, Nvidia this evening confirmed that it intends to buy chip design giant Arm Holdings for a total of up to $40 billion from existing owner SoftBank, which bought the company for $32 billion in 2016. The boards of all three parties have approved the outline of the deal.

The deal has a couple of intricacies. SoftBank will immediately receive $2 billion in cash for signing the deal. From there, it will receive another $10 billion in cash and $21.5 billion in Nvidia stock at closing. That stake will likely be just a bit shy of 10% of the company. In addition, SoftBank is slated to earn $5 billion in a mix of cash and stock as a performance-based earn-out. Conditions and timing for that earn-out were not disclosed.

That $40 billion purchase price also includes $1.5 billion in equity compensation for existing Arm employees, who currently number more than 6,000 according to the company. Altogether, then, SoftBank is looking at a $38.5 billion payout, assuming its earn-out comes through.
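As a quick sanity check, the components as reported do sum to the headline figure:

```python
# Components of the reported deal, in billions of dollars (all from above).
signing_cash = 2.0
closing_cash = 10.0
closing_stock = 21.5
earn_out = 5.0          # performance-based, mix of cash and stock
employee_equity = 1.5   # equity compensation for existing Arm employees

softbank_payout = signing_cash + closing_cash + closing_stock + earn_out
print(softbank_payout)                    # 38.5 -- SoftBank's maximum payout
print(softbank_payout + employee_equity)  # 40.0 -- the headline purchase price
```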

Nvidia is buying all of Arm’s product groups except for its Internet of Things division, one of several areas where Arm has striven to expand in recent years as it attempts to grow beyond its core mobile chip design business.

Owing to the complex ownership structure and the multiple countries involved, closing is expected to take one and a half years, and will require regulatory and antitrust approvals in the U.S., the United Kingdom (where Arm is headquartered), China and the European Union.

Nvidia’s statement made clear that it intends to double down on the United Kingdom as a core part of its engineering efforts, a positioning that almost certainly is designed to placate concerns emanating from Downing Street about the competitiveness of the British economy in tech services following the country’s departure from the EU as it completes Brexit later this year.

Nvidia said that Arm’s offices in Cambridge will expand, and that the company intends to “[establish] a new global center of excellence in AI research at Arm’s Cambridge campus.”

The deal will provide some immediate cash relief to SoftBank, which has been working hard to clean up its balance sheet after a string of high-profile losses. The heavy Nvidia stock component of the deal will see SoftBank returning as a major investor in the company. The Japanese telco had previously held a 4.9% stake in Nvidia in its Vision Fund, which was disposed of in 2019 for a return of $3.3 billion.

While the big deal we have been tracking the past few weeks has been TikTok, there was another massive deal under negotiation that mirrors some of the international tech dynamics that have plagued the consumer social app’s sale.

Arm Holdings, which is the most important designer of processor chips in smartphones and increasingly other areas, has been quietly shopped around as SoftBank works to shed its investments and raise additional capital to placate activist investors like Elliott Management. The Japanese telco conglomerate bought Arm outright back in 2016 for $32 billion.

Now, those talks look like they are coming toward a conclusion. The Wall Street Journal first reported that SoftBank is close to locking in a sale to Nvidia for cash and stock that would value Arm at $40 billion. The Financial Times this afternoon further confirmed the outlines of the deal, which could be announced as early as Monday.

A couple of thoughts while we wait for official confirmation from Nvidia, Arm, and SoftBank.

First, Arm has struggled to turn its wildly successful chip designs — which today power billions of new chips a year — into a fast-growth company. As we discussed back in May, the company has ploddingly entered new growth markets, and while it has had some notable brand successes including Apple announcing that Arm-powered processor designs would be coming to the company’s iconic Macintosh lineup, those wins haven’t translated into significant profits.

SoftBank took a wild swing back in 2016 buying the company. If $40 billion is indeed the price, it’s a 25% gain in roughly four years. Given SoftBank’s recent notorious investing track record, that actually looks stellar, but of course, there was a huge opportunity cost for the company to buy such a pricy asset. Nvidia, which SoftBank’s Vision Fund bought a public stake in, has seen its stock price zoom more than 16x in that time frame, driven by AI and blockchain applications.

Second, assuming a deal is consummated, it’s a somewhat quiet denouement for one of the truly category-defining companies to have emanated out of the United Kingdom. The chip designer, which is based in Cambridge and has deep ties to the leading British university there, has been seen as a symbol of Britain’s long legacy at the frontiers of computer science, a legacy in which Alan Turing played a key role through his foundational work on computability.

Arm’s sale comes just as the UK government gears up for a fight with the European Union over its industrial policy, and specifically deeper funding for precisely the kinds of technologies that Arm was developing. Arm of course isn’t likely to migrate its workforce, but its ownership by an American semiconductor giant versus a Japanese holding company will likely end its relatively independent operations.

Third and finally, the deal would give Nvidia a dominant position in the semiconductor market, bringing together the company’s strength in graphics and AI processing workflows along with Arm’s underlying chip designs. While the company would not be fully vertically integrated, the combination would intensify Nvidia’s place as one of the major centers of gravity in chips.

It’s also a symbol of how far Intel has fallen behind its once diminutive peer. Intel’s market cap is about $210 billion, compared to Nvidia’s $300 billion. Intel’s stock is practically a straight line compared to Nvidia’s rapid growth the past few years, and this news isn’t likely to be well-received in Intel HQ.

Given the international politics involved and the sensitivity about the company, any deal would have to go through customary antitrust reviews in multiple countries, as well as potential national security reviews in the UK.

For SoftBank, it’s another sign of the company’s retrenchment in the face of extreme losses. But at least for now, it has a likely win on its hands.

For the last two weeks, I’ve been flying around the world in a preview of Microsoft’s new Flight Simulator. Without a doubt, it’s the most beautiful flight simulator yet, and it’ll make you want to fly low and slow over your favorite cities because — if you pick the right one — every street and house will be there in more detail than you’ve ever seen in a game. Weather effects, day and night cycles, plane models — it all looks amazing. You can’t start it up and not fawn over the graphics.

But the new Flight Simulator is also still very much a work in progress, even just a few weeks before the scheduled launch date on August 18. It’s officially still in beta, so there’s still time to fix at least some of the issues I list below. Because Microsoft and Asobo Studio, which was responsible for the development of the simulator, are using Microsoft’s AI tech in Azure to automatically generate much of the scenery based on Microsoft’s Bing Maps data, you’ll find a lot of weirdness in the world. There are taxiway lights in the middle of runways, giant hangars and crew buses at small private fields, cars randomly driving across airports, giant trees growing everywhere (while palms often look like giant sticks), bridges that are either under water or big blocks of black over a river — and there are a lot of sunken boats, too.

When the system works well, it’s absolutely amazing. Cities like Barcelona, Berlin, San Francisco, Seattle, New York and others that are rendered using Microsoft’s photogrammetry method look great — including and maybe especially at night.


The rendering engine on my i7-9700K with an Nvidia 2070 Super graphics card never let the frame rate drop under 30 frames per second (which is perfectly fine for a flight simulator) and usually hovered well over 40, all with the graphics settings pushed up to the maximum at 2K resolution.

When things don’t work, though, the effect is stark because it’s so obvious. Some cities, like Las Vegas, look like they suffered some kind of catastrophe, as if the city was abandoned and nature took over (which in the case of the Vegas Strip doesn’t sound like such a bad thing, to be honest).


Thankfully, all of this is something that Microsoft and Asobo can fix. They’ll just need to adjust their algorithms, and because a lot of the data is streamed, the updates should be virtually automatic. The fact that they haven’t done so yet is a bit of a surprise.


Chances are you’ll want to fly over your house the day you get Flight Simulator. If you live in the right city (and the right part of that city), you’ll likely get lucky and actually see your house with its individual texture. But for some cities, including London, for example, the game only shows standard textures, and while Microsoft does a good job of matching the outlines of buildings in cities where it doesn’t do photogrammetry, it’s odd that London and Amsterdam aren’t on that list (though London apparently features a couple of wind turbines in the city center now), while Münster, Germany is.

Once you get to altitude, all of those problems obviously go away (or at least you won’t see them). But given the graphics, you’ll want to spend a lot of time at 2,000 feet or below.


What really struck me in playing the game in its current state is how those graphical inconsistencies set the standard for the rest of the experience. The team says its focus is 100% on making the simulator as realistic as possible, but then the virtual air traffic control often doesn’t use standard phraseology, for example, or fails to hand you off to the right departure control when you leave a major airport. The airplane models look great and feel pretty close to real (at least the ones I’ve flown myself), but some currently show the wrong airspeed. Some planes use modern glass cockpits with the Garmin 1000 and G3X, but those still feel severely limited.

But let me be clear here. Despite all of this, even in its beta state, Flight Simulator is a technical marvel and it will only get better over time.


Let’s walk through the user experience a bit. The install on PC (the Xbox version will come at some point in the future) downloads a good 90GB so that you can play offline as well. The install process asks you if you are OK with streaming data, too, and that can quickly add up. After I reinstalled the game and did a few flights for screenshots, it had already downloaded about 10GB — it adds up quickly and is something you should be aware of if you’re on a metered connection.


Once past the long install, you’ll be greeted by a menu screen that lets you start a new flight, go for one of the landing challenges or other activities the team has set up (they are really proud of their Courchevel scenery) or go through the game’s flight training program.


That training section walks you through eight activities that will help you get the basics of flying a Cessna 152. Most take fewer than 10 minutes, and you’ll get a bit of a debrief afterward, but I’m not sure it’s enough to keep a novice from getting frustrated quickly (while more advanced players will just skip this section altogether anyway).

I mostly spent my time flying the small general aviation planes in the sim, but if you prefer a Boeing 747 or Airbus A320neo, you get that option, too, as well as some turboprops and business jets. I’ll spend some more time with those before the official launch. All of the planes are beautifully detailed inside and out, and except for a few bugs, everything works as expected.

To actually start playing, you’ll head for the world map and choose where you want to start your flight. What’s nice here is that you can pick any spot on your map, not just airports. That makes it easy to start flying over a city, for example. As you zoom into the map, you can see airports and landmarks (where the landmarks are either real sights like Germany’s Neuschwanstein Castle or cities that have photogrammetry data). If a town doesn’t have photogrammetry data, it will not appear on the map.

As of now, the flight planning features are pretty basic. For visual flights, you can go direct or VOR to VOR, and that’s it. For IFR flights, you choose low or high-altitude airways. You can’t really adjust any of these, just accept what the simulator gives you. That’s not really how flight planning works (at the very least you would want to take the local weather into account), so it would be nice if you could customize your route a bit more. Microsoft partnered with NavBlue for airspace data, though the built-in maps don’t do much with this data and don’t even show you the vertical boundaries of the airspace you are in.


It’s always hard to compare the plane models and how they react to the real thing. Best I can tell, at least the single-engine Cessnas that I’m familiar with mostly handle in the same way I would expect them to in reality. Rudder controls feel a bit overly sensitive by default, but that’s relatively easy to adjust. I only played with a HOTAS-style joystick and rudder setup. I wouldn’t recommend playing with a mouse and keyboard, but your mileage may vary.

Live traffic works well, but none of the general aviation traffic around my local airports seems to show up, even though Microsoft partner FlightAware shows it.

As for the real/AI traffic in general, the sim does a pretty good job managing that. In the beta, you won’t really see the liveries of any real airlines yet, at least for the most part; I spotted only the occasional United plane in the latest builds. Given some of Microsoft’s own videos, more are coming soon. Except for the built-in models you can fly in the sim, Flight Simulator is still missing a library of other airplane models for AI traffic, though again, I would assume that’s in the works, too.


We’re three weeks out from launch. I would expect the team to be able to fix many of these issues and we’ll revisit all of them for our final review. My frustration with the current state of the game is that it’s so often so close to perfect that when it falls short of that, it’s especially jarring because it yanks you out of the experience.

Don’t get me wrong, though: flying in FS2020 is already a great experience. Even when there’s no photogrammetry, cities and villages look great once you get over 3,000 feet or so. The weather and cloud simulation — in real time — beats any add-on for today’s flight simulators. Airports still need work, but having cars drive around, and flaggers walking around planes that are pushing back, helps make the world feel more alive. Wind affects the waves on lakes and oceans (and windsocks at airports). This is truly a next-generation flight simulator.


Microsoft and Asobo have to walk a fine line between making Flight Simulator the sim that hardcore fans want and an accessible game that brings in new players. I’ve played every version of Flight Simulator since the ’90s, so getting started took exactly zero time. My sense is that new players simply looking for a good time may feel a bit lost at first, despite Microsoft adding landing challenges and other more gamified elements to the sim. In a press briefing, the Asobo team regularly stressed that it aimed for realism over anything else — and I’m perfectly OK with that. We’ll have to see if that translates to being a fun experience for casual players, too.

Apple recently announced that it will transition its Mac line from Intel processors to its own ARM-based Apple Silicon. That process is meant to begin with hardware to be announced later this year and, per Apple’s stated expectations, to last two years. While new Intel-powered Macs will be released and sold leading up to that time, the writing is on the wall for Intel-based Apple hardware. Existing Macs with Intel chips will still be useful long after the transition is complete, however, and software porting means they might even support more of your existing favorite applications for the foreseeable future, which is why adding an external GPU (eGPU) likely makes more sense now than ever.

Apple added support for eGPUs a few years ago, made possible by the addition of Thunderbolt 3 ports on Macs. These have very high throughput, making it possible for a GPU in an external enclosure to offer almost as much graphics processing capability as one connected internally. But while Apple has directly sold a few eGPUs, and natively supports AMD graphics cards without any special driver gymnastics required, it’s still mostly a niche category. For anybody looking to extend the life of their existing Mac for a few more years to wait and see how the Apple Silicon transition shakes out, though, updates from Apple and key software partners make an eGPU a great choice.

Here are a couple of Thunderbolt 3 eGPU enclosure options for those considering this upgrade path, and the relative merits of each. Keep in mind that for each of these, the pricing is for the enclosure alone – you’ll have to add your own graphics card to make it work, but the good news is that you can continually upgrade and replace these cards to give your Mac even more of a boost as graphics tech improves.

Razer Core X Chroma ($399)

The Razer Core X Chroma is Razer’s top-of-the-line GPU enclosure, and it supports full-sized PCIe graphics cards up to three slots wide, up to a maximum of 500W. The integrated power supply provides 700W, which enables 100W output for charging any connected laptop, and on the back of the eGPU you’ll find four extra high-speed USB ports, as well as a Gigabit Ethernet port for networking. The Chroma version also comes with tunable LED lighting for additional user customization options. Razer provided me with a Core X Chroma, an AMD Radeon RX 5700 XT and an Nvidia GeForce RTX 2080 Ti for the purposes of testing across both Mac and PC systems.
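Those ratings imply a straightforward power budget. Here’s a rough sketch using the figures above; the idea that the remainder feeds the USB hub, Ethernet and lighting is my inference, not a published Razer spec:

```python
# Rough power budget for the Core X Chroma, from the rated figures above.
psu_watts = 700              # integrated power supply
gpu_max_watts = 500          # maximum supported graphics card
laptop_charging_watts = 100  # power delivery to a connected laptop

headroom = psu_watts - gpu_max_watts - laptop_charging_watts
print(headroom)  # 100 W left over (presumably for USB hub, Ethernet, fans, LEDs)
```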

This isn’t the smallest enclosure out there, but that’s in part because it supports three-slot cards, which is over and above a lot of the competition. It’s also relatively low and long, making it a great option to tuck away under a desk, or potentially even hold in an under-desk mount (with enough clearance for the fan exhaust to work properly). It’s quiet in operation, and only really makes any audible noise when the GPU held within is actually working for compatible software.

Most of my testing focused on using the Razer Core X Chroma with a Mac, and for that use you’ll need to stick with AMD’s GPUs, since Apple doesn’t natively support Nvidia graphics cards in macOS. The AMD Radeon RX 5700 XT is a beast, however, and delivers plenty of horsepower for improving activities like photo and video editing, as well as giving you additional display output options and just generally providing additional resources for the system to take advantage of.

Thanks to Adobe’s work on adding eGPU support to its Lightroom, Photoshop and Premiere products, you can get a lot of improvement in overall rendering and output in all those applications, particularly if you’re on a Mac that only has an integrated GPU. Likewise with Apple’s own applications, including Final Cut Pro X.

In my experience, using the eGPU greatly improved the export function of both Adobe and Apple’s pro video editing software, cutting export times by at least half. And working in Lightroom was in general much faster and more responsive, with significantly reduced rendering times for thumbnails and previews, which ordinarily take quite a while on my 2018 Mac mini.

Apple also uses eGPUs to accelerate the performance of any apps that use Metal, OpenGL and OpenCL, which is why you may notice a subtle general improvement in system performance when you plug one in. It’s hard to quantify this effect, but overall system performance felt less sluggish and more responsive, especially when running a large number of apps simultaneously.

The Razer Core X Chroma’s extra expansion slots, quiet operation and max power delivery all make it the top choice if you’re looking for an enclosure to handle everything you need, and it can provide big bumps to Macs and Windows PCs alike – and both interchangeably, if you happen to use both platforms.

Akitio Node Titan ($329)

If you’re looking to spend a little less money, and get an enclosure that’s a bit more barebones but that still offers excellent performance, check out the Akitio Node Titan. Enclosure maker Akitio was acquired by OWC, a popular Mac peripheral maker and seller that has provided third-party RAM, docks, drives and more for decades. The Node Titan is their high-end eGPU enclosure.

The case for the Node Titan is a bit smaller than that of the Razer Core X, and is finished in a space gray-like color that will match Apple’s Mac notebooks more closely. The trade-off for the smaller size is that it only supports two-slot graphics cards, but it also features an integrated pop-out handle that, combined with its lighter, more compact design, makes it much more convenient to carry from place to place.

Akitio’s Node Titan packs in a 650W power supply, which is good for high-consumption graphics cards, but another compromise versus the Core X Chroma is that the Titan supplies only 85W output to any connected laptop. That’s under the 96W required for full-speed charging on the latest 16-inch MacBook Pro, though it’s still enough to keep your notebook powered up and provide full-speed charging to the rest of Apple’s Mac notebook lineup.

The Node Titan also provides only one port on the enclosure itself – a Thunderbolt output for connecting to your computer. Graphics cards you use with it will offer their own display connections, however, for attaching external displays.

In terms of performance, the Akitio Node Titan offers the same potential gains with the AMD Radeon RX 5700 XT for your Mac (and both AMD and Nvidia cards for PCs) when connected, since the GPU specs are what matter most when working with an enclosure. It operates a little more noisily, producing a quiet but still detectable constant hum even when the GPU is not being taxed.

The Node Titan is still an excellent choice, however, and potentially a better one for those looking for more portability and a bit more affordability at the expense of max notebook power output and a host of handy port expansions.

Bottom line

Back when more Macs had the option for user-expandable RAM, that was a great way to squeeze a little more life out of existing machines and make a slowing machine feel much faster. Now, only a few Macs in Apple’s lineup make it easy or even possible to upgrade your memory. Adding an eGPU can have a similar effect, especially if you spend a lot of time in creative editing apps, including Adobe’s suite, Apple’s Pro apps or various other third-party apps like DaVinci Resolve.

The total price of an eGPU setup, including the card, can approach or even match the price of a new Mac, but even less expensive cards offer significant benefit, and you can always swap the card out later depending on your needs. It is important to note that the future of eGPU support on Apple Silicon Macs isn’t certain, even though Apple has said it will support Thunderbolt. Still, an eGPU can stave off the need for an upgrade for years, making it easier to wait and watch to see what the transition really means for Mac users.

Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.

The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by over 20x compared to its predecessors (though you’ll mostly see 6x or 7x improvements in benchmarks), and it tops out at about 19.5 TFLOPS of single-precision performance and 156 TFLOPS for Tensor Float 32 workloads.

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.
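Those totals are easy to reconcile with the per-GPU numbers. Assuming the 40GB of HBM2 per A100 that Nvidia lists for the launch part (Google’s announcement only gives the aggregate), the memory figure checks out, and the TF32 figure quoted earlier is exactly 8x the FP32 peak:

```python
# Sanity-check the A2 instance totals against per-GPU A100 figures.
# Assumption: 40 GB of HBM2 per A100 (Nvidia's launch spec; the
# announcement above only states the 640GB aggregate).
gpus = 16
memory_per_gpu_gb = 40
print(gpus * memory_per_gpu_gb)  # 640 GB of GPU memory, matching the total above

# The TF32 tensor figure quoted earlier is 8x the FP32 peak:
print(156 / 19.5)  # 8.0
```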