
The voices on Amazon’s Alexa, Google Assistant and other AI assistants are far ahead of old-school GPS devices, but they still lack the rhythms, intonation and other qualities that make speech sound, well, human. NVIDIA has unveiled new research and tools that can capture those natural speech qualities by letting you train the AI system with your own voice, the company announced at the Interspeech 2021 conference.

To improve its AI voice synthesis, NVIDIA’s text-to-speech research team developed a model called RAD-TTS, a winning entry at an NAB broadcast convention competition to develop the most realistic avatar. The system allows an individual to train a text-to-speech model with their own voice, including the pacing, tonality, timbre and more.

Another RAD-TTS feature is voice conversion, which lets a user deliver one speaker’s words using another person’s voice. That interface gives fine, frame-level control over a synthesized voice’s pitch, duration and energy.

Using this technology, NVIDIA’s researchers created more conversational-sounding narration for the company’s I Am AI video series using synthesized rather than human voices. The aim was to get the narration to match the tone and style of the videos, something that hasn’t been done well in many AI-narrated videos to date. The results are still a bit robotic, but better than any AI narration I’ve ever heard.

“With this interface, our video producer could record himself reading the video script, and then use the AI model to convert his speech into the female narrator’s voice. Using this baseline narration, the producer could then direct the AI like a voice actor — tweaking the synthesized speech to emphasize specific words, and modifying the pacing of the narration to better express the video’s tone,” NVIDIA wrote.

NVIDIA is distributing some of this research — optimized to run efficiently on NVIDIA GPUs, of course — to anyone who wants to try it via open source through the NVIDIA NeMo Python toolkit for GPU-accelerated conversational AI, available on the company’s NGC hub of containers and other software.

“Several of the models are trained with tens of thousands of hours of audio data on NVIDIA DGX systems. Developers can fine tune any model for their use cases, speeding up training using mixed-precision computing on NVIDIA Tensor Core GPUs,” the company wrote.
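If you want to poke at this yourself, the rough shape of text-to-speech inference with NeMo’s pretrained checkpoints looks something like the sketch below. It is a minimal example, assuming a recent nemo_toolkit[tts] install and the publicly listed tts_en_fastpitch and tts_hifigan models; exact class names and arguments can shift between NeMo releases.

```python
# Minimal NeMo text-to-speech sketch (assumes: pip install "nemo_toolkit[tts]" soundfile).
# Model names and calls follow NeMo's published TTS examples and may vary by version.
import soundfile as sf
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

# Pull pretrained checkpoints from NGC: a spectrogram generator and a vocoder.
spec_gen = FastPitchModel.from_pretrained("tts_en_fastpitch")
vocoder = HifiGanModel.from_pretrained("tts_hifigan")

text = "NVIDIA research aims to make synthesized narration sound more human."
tokens = spec_gen.parse(text)                                   # text -> token tensor
spectrogram = spec_gen.generate_spectrogram(tokens=tokens)      # tokens -> mel spectrogram
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)  # mel -> waveform

# The bundled FastPitch/HiFi-GAN checkpoints are trained at 22.05 kHz.
sf.write("narration.wav", audio.detach().cpu().numpy()[0], samplerate=22050)
```

Fine-tuning on your own voice, as described above, follows the same pattern but starts from these checkpoints and continues training on your own recordings.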

Editor’s note: This post originally appeared on Engadget.

The UK’s competition watchdog has raised serious concerns about Nvidia’s proposed takeover of chip designer Arm.

Its assessment was published today by the government, which will now need to decide whether to ask the Competition and Markets Authority (CMA) to carry out an in-depth probe into the proposed acquisition.

In the executive summary of the CMA’s report for the government, the watchdog sets out concerns that, if the deal were to go ahead, the merged business would have the ability and incentive to harm the competitiveness of Nvidia’s rivals by restricting access to Arm’s IP, which is used by companies that produce semiconductor chips and related products in competition with Nvidia.

The CMA is worried that the loss of competition could stifle innovation across a number of markets — including data centres, gaming, the ‘internet of things’, and self-driving cars, with the resulting risk of more expensive or lower quality products for businesses and consumers.

A behavioral remedy offered by Nvidia was rejected by the CMA — which has recommended moving to an in-depth ‘Phase 2’ investigation of the proposed merger on competition grounds. 

Commenting in a statement, CMA chief executive Andrea Coscelli said: “We’re concerned that Nvidia controlling Arm could create real problems for Nvidia’s rivals by limiting their access to key technologies, and ultimately stifling innovation across a number of important and growing markets. This could end up with consumers missing out on new products, or prices going up.

“The chip technology industry is worth billions and is vital to products that businesses and consumers rely on every day. This includes the critical data processing and datacentre technology that supports digital businesses across the economy, and the future development of artificial intelligence technologies that will be important to growth industries like robotics and self-driving cars.”

Nvidia has been contacted for comment.

In a statement on its website, the Department for Digital, Culture, Media and Sport said the UK’s digital secretary is now “considering the relevant information contained in the full report” and will make a decision on whether to ask the CMA to conduct a ‘Phase Two’ investigation “in due course”.

“There is no set period in which this decision must be made, but it must take into account the need to make a decision as soon as reasonably practicable to reduce uncertainty,” it added. 

The proposed merger has faced considerable domestic opposition, with opponents, including one of Arm’s co-founders, calling for it to be blocked.

Microsoft will soon launch a dedicated device for game streaming, the company announced today. It’s also working with a number of TV manufacturers to build the Xbox experience right into their internet-connected screens, and Microsoft plans to bring cloud gaming to the Xbox app on PC later this year, too, with a focus on play-before-you-buy scenarios.

It’s unclear what these new game streaming devices will look like. Microsoft didn’t provide any further details. But chances are, we’re talking about either a Chromecast-like streaming stick or a small Apple TV-like box. So far, we also don’t know which TV manufacturers it will partner with.

It’s no secret that Microsoft is bullish about cloud gaming. With Xbox Game Pass Ultimate, it’s already making it possible for its subscribers to play more than 100 console games on Android, streamed from the Azure cloud, for example. In a few weeks, it’ll open cloud gaming in the browser on Edge, Chrome and Safari, to all Xbox Game Pass Ultimate subscribers (it’s currently in limited beta). And it is bringing Game Pass Ultimate to Australia, Brazil, Mexico and Japan later this year, too.

In many ways, Microsoft is unbundling gaming from the hardware — similar to what Google is trying with Stadia (an effort that, so far, has fallen flat for Google) and Amazon with Luna. The major advantage Microsoft has here is a large library of popular games, something that’s mostly missing on competing services, with the exception of Nvidia’s GeForce Now platform — though that one has a different business model since its focus is not on a subscription but on allowing you to play the games you buy in third-party stores like Steam or the Epic store.

What Microsoft clearly wants to do is expand the overall Xbox ecosystem, even if that means it sells fewer dedicated high-powered consoles. The company likens this to the music industry’s transition to cloud-powered services backed by all-you-can-eat subscription models.

“We believe that games, that interactive entertainment, aren’t really about hardware and software. It’s not about pixels. It’s about people. Games bring people together,” said Microsoft’s Xbox head Phil Spencer. “Games build bridges and forge bonds, generating mutual empathy among people all over the world. Joy and community: that’s why we’re here.”

It’s worth noting that Microsoft says it’s not doing away with dedicated hardware, though, and is already working on the next generation of its console hardware — but don’t expect a new Xbox console anytime soon.

Arm today announced the launch of two new platforms, Arm Neoverse V1 and Neoverse N2, as well as a new mesh interconnect for them. As you can tell from the name, V1 is a completely new product and maybe the best example yet of Arm’s ambitions in the data center, high-performance computing and machine learning space. N2 is Arm’s next-generation general compute platform that is meant to span use cases from hyperscale clouds to SmartNICs and running edge workloads. It’s also the first design based on the company’s new Armv9 architecture.

Not too long ago, high-performance computing was dominated by a small number of players, but the Arm ecosystem has scored its fair share of wins here recently, with supercomputers in South Korea, India and France betting on it. The promise of V1 is that it will vastly outperform the older N1 platform, with a 2x gain in floating-point performance, for example, and a 4x gain in machine learning performance.

Image Credits: Arm

“The V1 is about how much performance can we bring — and that was the goal,” Chris Bergey, SVP and GM of Arm’s Infrastructure Line of Business, told me. He also noted that the V1 is Arm’s widest architecture yet and that, while V1 wasn’t specifically built for the HPC market, it was definitely a target market. And while the current Neoverse V1 platform isn’t based on the new Armv9 architecture yet, the next generation will be.

N2, on the other hand, is all about getting the most performance per watt, Bergey stressed. “This is really about staying in that same performance-per-watt-type envelope that we have within N1 but bringing more performance,” he said. In Arm’s testing, NGINX saw a 1.3x performance increase versus the previous generation, for example.

Image Credits: Arm

In many ways, today’s release is also a chance for Arm to highlight its recent customer wins. AWS Graviton2 is obviously doing quite well, but Oracle is also betting on Ampere’s Arm-based Altra CPUs for its cloud infrastructure.

“We believe Arm is going to be everywhere — from edge to the cloud. We are seeing N1-based processors deliver consistent performance, scalability and security that customers want from Cloud infrastructure,” said Bev Crair, senior VP, Oracle Cloud Infrastructure Compute. “Partnering with Ampere Computing and leading ISVs, Oracle is making Arm server-side development a first-class, easy and cost-effective solution.”

Meanwhile, Alibaba Cloud and Tencent are both investing in Arm-based hardware for their cloud services as well, while Marvell will use the Neoverse N2 architecture for its OCTEON networking solutions.

The UK government has intervened to trigger public interest scrutiny of chipmaker Nvidia’s plan to buy Arm Holdings.

The UK’s digital secretary, Oliver Dowden, said today that the government wants to ensure that any national security implications of the semiconductor deal are explored.

Nvidia’s $40BN acquisition of UK-based Arm was announced last September but remains to be cleared by regulators.

The UK’s Competition and Markets Authority (CMA) began to solicit views on the proposed deal in January.

Domestic opposition to Nvidia’s plan has been swift, with one of the original Arm co-founders kicking off a campaign to ‘save Arm’ last year. Hermann Hauser warned that Arm’s acquisition by a U.S. entity would end its position as a company independent of U.S. interests — risking the U.K.’s economic sovereignty by surrendering its most powerful trade weapon.

The intervention by the Department for Digital, Culture, Media and Sport (DCMS) — using statutory powers set out in the Enterprise Act 2002 — means the competition regulator has been instructed to begin a phase 1 investigation.

The CMA has a deadline of July 30 to submit its report to the secretary of state.

Commenting in a statement, Dowden said: “Following careful consideration of the proposed takeover of ARM, I have today issued an intervention notice on national security grounds. As a next step and to help me gather the relevant information, the UK’s independent competition authority will now prepare a report on the implications of the transaction, which will help inform any further decisions.”

“We want to support our thriving UK tech industry and welcome foreign investment but it is appropriate that we properly consider the national security implications of a transaction like this,” he added.

At the completion of the CMA’s phase 1 investigation, Dowden will have the option to clear the deal (if no national security or competition concerns have been identified) or to clear it with remedies that address any identified concerns.

He could also refer the transaction for further scrutiny by instructing the CMA to carry out an in-depth phase 2 investigation.

After the phase 1 report has been submitted there is no set period within which the secretary of state must make a decision on next steps — but DCMS notes that a decision should be made as soon as “reasonably practicable” to reduce uncertainty.

While Dowden’s intervention has been made on national security grounds, additional concerns have been raised about the impact of an Nvidia takeover of Arm — specifically on U.K. jobs and on Arm’s open licensing model.

Nvidia sought to address those concerns last year, claiming it’s committed to Arm’s licensing model and pledging to expand the Cambridge, UK offices of Arm — saying it would create “a new global center of excellence in AI research” at the UK campus.

However it’s hard to see what commercial concessions could be offered to assuage concern over the ramifications of an Nvidia-owned Arm on the UK’s economic sovereignty. That’s because it’s a political risk, which would require a political solution to allay, such as at a treaty level — something which isn’t in Nvidia’s gift (alone) to give.

National security concerns are a rising operational risk for tech companies involved in the supply of cutting edge infrastructure, such as semiconductor design and next-gen networks — where a relative paucity of competitors not only limits market choice but amps up the political calculations.

Proposed mergers are one key flash point as market consolidation takes on an acute politico-economic dimension.

However tech companies’ operations are being more widely squeezed in the name of national security — such as, in recent years, the U.S. government’s attacks on China-based 5G infrastructure suppliers like Huawei, with former president Trump seeking to have the company barred from supplying next-gen networks not only within the U.S. but to national networks of Western allies.

Nor has (geo)political pressure been applied purely over key infrastructure companies in recent years; with Trump claiming a national security justification to try and shake down the Chinese-owned social networking company, TikTok — in another example that speaks to how tech tools are being coopted into wider geopolitical power-plays, fuelled by countries’ economic and political self-interest.

Arm today announced Armv9, the next generation of its chip architecture. Its predecessor, Armv8, launched a decade ago, and while it has seen its fair share of changes and updates, the new architecture brings a number of major updates to the platform that warrant a shift in version number. Unsurprisingly, Armv9 builds on v8 and is backward compatible, but it specifically introduces new security, AI, signal processing and performance features.

Over the last five years, more than 100 billion Arm-based chips have shipped. But Arm believes that its partners will ship over 300 billion in the next decade. We will see the first Armv9-based chips in devices later this year.

Ian Smythe, Arm’s VP of Marketing for its client business, told me that he believes this new architecture will change the way we do computing over the next decade. “We’re going to deliver more performance, we will improve the security capabilities […] and we will enhance the workload capabilities because of the shift that we see in compute that’s taking place,” he said. “The reason that we’ve taken these steps is to look at how we provide the best experience out there for handling the explosion of data and the need to process it and the need to move it and the need to protect it.”

That neatly sums up the core philosophy behind these updates. On the security side, Armv9 will introduce Arm’s Confidential Compute Architecture and the concept of Realms. These Realms enable developers to write applications where the data is shielded from the operating system and other apps on the device. Using Realms, a business application could shield sensitive data and code from the rest of the device, for example.

Image Credits: Arm

“What we’re doing with the Arm Confidential Compute Architecture is worrying about the fact that all of our computing is running on the computing infrastructure of operating systems and hypervisors,” Richard Grisenthwaite, the chief architect at Arm, told me. “That code is quite complex and therefore could be penetrated if things go wrong. And it’s in an incredibly trusted position, so we’re moving some of the workloads so that [they are] running on a vastly smaller piece of code. Only the Realm manager is the thing that’s actually capable of seeing your data while it’s in action. And that would be on the order of about a 10th of the size of a normal hypervisor and much smaller still than an operating system.”

As Grisenthwaite noted, it took Arm a few years to work out the details of this security architecture and ensure that it is robust enough — and during that time Spectre and Meltdown appeared, too, and set back some of Arm’s initial work because some of the solutions it was working on would’ve been vulnerable to similar attacks.

Image Credits: Arm

Unsurprisingly, another area the team focused on was enhancing the CPU’s AI capabilities. AI workloads are now ubiquitous. Arm had already introduced its Scalable Vector Extension (SVE) a few years ago, but at the time, this was meant for high-performance computing solutions like the Arm-powered Fugaku supercomputer.

Now, Arm is introducing SVE2 to enable more AI and digital signal processing (DSP) capabilities. Those can be used for image processing workloads, as well as other IoT and smart home solutions, for example. There are, of course, dedicated AI chips on the market now, but Arm believes that the entire computing stack needs to be optimized for these workloads and that there are a lot of use cases where the CPU is the right choice for them, especially for smaller workloads.
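To make the “scalable vector” idea a bit more concrete, here is a conceptual sketch in Python/NumPy (not actual SVE2 intrinsics) of the vector-length-agnostic, predicated loop style that SVE and SVE2 are built around: the same loop works regardless of the hardware’s vector width, with a predicate mask switching off lanes past the end of the data.

```python
import numpy as np

def predicated_multiply_accumulate(a, b, acc, vector_lanes):
    """Conceptual model of an SVE-style loop: the code never hard-codes the
    vector width; a predicate (boolean mask) deactivates out-of-bounds lanes,
    so the same loop works for 128-, 256- or 512-bit hardware vectors."""
    n = len(a)
    for start in range(0, n, vector_lanes):
        lanes = np.arange(start, start + vector_lanes)
        predicate = lanes < n          # whilelt-style predicate covering the tail
        idx = lanes[predicate]
        acc[idx] += a[idx] * b[idx]    # multiply-accumulate on the active lanes only
    return acc

a = np.random.rand(1000).astype(np.float32)
b = np.random.rand(1000).astype(np.float32)
# The result is identical whether the "hardware" exposes 4, 8 or 16 lanes.
assert np.allclose(predicated_multiply_accumulate(a, b, np.zeros_like(a), 4),
                   predicated_multiply_accumulate(a, b, np.zeros_like(a), 16))
```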

“We regard machine learning as appearing in just about everything. It’s going to be done in GPUs, it’s going to be done in dedicated processors, neural processors, and also done in our CPUs. And it’s really important that we make all of these different components better at doing machine learning,” Grisenthwaite said.

As for raw performance, Arm believes its new architecture will allow chip manufacturers to gain more than 30% in compute power over the next two chip generations, both for mobile CPUs and for the kind of infrastructure CPUs that large cloud vendors like AWS now offer their users.

“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow’s mobile communications devices,” said Min Goo Kim, the executive vice president of SoC development at Samsung Electronics. “As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors.”

Nvidia’s cloud gaming service GeForce Now is changing its subscription plans. Starting today, paid memberships, now called ‘Priority’ memberships, cost $9.99 per month or $99.99 per year.

If you’re an existing ‘Founders’ member, you’ll keep the same subscription price as long as you remain a subscriber. If you stop your subscription at any point, you won’t be able to pay $5 per month again.

Last year, when Nvidia originally introduced paid plans for GeForce Now, the company was pretty transparent with its user base. You could pay $4.99 per month to access the Founders edition, but the company was going to raise the subscription fee at some point. And it sounds like Nvidia has made up its mind and thinks the paid subscription is worth $9.99 per month.
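For a quick back-of-the-envelope comparison of the tiers mentioned above (prices as stated in this article, taxes and promotions ignored):

```python
# Rough comparison of the GeForce Now pricing tiers mentioned in this article.
founders_monthly = 4.99   # grandfathered legacy price
priority_monthly = 9.99   # new monthly Priority price
priority_yearly = 99.99   # new annual Priority price

print(f"Priority, paid monthly: ${priority_monthly * 12:.2f}/year")
print(f"Priority, paid yearly:  ${priority_yearly:.2f}/year "
      f"({1 - priority_yearly / (priority_monthly * 12):.0%} cheaper)")
print(f"Founders (legacy):      ${founders_monthly * 12:.2f}/year")
```

In other words, the annual Priority plan works out to roughly 17% less than paying month to month, and grandfathered Founders members still pay about half the new rate.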

If you’re not familiar with GeForce Now, it lets you start a game on a powerful gaming PC in a data center near you. You get a video stream on your computer, mobile phone, tablet or TV of the game running in a data center — GeForce Now uses a web app on iOS and iPadOS and is available on a limited number of Android TV devices. When you press a button on your controller, the action is relayed to the server so that you can interact with the game. All of this happens in tens of milliseconds, making it one of the smoothest cloud gaming experiences available right now.
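For a sense of where those “tens of milliseconds” go, here is a hedged back-of-the-envelope budget; every number below is an illustrative assumption, not a figure from Nvidia.

```python
# Illustrative round-trip latency budget for cloud gaming (all values are assumptions).
budget_ms = {
    "controller input -> server (network uplink)": 15,
    "game simulation + render on the server GPU": 16,   # roughly one 60 fps frame
    "video encode on the server": 5,
    "server -> client video (network downlink)": 15,
    "video decode + display on the client": 10,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<45} {ms:>3} ms")
print(f"{'total (button press to pixels)':<45} {total:>3} ms")  # ~61 ms with these guesses
```

Shave a few milliseconds off each stage (closer data centers, faster codecs, higher server frame rates) and you land comfortably in the range the service is targeting.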

Compared to Google Stadia and Amazon Luna, Nvidia isn’t starting its own game store. GeForce Now customers launch games that they already own. The platform supports Steam, Epic Games, GOG.com and Ubisoft’s launcher.

Game publishers have to opt in to GeForce Now, which means that you can’t launch every game you own in your Steam library. Right now, GeForce Now supports around 800 games that you can find on this page.

If you want to try GeForce Now, you can start playing for free. Nvidia offers a free membership that should be considered a free trial. First, you have to wait in a queue until a free server is available — it can take five, ten or fifteen minutes.

After that, you’re limited to one-hour sessions. When you’ve played for an hour, you’re kicked out of the server. You can still start the game again, but you’ll have to go through the queue one more time.

If you become a paid member, games start nearly instantly and you can play up to six hours at a time. Similarly, you can start the game instantly after your six hours are up. Paid members also get RTX-enabled graphics.

When it comes to specifications, Nvidia has several configurations with different CPUs, graphics cards and RAM. If you play Fortnite, you might not get the best rig, as you can get very high graphics settings on a mid-range PC. But if you launch Cyberpunk 2077, the service tries to prioritize better rigs.

Nvidia says it has attracted nearly 10 million users for its cloud gaming service. It’s unclear how many of them are paying for a subscription.

The company doubled the number of data centers in the last year. There are now more than 20 data centers operated by Nvidia or local partners. The company plans to expand capacity in existing data centers and add new data centers in Phoenix, Montreal and Australia.

There will be some quality-of-life updates as well, such as the ability to link games with your account to make it easier to launch them and more aggressive preloading of games.

Image Credits: Nvidia

NeuReality, an Israeli AI hardware startup that is working on a novel approach to improving AI inferencing platforms by doing away with the current CPU-centric model, is coming out of stealth today and announcing an $8 million seed round. The group of investors includes Cardumen Capital, crowdfunding platform OurCrowd and Varana Capital. The company also today announced that Naveen Rao, the GM of Intel’s AI Products Group and former CEO of Nervana Systems (which Intel acquired), is joining the company’s board of directors.

The founding team, CEO Moshe Tanach, VP of operations Tzvika Shmueli and VP for very large-scale integration Yossi Kasus, has a background in AI but also networking, with Tanach spending time at Marvell and Intel, for example, Shmueli at Mellanox and Habana Labs and Kasus at Mellanox, too.

It’s the team’s networking and storage knowledge and seeing how that industry built its hardware that now informs how NeuReality is thinking about building its own AI platform. In an interview ahead of today’s announcement, Tanach wasn’t quite ready to delve into the details of NeuReality’s architecture, but the general idea here is to build a platform that will allow hyperscale clouds and other data center owners to offload their ML models to a far more performant architecture where the CPU doesn’t become a bottleneck.

“We kind of combined a lot of techniques that we brought from the storage and networking world,” Tanach explained. “Think about a traffic manager and what it does for Ethernet packets. And we applied it to AI. So we created a bottom-up approach that is built around the engine that you need. Where today, they’re using neural net processors — we have the next evolution of AI compute engines.”

As Tanach noted, the result of this should be a system that — in real-world use cases that include not just synthetic benchmarks of the accelerator but also the rest of the overall architecture — offers 15 times the performance per dollar for basic deep learning offloading, and far more once you offload the entire pipeline to its platform.

NeuReality is still in its early days, and while the team has working prototypes now, based on a Xilinx FPGA, it expects to be able to offer its fully custom hardware solution early next year. As its customers, NeuReality is targeting the large cloud providers, but also data center and software solutions providers like WWT to help them provide specific vertical solutions for problems like fraud detection, as well as OEMs and ODMs.

Tanach tells me that the team’s work with Xilinx created the groundwork for its custom chip — though building that (likely on an advanced node) will cost money, so he’s already thinking about raising the next round of funding for that.

“We are already consuming huge amounts of AI in our day-to-day life and it will continue to grow exponentially over the next five years,” said Tanach. “In order to make AI accessible to every organization, we must build affordable infrastructure that will allow innovators to deploy AI-based applications that cure diseases, improve public safety and enhance education. NeuReality’s technology will support that growth while making the world smarter, cleaner and safer for everyone. The cost of the AI infrastructure and AIaaS will no longer be limiting factors.”

The NeuReality team.

Image Credits: NeuReality

The United States has, over the past few decades, been extremely lenient on antitrust enforcement, rarely blocking deals, even with overseas competitors. Yet, there have been inklings that things are changing. Yesterday, we learned that Visa and Plaid called off their combination after the Department of Justice sued to block it in early November. We also learned a week ago that shaving startup Billie would end its proposed acquisition by consumer product goods giant P&G after the Federal Trade Commission sued to block it in December.

Many, many, many other deals of course get through the regulatory gauntlet, but even a few smoke signals are enough to start raising concerns. And that new calculus doesn’t even account for the morass of antitrust reforms being proposed in Washington, DC these days, nearly all of which — on a bipartisan basis — would create stricter controls, particularly in critical technology industries and information services.

So, what’s the valuation prognosis for startups these days given that one of the most important exit options available is increasingly looking fraught?

The UK’s competition and markets regulator is seeking views on Nvidia’s takeover of Arm Holdings as it prepares to kick off formal oversight of potential competition impacts of the deal.

The US-based chipmaker’s $40BN purchase of the UK-based chip designer, announced last September, has triggered a range of domestic concerns — over the impact on UK jobs, industrial strategy/economic sovereignty and even national security — although the Competition and Markets Authority (CMA)’s probe will focus solely on possible competition-related impacts.

It said today that the probe is likely to consider whether, post-acquisition, Arm would have an incentive to “withdraw, raise prices or reduce the quality of its IP licensing services to Nvidia’s rivals”, per a press release.

The CMA is inviting interested third parties to comment on the acquisition before January 27 — ahead of the launch of its formal probe. That phase 1 investigation will include additional opportunities for external comment, according to the regulator, which has not yet provided a date for when it will take a decision on the acquisition.

Further details can be found on the CMA’s case page.

Commenting in a statement, Andrea Coscelli, the CMA’s chief executive, said: “The chip technology industry is worth billions and critical to many of the products that we use most in our everyday lives. We will work closely with other competition authorities around the world to carefully consider the impact of the deal and ensure that it doesn’t ultimately result in consumers facing more expensive or lower quality products.”

Among those sounding the alarm about the impact on the UK of an Nvidia-Arm takeover is one of the company’s original co-founders, Hermann Hauser.

In September he wrote to the prime minister saying he’s “extremely concerned” about the impact on UK jobs, Arm’s business model and the future of the country’s economic sovereignty.

A website Hauser set up to gather signatures of objection — called savearm.co.uk — states that more than 2,000 signatures had been collected as of October 12.

As well as the CMA, a number of other international regulators will be scrutinizing the deal, with Nvidia saying in September that it expected the clearance process to take 1.5 years.

It has sought to preempt UK concerns, saying it will double down on the country as a core part of its engineering efforts by expanding Arm’s offices in Cambridge — where it said it would establish “a new global center of excellence in AI research”.

On wider national security concerns that are being attached to the Nvidia-Arm deal from some quarters, the CMA noted that the UK government could choose to issue a public interest intervention notice “if appropriate”.

Arm was earlier bought by Japan’s SoftBank for around $31BN back in 2016.

SoftBank’s subsequent deal to offload the chip designer to Nvidia is a mixture of cash and stock — and included an immediate $2BN cash payment to SoftBank. But the majority of the transaction’s value is due to be paid in Nvidia stock at the close of the deal, pending regulatory clearances.

Apple is reportedly developing a number of Apple Silicon chip variants with significantly higher core counts relative to the M1 chips that it uses in today’s MacBook Air, MacBook Pro and Mac mini computers based on its own ARM processor designs. According to Bloomberg, the new chips include designs that have 16 power cores and four high-efficiency cores, intended for future iMacs and more powerful MacBook Pro models, as well as a top-end version with 32 performance cores that would eventually power the first Apple Silicon Mac Pro.

The current M1 chip has four performance cores, along with four high-efficiency cores. It also uses either seven or eight dedicated graphics cores, depending on the Mac model. Apple’s next-gen chips could leap right to 16 performance cores, or, Bloomberg says, Apple could opt to use eight- or 12-core versions of the same design, depending primarily on what kinds of yields it sees from manufacturing. Chipmaking, particularly in the early stages of new designs, often has error rates that render a number of the cores on each new chip unusable, so manufacturers often just ‘bin’ those chips, offering them to the market as lower max core count designs until manufacturing success rates improve.
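Here is a minimal sketch of why binning helps, assuming (purely for illustration) that each core on a 16-core die fails independently with a 5% defect probability; the binomial math shows how selling salvaged 12- and eight-core parts lifts the share of usable dies.

```python
from math import comb

def yield_at_least(total_cores, good_cores_needed, per_core_defect_rate):
    """Probability that at least `good_cores_needed` of `total_cores` cores work,
    assuming independent per-core defects (an illustrative model, not real fab data)."""
    p_good = 1.0 - per_core_defect_rate
    return sum(
        comb(total_cores, k) * p_good**k * per_core_defect_rate**(total_cores - k)
        for k in range(good_cores_needed, total_cores + 1)
    )

defect_rate = 0.05  # assumed 5% chance that any single core is unusable
print(f"Dies sellable as 16-core parts: {yield_at_least(16, 16, defect_rate):.1%}")
print(f"Dies sellable as 12-core parts: {yield_at_least(16, 12, defect_rate):.1%}")
print(f"Dies sellable as  8-core parts: {yield_at_least(16,  8, defect_rate):.1%}")
```

With those assumptions, fewer than half of the dies would qualify as full 16-core parts, but nearly all of them could ship as 12-core parts, which is exactly the trade-off binning exploits.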

Apple’s M1 system on a chip.

Regardless of whether next-gen Apple Silicon Macs use 16-, 12- or eight-performance-core designs, they should provide ample competition for their Intel equivalents. Apple’s debut M1 line has won the praise of critics and reviewers for significant performance benefits over not only their predecessors, but also much more expensive and powerful Macs powered by higher-end Intel chips.

The report also says that Apple is developing new graphics processors that include both 16- and 32-core designs for future iMacs and pro notebooks, and that it even has 64- and 128-core designs in development for use in high-end pro machines like the Mac Pro. These should offer performance that can rival even dedicated GPU designs from Nvidia and AMD for some applications, though they aren’t likely to appear in any shipping machines before either late 2021 or 2022 according to the report.

Apple has said from the start that it plans to transition its entire line to its own Apple Silicon processors by 2022. The M1 Macs now available are the first generation, and Apple has begun with its lowest-power dedicated Macs, with a chip design that hews closely to the design of the top-end A-series chips that power its iPhone and iPad line. Next-generation M-series chips look like they’ll be further differentiated from Apple’s mobile processors, with significant performance advantages to handle the needs of demanding professional workloads.

Google and Nvidia both had some news about their respective cloud gaming services today. Let’s start with Nvidia. GeForce Now is now available on the iPhone and the iPad as a web app. The company says it’s a beta for now, but you can start using it by heading over to play.geforcenow.com on your iOS device.

GeForce Now is a cloud gaming service that works with your own game library. You can connect to your Steam, Epic or Battle.net account and play games you’ve already purchased on those third-party platforms. GeForce Now is also available on macOS, Android and Windows.

Game publishers have to opt in to appear on GeForce Now, which means that you won’t find your entire Steam library on the service. Still, the list is already quite long.

Right now, it costs $5 per month to access the Founders edition, which lets you play whenever you want and for as long as you want. It’s an introductory price, which means that Nvidia could raise prices in the future.

You can also try the service with a free account. You’re limited to one-hour sessions and less powerful hardware. There are also fewer available slots. For instance, you have to wait 11 minutes to launch a game with a free account right now.

Once you add the web app to your iOS home screen, you can launch the service in full screen without the interface of Safari. You can connect a Bluetooth controller. Unfortunately, you can’t use a keyboard and a mouse.

The company says it is actively working with Epic Games on a touch-friendly version of Fortnite so that iOS players can play the game again. It could definitely boost usage on the service.

As for Google, the company issued an update 12 months after the launch of Stadia. Unlike GeForce Now, Stadia works more like a console. You have to buy games for the platform specifically. There are a hundred games on the platform, including some that you get with an optional Stadia Pro subscription.

The company says that iOS testing should start in the coming weeks. “This will be the first phase of our iOS progressive Web application. As we test performance and add more features, your feedback will help us improve the Stadia experience for everyone. You can expect this feature to begin rolling out several weeks from now,” the company wrote.