
Minecraft is getting a free update that brings much-improved lighting and color to the game’s blocky graphics using real-time ray tracing running on Nvidia GeForce RTX graphics hardware. The new look is a dramatic change in the atmospherics of the game, and manages to be eerily realistic while retaining Minecraft’s pixelated charm.

The ray tracing tech will be available via a free update to the game on Windows 10 PCs, but it’ll only be accessible to players using an Nvidia GeForce RTX GPU, since that’s the only graphics hardware on the market that currently supports playing games with real-time ray tracing active.

It sounds like it’ll be an excellent addition to the experience for players who are equipped with the right hardware, however – including lighting effects not only from the sun, but also from in-game materials like glowstone and lava; both hard and soft shadows depending on the transparency of the material and the angle of light refraction; and accurate reflections in surfaces that are supposed to be reflective (gold blocks, for instance).

This is welcome news after Minecraft developer Mojang announced last week that it had cancelled plans to release its Super Duper Graphics Pack, which was going to add a bunch of improved visuals to the game, because it wouldn’t work well across platforms. At the time, Mojang said it would be sharing news about graphics optimization for some platforms “very soon,” and it looks like this is what it had in mind.

Nvidia meanwhile is showing off a range of 2019 games with real-time ray tracing enabled at Gamescom 2019 in Cologne, Germany, including Dying Light 2, Cyberpunk 2077, Call of Duty: Modern Warfare and Watch Dogs: Legion.


Nvidia’s GPU-powered platform for developing and running conversational AI that understands and responds to natural language requests has achieved some key milestones and broken some records that have big implications for anyone building on its tech – which includes companies large and small, since much of the code used to achieve these advancements is open source, written in PyTorch and easy to run.

The biggest achievements Nvidia announced today include breaking the hour mark in training BERT, one of the world’s most advanced AI language models and widely considered a state-of-the-art benchmark for natural language processing. Nvidia’s AI platform was able to train the model in a record-breaking 53 minutes, and the trained model could then successfully run inference (i.e., actually apply the capability learned through training to produce results) in under 2 milliseconds (10 milliseconds is considered a high-water mark in the industry), another record.

Nvidia’s breakthroughs aren’t just cause for bragging rights – these advances scale and provide real-world benefits for anyone working with its NLP conversational AI and GPU hardware. Nvidia achieved its record-setting training times on one of its SuperPOD systems, which is made up of 92 Nvidia DGX-2H systems running 1,472 V100 GPUs, and managed the inference on Nvidia T4 GPUs running Nvidia TensorRT – which beat the performance of even highly optimized CPUs by many orders of magnitude. And it’s making the BERT training code and a TensorRT-optimized BERT sample available via GitHub for all to leverage.

Alongside these milestones, Nvidia’s research wing also built and trained the largest-ever language model based on ‘Transformers,’ the tech that underlies BERT, too. This custom model includes a massive 8.3 billion parameters, making it 24 times the size of BERT-Large, the largest current core BERT model. Nvidia has cheekily titled this model ‘Megatron,’ and has also offered up the PyTorch code it used to train it so that others can train their own similar, massive Transformer-based language models.

Sense and compute are the electronic eyes and ears that will be the ultimate power behind automating menial work and encouraging humans to cultivate their creativity. 

These new capabilities for machines will depend on the best and brightest talent and investors who are building and financing companies aiming to deliver the AI chips destined to be the neurons and synapses of robotic brains.

Like any other herculean task, this one is expected to come with big rewards.  And it will bring with it big promises, outrageous claims, and suspect results. Right now, it’s still the Wild West when it comes to measuring AI chips up against each other.

Remember laptop shopping before Apple made it easy? Cores, buses, gigabytes and GHz have given way to “Pro” and “Air.” Not so for AI chips.

Roboticists are struggling to make heads or tails of the claims made by AI chip companies. Every passing day without autonomous cars puts more lives at risk from human drivers. Factories want humans to be more productive while staying out of harm’s way. Amazon wants to get as close as possible to Star Trek’s replicator by getting products to consumers faster.

A key component of that is the AI chips that will power them.  A talented engineer making a bet on her career to build AI chips, an investor looking to underwrite the best AI chip company, and AV developers seeking the best AI chips, need objective measures to make important decisions that can have huge consequences. 

A metric that gets thrown around frequently is TOPS, or trillions of operations per second, to measure performance.  TOPS/W, or trillions of operations per second per Watt, is used to measure energy efficiency. These metrics are as ambiguous as they sound. 

What are the operations being performed on? What’s an operation? Under what circumstances are these operations being performed? How does the timing by which you schedule these operations impact the function you are trying to perform?  Is your chip equipped with the expensive memory it needs to maintain performance when running “real-world” models? Phrased differently, do these chips actually deliver these performance numbers in the intended application?


What’s an operation?

The core mathematical function performed in training and running neural networks is a convolution, which is simply a sum of multiplications. A multiplication itself is a bunch of summations (or accumulations), so are all the summations lumped together as one “operation,” or does each summation count as an operation? This little detail can result in a difference of 2x or more in a TOPS calculation. For the purpose of this discussion, we’ll count a complete multiply-and-accumulate (or MAC) as “two operations.”
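
To see how much that convention alone matters, here’s a minimal sketch; the MAC rate is a made-up figure rather than any particular chip’s spec:

```python
# Same hypothetical chip, described under two operation-counting conventions.
macs_per_second = 50e12  # made-up figure: 50 trillion MACs per second

tops_mac_as_one_op = macs_per_second / 1e12        # MAC counted as one operation
tops_mac_as_two_ops = 2 * macs_per_second / 1e12   # MAC counted as two (multiply + add)

print(f"MAC = one operation:  {tops_mac_as_one_op:.0f} TOPS")
print(f"MAC = two operations: {tops_mac_as_two_ops:.0f} TOPS")
# Same silicon, same workload -- a 2x difference in the headline number.
```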

What are the conditions?

Is this chip operating full-bore at close to a volt, or is it sipping electrons at half a volt? Will there be sophisticated cooling, or is it expected to bake in the sun? Running chips hot, and trickling electrons into them, slows them down. Conversely, operating at a modest temperature while being generous with power allows you to extract better performance out of a given design. Furthermore, does the energy measurement include loading up and preparing for an operation? As you will see below, overhead from “prep” can be as costly as performing the operation itself.

What’s the utilization?

Here is where it gets confusing. Just because a chip is rated at a certain number of TOPS, it doesn’t necessarily mean that when you give it a real-world problem, it can actually deliver the equivalent of the TOPS advertised. Why? It’s not just about TOPS. It has to do with fetching the weights, or the values against which operations are performed, out of memory and setting up the system to perform the calculation. This is a function of what the chip is being used for. Usually, this “setup” takes more time than the process itself. The workaround is simple: fetch the weights and set up the system for a bunch of calculations, then do a bunch of calculations. The problem with that is that you’re sitting around idle while everything is being fetched, and only then do you churn through the calculations.

Flex Logix (my firm Lux Capital is an investor) compares the Nvidia Tesla T4’s actual delivered TOPS performance with the 130 TOPS Nvidia advertises on its website. They use ResNet-50, a neural network commonly used in computer vision: it requires 3.5 billion MACs (equivalent to two operations each, per the above explanation of a MAC) for a modest 224×224 pixel image. That’s 7 billion operations per image. The Tesla T4 is rated at 3,920 images/second, so multiply that by the required 7 billion operations per image and you’re at 27,440 billion operations per second, or about 27 TOPS, well shy of the advertised 130 TOPS.
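
For anyone who wants to check the arithmetic, here’s a short script that reproduces the calculation from the figures quoted above; the utilization line is simply the ratio of delivered to advertised TOPS:

```python
# Effective TOPS for the Tesla T4 example, using the figures quoted above.
macs_per_image = 3.5e9              # ResNet-50 on a 224x224 image
ops_per_image = 2 * macs_per_image  # each MAC counted as two operations
images_per_second = 3920            # rated ResNet-50 throughput
advertised_tops = 130               # headline figure

delivered_tops = ops_per_image * images_per_second / 1e12
print(f"Delivered: {delivered_tops:.1f} TOPS")                 # ~27.4 TOPS
print(f"Utilization: {delivered_tops / advertised_tops:.0%}")  # ~21% of advertised
```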


Batching is a technique where data and weights are loaded into the processor for several computation cycles.  This allows you to make the most of compute capacity, BUT at the expense of added cycles to load up the weights and perform the computations.  Therefore if your hardware can do 100 TOPS, memory and throughput constraints can lead you to only getting a fraction of the nameplate TOPS performance.

Where did the TOPS go? Scheduling, also known as batching, of the setup and loading of the weights followed by the actual number crunching takes us down to a fraction of the speed the core can deliver. Some chipmakers overcome this problem by putting a bunch of fast, expensive SRAM on chip, rather than relying on slow but cheap off-chip DRAM. But chips with a ton of SRAM, like those from Graphcore and Cerebras, are big and expensive, and better suited to datacenters.
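
As a rough illustration of how that overhead eats into the nameplate figure, here’s a toy model, with entirely made-up timing numbers, that amortizes a fixed weight-loading cost over progressively larger batches:

```python
# Toy model: effective throughput vs. batch size when each batch pays a fixed
# setup cost for fetching weights. All numbers are illustrative.
nameplate_tops = 100.0    # what the datasheet claims
setup_time_s = 2e-3       # time to fetch weights and set up one batch
ops_per_sample = 7e9      # roughly one ResNet-50-sized image (see above)
compute_time_per_sample_s = ops_per_sample / (nameplate_tops * 1e12)

for batch in (1, 8, 64, 512):
    total_time_s = setup_time_s + batch * compute_time_per_sample_s
    effective_tops = batch * ops_per_sample / total_time_s / 1e12
    print(f"batch={batch:4d}  {effective_tops:6.1f} effective TOPS "
          f"({effective_tops / nameplate_tops:.0%} of nameplate)")
```

The bigger the batch, the closer you get to the nameplate number, at the cost of the added cycles described above.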

There are, however, interesting solutions that some chip companies are pursuing:

Compilers:

Traditional compilers translate instructions into machine code to run on a processor. With modern multi-core processors, multi-threading has become commonplace, but “scheduling” on a many-core processor is far simpler than the batching we describe above. Many AI chip companies are relying on generic compilers from Google and Facebook, which will result in many chip companies offering products that perform about the same in real-world conditions.

Chip companies that build proprietary, advanced compilers specific to their hardware, and offer powerful tools to developers for a variety of applications to make the most of their silicon and Watts will certainly have a distinct edge. Applications will range from driverless cars to factory inspection to manufacturing robotics to logistics automation to household robots to security cameras.  

New compute paradigms:

Simply jamming a bunch of memory close to a bunch of compute results in big chips that suck up a bunch of power. Digital design is one of tradeoffs, so how can you have your cake and eat it too? Get creative. Mythic (my firm Lux is an investor) is performing the multiplies and accumulates inside embedded flash memory using analog computation. This empowers them to get superior speed and energy performance on older technology nodes. Other companies are doing fancy analog and photonics to escape the grip of Moore’s Law.

Ultimately, if you’re doing conventional digital design, you’re limited by a single physical constraint: the speed at which charge travels through a transistor at a given process node. Everything else is optimization for a given application.  Want to be good at multiple applications? Think outside the VLSI box!

It’s the 50th anniversary of the 1969 Apollo 11 Moon landing, and Nvidia is using the occasion to show off the power of its current GPU technology, using the RTX real-time ray tracing that was the topic of the day at its recent GTC conference.

Nvidia employed its latest tech to make big improvements to the moon-landing demo it created five years ago and refined last year to demonstrate its Turing GPU architecture. The resulting simulation is a fully interactive graphic demo that models sunlight in real-time, providing a cinematic and realistic depiction of the Moon landing complete with accurate shadows, visor and metal surface reflections, and more.

Already, Nvidia had put a lot of work into this simulation, which runs on some of its most advanced graphics hardware. When the team began constructing the virtual environment, they studied the lander, the actual reflectivity of astronauts’ space suits and the properties of the Moon’s surface dust and terrain. With real-time ray tracing, they can now scrub the sun’s relative position back and forth and have every surface reflect light the way it actually would.


Idiot conspiracy theorists may still falsely argue that the original was a stage show, but Nvidia’s recreation is the real wizardry, potentially providing a ‘more real than archival’ look at something only a dozen people have actually experienced.

A long-running European antitrust investigation into whether Qualcomm used predatory pricing when selling UMTS baseband chips about a decade ago has landed the chipmaker with a fine of €242 million (~$271M) — aka, 1.27% of its global revenue for 2018.

The EU regulator concluded Qualcomm used abusive pricing to force its main rival at the time, UK-based company Icera, out of the market — by selling certain quantities of three of its UMTS chipsets below cost to two strategically important customers: Chinese tech companies Huawei and ZTE.

Commenting on the decision in a statement, competition commissioner Margrethe Vestager said: “Baseband chipsets are key components so mobile devices can connect to the Internet. Qualcomm sold these products at a price below cost to key customers with the intention of eliminating a competitor. Qualcomm’s strategic behaviour prevented competition and innovation in this market, and limited the choice available to consumers in a sector with a huge demand and potential for innovative technologies. Since this is illegal under EU antitrust rules, we have today fined Qualcomm €242M.”

Qualcomm has come out fighting in response — dismissing what it dubs as the Commission’s “novel theory” and saying it plans to appeal.

It also says it will provide a financial guarantee in lieu of paying the fine while this appeal is pending.

The case — which was triggered by a complaint filed by Icera — dates back to 2015, and relates to Qualcomm business practices between 2009 and 2011. The baseband chipsets in question were used over that period for connecting smartphones and tablets to cellular networks, including 3G networks, for both voice and data transmission.

The Commission says Icera had been offering advanced data rate performance vs Qualcomm’s chipsets, thereby posing a threat to the latter’s business.

The EU regulator found that Qualcomm held a dominant position in the global market for UMTS baseband chipsets between 2009 and 2011, when it had a market share of around 60% (almost 3x that of its biggest competitor). It also pointed to the high barriers to entry in this market, such as the significant initial investment in R&D required to design such chipsets, and IP barriers given the volume of related patents Qualcomm holds.

European competition rules mean those holding a dominant position in a market have a special responsibility not to abuse their powerful position by restricting competition.

The Commission says its conclusion that Qualcomm engaged in predatory pricing during the probe period is based on a price-cost test for the three Qualcomm chipsets concerned; and “a broad range of qualitative evidence demonstrating the anti-competitive rationale behind Qualcomm’s conduct, intended to prevent Icera from expanding and building market presence”.

“The results of the price-cost test are consistent with the contemporaneous evidence gathered by the Commission in this case,” it writes. “The targeted nature of the price concessions made by Qualcomm allowed it to maximise the negative impact on Icera’s business, while minimising the effect on Qualcomm’s own overall revenues from the sale of UMTS chipsets. There was also no evidence that Qualcomm’s conduct created any efficiencies that would justify its practice.

“On this basis, the Commission concluded that Qualcomm’s conduct had a significant detrimental impact on competition. It prevented Icera from competing in the market, stifled innovation and ultimately reduced choice for consumers.”

In May 2011 Icera was acquired for $367M by US tech company Nvidia — which the Commission notes then decided to wind down the baseband chipset business line in 2015.

In its press release responding to the decision, Qualcomm’s Don Rosenberg, executive vice president and general counsel, comes out throwing punches — claiming the Commission’s theory is without precedent and “inconsistent”.

“The Commission spent years investigating sales to two customers, each of whom said that they favored Qualcomm chips not because of price but because rival chipsets were technologically inferior.  This decision is unsupported by the law, economic principles or market facts, and we look forward to a reversal on appeal,” he writes. “The Commission’s decision is based on a novel theory of alleged below-cost pricing over a very short time period and for a very small volume of chips. There is no precedent for this theory, which is inconsistent with well-developed economic analysis of cost recovery, as well as Commission practice.

“Contrary to the Commission’s findings, Qualcomm’s alleged conduct did not cause anticompetitive harm to Icera, the company that filed the complaint. Icera was later acquired by Nvidia for hundreds of millions of dollars and continued to compete in the relevant market for several years after the end of the alleged conduct. We cooperated with Commission officials every step of the way throughout the protracted investigation, confident that the Commission would recognize that there were no facts supporting a finding of anti-competitive conduct.  On appeal we will expose the meritless nature of this decision.”

The size of the fine issued to Qualcomm — which is dwarfed by the $1.23BN fine EU regulators handed the company a year ago (for market abuse related to iPhone LTE chipsets) — has been calculated on the basis of the value of its direct and indirect sales of UMTS chipsets in the European Economic Area, with the Commission also factoring in the duration of the infringement it found to have taken place.

In addition to being fined, the Commission decision orders Qualcomm not to engage in the same or equivalent practices in the future.

Nvidia made a somewhat unusual announcement today: it’s launching a set of updated GPUs, ‘Super’ variants of the existing GeForce RTX 2060, 2070 and 2080, that offer better performance at the same price points and with the same power consumption specs.

Prices for the GeForce RTX 2060 Super, GeForce RTX 2070 Super and GeForce RTX 2080 Super will start at $399, $499 and $699 respectively. Nvidia will also continue to sell the entry-level non-Super RTX 2060 for $349, as well as the high-end RTX 2080 Ti, which starts at $999.

 


Most of the cards this new Super series replaces aren’t all that old, but the technology that makes them stand out, Nvidia’s real-time ray-tracing tech that allows game developers to render far more realistic characters and environments, is still pretty new. The performance gains, however, aren’t software-based. Instead, Nvidia improved its manufacturing process and is now able to turn on more cores on the 2060 and 2070 variants — and tweak the memory speed of the 2080 Super to 15.5Gbps. Thanks to this, the new 2060 Super is on average 15% faster than the 2060 it replaces. The 2070 Super boasts similar numbers.

It’s worth noting that the 2060 Super now also comes with 8GB of memory instead of 6GB.

The new 2060 and 2070 Super GPUs will go on sale on July 9, while those who want to have the high-end 2080 Super will have to wait until July 23.

“The ecosystem driving real-time ray tracing is immense — tens of millions of GPUs, industry standard APIs, leading game engines and an all-star roster of game franchises,” said Matt Wuebbling, head of GeForce Marketing for NVIDIA. “This killer lineup of SUPER GPUs delivers even more performance for demanding PC gamers and ensures that they’re prepared for the coming wave of real-time ray tracing blockbusters.”

This new lineup of GPUs will allow Nvidia to better compete with AMD’s upcoming ‘Navi’ GPUs, which are also scheduled to launch next week. Nvidia obviously doesn’t want AMD to get all of the mindshare, so today’s announcement makes sense (and was prefigured by a number of leaks in recent weeks).


Automaker Tesla is looking into how it might own another key part of its supply chain, through research being done at a secret lab near its Fremont, CA HQ, CNBC reports. The company currently relies on Panasonic to build the battery pack and cells it uses for its vehicles, which is one of the most significant components in its overall bill of materials, if not the most significant.

Tesla is no stranger to owning components of its own supply chain rather than farming them out to vendors as is more common among automakers – it builds its own seats at a facility down the road from its Fremont car factory, for instance, and it recently started building its own chip for its autonomous features, taking over those duties from Nvidia.

Eliminating links in the chain where possible is a move that emulates Apple, a company Tesla CEO Elon Musk has cited as an inspiration; under Steve Jobs, Apple adopted an aggressive strategy of taking control of key parts of its own supply mix, and it continues to do so where it can eke out improvements to component cost. Musk has repeatedly pointed out that batteries are a primary constraint when it comes to Tesla’s ability to produce not only its cars, but also its home power products like the Powerwall consumer domestic battery for solar energy systems.

Per the CNBC report, Tesla is doing its battery research at an experimental lab near its factory in Fremont, at a property it maintains on Kato Road. Tesla would need lots more time and effort to turn its battery ambitions into production at the scale it requires, however, so don’t expect it to replace Panasonic anytime soon. And in fact, it could add LG as a supplier in addition to Panasonic once its Shanghai factory starts producing Model 3s, per the report.

Volvo and Nvidia announced a new partnership today aimed at developing the next-generation decision-making engine for Volvo Group’s fully autonomous commercial trucks and industrial service vehicles. The partnership will use Nvidia’s Drive artificial intelligence platform, which encompasses processing data from sensors, perception systems, localization, mapping and path prediction and planning.

Volvo already has some freight vehicles with autonomous technology on board in early service, but these are deployed in tightly controlled environments and operate supervised, as at the Swedish port of Gothenburg. The partnership between Nvidia and Volvo Group is intended to help not only test and deploy a range of autonomous vehicles with AI decision-making capabilities on board, but also eventually ensure these commercial vehicles can operate on their own on public roads and highways.

Transport freight is only one target for the new joint effort – Nvidia and Volvo will also seek to build autonomous systems and vehicles that can handle garbage and recycling pickup, operate on construction sites, at mines, and in the forestry industry, too. Nvidia notes on its blog that its solution will help address soaring demand for global shipping, driven by increased demand for consumer package delivery. It’ll also cover smaller-scale use cases such as on-site port freight management.

The agreement between the two companies will span multiple years, and will involve teams from both companies sharing space both in Volvo’s HQ of Gothenburg, and Nvidia’s hometown of Santa Clara, California.

Nvidia has done plenty with autonomous trucking in the past, including an investment in Chinese self-driving trucking startup TuSimple, powering the intelligence of the fully driverless Einride transport vehicle and working with Uber on its ATG-driven truck business.

Habana Labs, a Tel Aviv-based AI processor startup, today announced its Gaudi AI training processor, which promises to easily beat GPU-based systems by a factor of four. While the individual Gaudi chips beat GPUs in raw performance, it’s the company’s networking technology that gives it the extra boost to reach its full potential.

Gaudi will be available as a standard PCIe card that supports eight ports of 100Gb Ethernet, as well as a mezzanine card that is compliant with the relatively new Open Compute Project accelerator module specs. The mezzanine card supports either ten ports of 100Gb Ethernet or 20 ports of 50Gb Ethernet. The company is also launching a system with eight of these mezzanine cards.

Last year, Habana Labs launched its Goya inferencing solution. With Gaudi, it now offers a complete solution for businesses that want to use its hardware instead of GPU-based systems with chips from the likes of Nvidia. Thanks to its specialized hardware, Gaudi easily beats an Nvidia T4 accelerator on most standard benchmarks — all while using less power.

“The CPU and GPU architecture started from solving a very different problem than deep learning,” Habana CBO Eitan Medina told me.  “The GPU, almost by accident, happened to be just better because it has a higher degree of parallelism. However, if you start from a clean sheet of paper and analyze what a neural network looks like, you can, if you put really smart people in the same room […] come up with a better architecture.” That’s what Habana did for its Goya processor and it is now taking what it learned from this to Gaudi.

For developers, the fact that Habana Labs supports all of the standard AI/ML frameworks, as well as the ONNX format, should make the switch from one processor to another pretty painless.

“Training AI models require exponentially higher compute every year, so it’s essential to address the urgent needs of the data center and cloud for radically improved productivity and scalability. With Gaudi’s innovative architecture, Habana delivers the industry’s highest performance while integrating standards-based Ethernet connectivity, enabling unlimited scale,” said David Dahan, CEO of Habana Labs. “Gaudi will disrupt the status quo of the AI Training processor landscape.”

As the company told me, the secret here isn’t just the processor itself but also how it connects to the rest of the system and other processors (using standard RDMA RoCE, if that’s something you really care about).

Habana Labs argues that scaling a GPU-based training system beyond 16 GPUs quickly hits a number of bottlenecks. For a number of larger models, that’s becoming a necessity, though. With Gaudi, that becomes simply a question of expanding the number of standard Ethernet networking switches so that you could easily scale to a system with 128 Gaudis.

“With its new products, Habana has quickly extended from inference into training, covering the full range of neural-network functions,” said Linley Gwennap, principal analyst of The Linley Group. “Gaudi offers strong performance and industry-leading power efficiency among AI training accelerators. As the first AI processor to integrate 100G Ethernet links with RoCE support, it enables large clusters of accelerators built using industry-standard components.”

During the Tesla Annual Shareholders Meeting that took place on Tuesday, Tesla CEO Elon Musk didn’t mince words when he talked about what he thinks of the value proposition of traditional fossil fuel vehicles. He called it “financially insane” to buy any car that isn’t an electric car capable of full autonomy – which, conveniently, is currently the type of vehicle that only Tesla claims to sell.

Musk reiterated a claim he’s made previously about Tesla vehicles: that all of its cars manufactured since October 2016 have everything they need to become fully autonomous, with those built before the release of its new autonomous in-car computer earlier this year needing only a computer swap, exchanging the Nvidia units they shipped with for the new Tesla-built computer.

The Tesla CEO also reiterated his claim from earlier this year that there will be 1 million robotaxis on the road as of next year, noting that it’s easy to arrive at that number if you consider that it includes all Teslas, including Model X, Model S and Model 3 sold between October 2016 and today.

Regarding Tesla’s progress with self-driving, Musk noted that by end of year, Tesla hopes to deliver autonomy such that while you’ll still have to supervise the driving in-car, it’ll get you from your garage to your workplace without intervention. He said that by next year, their goal is the same thing but without requiring supervision, and then some time after that, pending regulatory cooperation, they’ll be able to do full autonomy without anyone on board.

Musk ended this musing with a colorful metaphor, likening buying a car that’s powered by traditional fossil fuel and without any path to self-driving to someone today “riding a horse and using a flip phone.”

After a relatively quiet show last year, Computex picked up the pace this year, with dueling chip launches by rivals AMD and Intel and a slew of laptop releases from Asus, Qualcomm, Nvidia, Lenovo and other companies.

Founded in 1981, the trade show, which took place last week from May 28 to June 1, is one of the ICT industry’s largest gatherings of OEMs and ODMs. In recent years, the show’s purview has widened, thanks to efforts by its organizers, the Taiwan External Trade Development Council and Taipei Computer Association, to attract two groups: high-end computer customers, such as hardcore gamers, and startups looking for investors and business partners. This makes for a larger, more diverse and livelier show. Computex’s organizers said this year’s event attracted 42,000 international visitors, a new record.

Though the worldwide PC market continues to see slow growth, demand for high-performance computers is still being driven by gamers and the popularity of esports and live-streaming sites like Twitch. Computex, with its large, elaborate booths run by brands like Asus’ Republic of Gamers, is a popular destination for many gamers (the show is open to the public, with tickets costing NTD $200, or about $6.40), and began hosting esports competitions a few years ago.

People visit the Asus stand during Computex at the Nangang exhibition centre in Taipei on May 28, 2019. (Photo: Chris Stowers/AFP/Getty Images)

The timing of the show, formally known as the Taipei International Information Technology Show, at the end of May or beginning of June each year, also gives companies a chance to debut products they teased at CES or preview releases for other shows later in the year, including E3 and IFA.

One difference between Computex now and ten (or maybe even just five) years ago is that the increasing accessibility of high-end PCs means many customers keep a close eye on major announcements by companies like AMD, Intel and Nvidia, not only to see when more powerful processors will be available but also because of potential pricing wars. For example, many gamers hope competition from new graphics processing units from AMD will force Nvidia to bring down prices on its popular but expensive GPUs.

The Battle of the Chips

The biggest news at this year’s Computex was the intense rivalry between AMD and Intel, whose keynote presentations came after a very different twelve months for the two competitors.

During its press conference in Taipei a day before Computex started, Nvidia announced a new line of laptops that will run its RTX graphics processing units, as well as a new software platform called Studio, with SDKs and drivers to make graphics rendering and other tasks faster. The laptops are targeted at creative professionals, like video editors, photographers and graphic designers, and are meant to compete with the 15-inch MacBook Pro.


One of Nvidia’s new Studio laptops, meant to compete against the MacBook Pro

The series will include seventeen laptops made by Nvidia’s manufacturing partners (including Acer, ASUS, Dell, Gigabyte, HP, MSI and Razer). The laptops will begin retailing in June, with prices starting at $1,599.

The 17 laptops will be equipped with Quadro RTX 5000, 4000 or 3000 GPUs or GeForce RTX 2080, 2070 and 2060 GPUs. Nvidia claims they can perform up to seven times faster than the MacBook Pro. Studio laptops with Quadro RTX 5000 GPUs will have 16GB of graphics memory, and some of the devices will also have 4K displays and Nvidia’s Max-Q tech for building thin and lightweight laptops.

The Nvidia Studio suite includes the CUDA-X AI platform for automating tasks like color matching videos or tagging photos.