
CES may be going ahead as a shortened, pared-down operation this year, but a decent swathe of announcements prepared for the event is still coming out into the wild, particularly among the chipmakers that power the world’s computers. Intel has been a regular presence at the show and is continuing that with its run of news today, focusing on the newest, 12th generation of its mobile chips, with versions aimed at both enterprises and consumers, alongside updates to its Evo computing platform concept, new 35- and 65-watt desktop processors, and vPro platform launches.

With some of the lineup announced back in October (Intel has now dropped the Alder Lake name that still appeared to be in use then), today’s news is arguably the biggest push Intel has made in years to promote its processors and build for a range of use cases, from casual consumers through intense gaming to enterprise applications and IoT deployments. After what some described as a lacklustre 11th-generation launch, Intel is unveiling no fewer than 28 new 12th-generation Intel Core mobile processors, plus an additional 22 desktop processors.

Intel claims that the mobile processors clock in at speeds up to 40% faster than the previous generation.

“Intel‘s new performance hybrid architecture is helping to accelerate the pace of innovation and the future of compute,” said Gregory Bryant, executive vice president and general manager of Intel’s Client Computing Group, in a statement. “We want to bring that idea of ubiquitous computing to life,” he added in a presentation today at CES.

The H-series of 12th Gen Intel Core mobile processors comes in four main categories: i3, i5, i7, and i9. The i9-12900HK is the fastest of the range of eight, and these chips are among the first from Intel to combine performance and efficient cores on the same die to better handle heavy workloads. They come with frequencies of up to 5GHz, 14 cores (6 performance, 8 efficient) and 20 threads for multi-threaded applications, and they also offer memory support for DDR5/LPDDR5 and DDR4/LPDDR4 modules at up to 4800 MT/s, which Intel says is an industry first for H-series mobile processors.
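
For developers, that split topology is visible to software. As a minimal sketch (assuming a recent Linux kernel, which exposes hybrid Intel parts through the cpu_core and cpu_atom sysfs entries), you can check which logical CPUs are which:

```python
# Sketch: list which logical CPUs are performance (P) vs efficient (E)
# cores on a hybrid Intel processor under Linux. Assumes a kernel new
# enough to expose /sys/devices/cpu_core and /sys/devices/cpu_atom;
# falls back gracefully on older kernels or non-hybrid CPUs.
from pathlib import Path

def cpu_list(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "not exposed here"

print("P-cores:", cpu_list("/sys/devices/cpu_core/cpus"))
print("E-cores:", cpu_list("/sys/devices/cpu_atom/cpus"))
```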

They offer support for Deep Link for optimized power usage; Thunderbolt 4 for faster data transfers (up to 40 Gbps) and connectivity; and Intel’s new integrated WiFi 6E, which Intel dubs its “Killer” WiFi and which will be available in nearly all laptops running Intel’s 12th-generation chips. The interesting thing about this latest WiFi version is that it essentially optimizes for gameplay and other bandwidth-intensive activities: latency is reduced by putting the most demanding applications on channels separate from the rest of the applications on a device that might also be using bandwidth, with those relegated to lower-bandwidth channels where they essentially run in the background. Bands can also be merged intelligently by the system when more power is needed for a specific application. All this will be available from February 2022, Intel said.
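
Intel’s Killer engine does this classification inside the driver rather than exposing an app-facing API, but as a rough application-level analogy, a latency-sensitive program can mark its own traffic for preferential treatment. A minimal sketch (the server address is a placeholder, and the socket option as written applies on Linux):

```python
# Rough analogy only: mark a game's UDP socket as "expedited forwarding"
# (DSCP EF) so QoS-aware network gear can prioritize it over bulk
# background traffic. Killer WiFi does this kind of classification
# automatically in the driver; this is the hand-rolled equivalent.
import socket

DSCP_EF = 46  # expedited forwarding; the TOS byte is DSCP shifted left by 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"player-input", ("203.0.113.10", 27015))  # placeholder server
```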

The H-series, it added, is now in full production, with Acer, Dell, Gigabyte, HP, Lenovo, MSI, and Razer among those building machines using it: some 100 designs in all, covering both Windows and Chrome operating systems.

In terms of the applications Intel is highlighting for its chips, in addition to enterprises and more casual consumers, it continues to focus on gaming. No surprise there, given that the demands of the most advanced games and gamers today have become major drivers for improving compute power. To that end, Intel is making sure its chips are in that mix with the 12th generation.

That has included investing in gaming companies (such as Razer), as well as working closely with developers to optimize speeds on its processors. Intel said that its work with Hitman 3, for example, ensuring its chips could support the game’s audio and graphics processing, increased frame rates by up to 8%.

“Tuning games to achieve maximum performance can be daunting,” said Maurizio de Pascale, chief technology officer at IO Interactive, in a statement. “But partnering with Intel and leveraging their decades of expertise has yielded fantastic results – especially optimizing for the powerful 12th Gen Intel Core processors. As an example, anyone who plays on a laptop…”

Content applications remain another major part of the market for Intel, with customers building software and hardware optimized for its chips including Adobe, Autodesk, Foundry, Blender, Dolby, Dassault, Magix and more. Indeed, the processor now sits at the center of all of these activities as they increasingly live digitally. They also represent a large number of verticals that Intel can target, including product design, engineering, broadcasting and streaming, architecture, content creation and scientific visualization.

The 22 new desktop processors being unveiled come in both 65-watt and 35-watt varieties. Alongside the higher-wattage (and thus more power-hungry) chips, Intel also launched a new Laminar cooler.

Another strand of Intel’s work over the last several years has been to approach the specifications of computers running its chips more holistically, integrating what it builds with where it can be put to use, by way of its Evo platform and Project Athena. Intel said that there are now more than 100 co-engineered designs using the 12th-generation chips based on these, ranging from foldable displays to more standard laptops, with many of them launching during the first half of this year.

Evo specifications already cover responsiveness, battery life, instant wake, and fast charge, and Intel said a new parameter, “intelligent collaboration”, will be added to that range. It will focus on how many of us use computers today: remote collaboration, videoconferencing and the features that make it better, such as AI-based noise cancellation, better WiFi usage, and enhanced camera and other imaging effects. This is likely where its $100-$150 million acquisition of screen mirroring tech provider Screenovate, which it confirmed in December, will fit.

At a time when Intel continues to face stiff competition from the likes of AMD and Nvidia, and Apple makes yet more moves to distance itself from the company, continuing to reinforce the partners it does have, and to build an ecosystem around them, is the strategy Intel will keep pursuing, as long as it holds up its end of the innovation bargain.

“Microsoft and Intel have a long history of partnering together to deliver incredible performance and seamless experiences to people all over the world,” said Panos Panay, chief product officer at Microsoft, in a statement. “Whether playing the latest AAA title, encoding 8K video or developing complex geological models, the combination of Windows 11 and the new 12th-gen Intel Core mobile processors means you’re getting a powerhouse experience.”


Omniverse is Nvidia’s platform for allowing creators, designers and engineers to collaboratively build virtual worlds, bringing together design tools and assets from first- and third-party applications into a single hardware and software ecosystem. Until now, Omniverse and the various Nvidia tools that support it were in beta, but at CES today the company took off the beta label and made Omniverse generally available to creators.

The company says Omniverse has already been downloaded by almost 100,000 creators and with today’s update, it is bringing a number of new features to the platform, too. These include the Omniverse Nucleus Cloud, a service for sharing large Omniverse 3D scenes that will make it possible for creators and clients to collaborate on scenes in a way that’s similar to working on a cloud-shared document (and without having to move a ton of data for every small change).

Image Credits: Nvidia

At the core of Omniverse is the Universal Scene Description format, which makes it easy to import assets from a wide variety of existing tools. But sometimes you just want a basic 3D asset and are willing to pay for it, so Nvidia has now also added support for 3D marketplaces and libraries from the likes of TurboSquid by Shutterstock, CGTrader, Sketchfab and Twinbru in the Omniverse Launcher. Reallusion’s ActorCore, Daz3D and e-on software’s PlantCatalog will soon launch their own Omniverse-ready assets, too.
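
USD itself is open source and scriptable, which is what makes this mixing of assets practical. A minimal sketch using the pxr Python bindings (the file and asset names here are hypothetical) shows the referencing mechanism:

```python
# Minimal USD sketch: build a scene that references an external asset
# without copying its data. Requires the open-source usd-core package
# (pip install usd-core); file and asset names are hypothetical.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")
UsdGeom.Xform.Define(stage, "/World")

# Pull in an asset authored elsewhere (a DCC tool, or a marketplace
# download) by reference; the source file stays the single source of truth.
chair = stage.DefinePrim("/World/Chair")
chair.GetReferences().AddReference("./assets/chair.usd")

stage.GetRootLayer().Save()
# In Omniverse, the same stage would live on a Nucleus server and be
# opened via an omniverse:// URL by every collaborator at once.
```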

Image Credits: Nvidia

On the free side, Nvidia is expanding its set of Omniverse Machinima assets with new characters and objects from games like Shadow Warrior 3 and Mount & Blade II: Bannerlord.

And when you need your character to speak, Omniverse Audio2Face, an existing AI-enabled app that animates 3D faces from an audio track, now features blendshape support and the ability to export directly to Epic’s MetaHuman Creator app.


You can now add another set of Nvidia-based graphics cards to the graphics cards you probably won’t be able to buy anytime soon, as the company today launched the GeForce RTX 3050 for desktops.

Starting at $249, the budget-friendly card — assuming Nvidia and its partners can produce enough to keep prices from escalating — will feature 8GB of GDDR6 memory and promises to run the latest games at over 60 frames per second at 1440p with ray tracing enabled. Like its more powerful brethren in the company’s 30-series, it will feature third-generation Tensor cores to power Nvidia’s DLSS for smart upscaling and AI workloads, as well as second-generation RT cores for ray tracing.

Image Credits: Nvidia

Nvidia noted that a full 75 percent of gamers are still using GTX GPUs, which should make the RTX 3050 a compelling upgrade for many. The 2016 GTX 1050, for example, isn’t quite able to run modern games at 60 fps anymore, and even the more recent GTX 1650 now often struggles to do so.

Cards based on the new chip will launch on January 27.

Image Credits: Nvidia

In addition to these budget chips, Nvidia also teased an RTX 3090 Ti, which looks like it will be a monster of a GPU, but didn’t provide any real details about it. We’ll know more later this month.

For laptop gamers and creators, Nvidia also today announced the GeForce RTX 3080 Ti laptop GPU, which promises to offer more performance than a desktop Titan RTX machine. Laptops with this chip will start at $2,499. On a more budget-friendly note, the new GeForce RTX 3070 Ti GPU for laptops will launch on machines that start at $1,499 and offer 70 percent more performance than 2070 Super laptops at a 1440p resolution.

Image Credits: Nvidia

And 1440p, it’s worth noting, is what Nvidia is now pushing for e-sports games, with a new category of gaming monitors that will be able to handle up to 360 fps at 1440p, starting with a new 360 fps display from Asus and 300 fps support on a set of new displays from AOC, MSI and ViewSonic.
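
For context on what those numbers demand, the per-frame time budget shrinks quickly as refresh targets rise:

```python
# Per-frame time budget at common e-sports refresh targets: at 360 fps
# the entire render-and-display pipeline gets under 3 ms per frame.
for fps in (60, 120, 300, 360):
    print(f"{fps:>3} fps -> {1000 / fps:.2f} ms per frame")
```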


At CES, Nvidia today put a strong emphasis on its GeForce Now game streaming service, its competitor to the likes of Google’s Stadia (that’s still around, right?), Amazon’s Luna and Microsoft’s increasingly popular Xbox Cloud Gaming service. Each of these uses a different business model; GeForce Now makes it easy for players to bring games they bought elsewhere to the service, with Nvidia offering a restricted free tier and then charging a membership fee for access to its servers, starting at $10 per month.

Today, the company announced a number of new partnerships, as well as the news that Electronic Arts’ Battlefield 4 from 2013 and Battlefield V from 2018 are now available for streaming on the service. Not exactly day-one releases, but nice to have, I guess.

What’s maybe more important is that Nvidia continues to expand the overall GeForce Now ecosystem. In this case, that means a deal with AT&T, which will give its customers on a 5G device with a 5G “unlimited” plan a free six-month GeForce Now Priority membership. Nvidia says the two companies are “teaming up as 5G technical innovation collaborators,” but we’re basically talking about a marketing deal here. The whole promise of 5G is low latency, after all.

Image Credits: Nvidia

For the living room, Nvidia is teaming up with Samsung to bring its game streaming platform to that company’s smart TVs after already offering its app on 2021 LG WebOS TVs as a beta last year.

“Our cloud gaming service will be added to the Samsung Gaming Hub, a new game-streaming discovery platform that bridges hardware and software to provide a better player experience,” Nvidia writes in today’s announcement. It’ll have more to share about this deal in the second quarter of this year.


Ahead of the official start of CES, Samsung today revealed its vision for its next generation of smart TVs that will include everything from cloud-based gaming services to video chat while watching TV to even NFTs. The company says its 2022 Smart TVs will ship with a new “Smart Hub” that will offer consumers the ability to switch between different types of entertainment, including media, gaming via its new Gaming Hub, and “ambient” — the latter referring to the TV’s ability to display art, photos, or other information on the TV when it’s not in use.

For gamers, the most notable addition to the new TVs will be its game streaming discovery platform, the Samsung Gaming Hub, powered by Tizen. This service will allow game streaming providers to bring their game libraries directly to the TV. Samsung today announced partnerships with NVIDIA GeForce NOW, Google’s Stadia, and Utomik, and says more partnerships will come further down the road.

From the hub, Samsung TV owners will be able to browse available titles, search for and purchase games, and instantly play their favorites. They will also be able to pair their game controllers with the new Gaming Hub, the company notes. Plus, users can easily access YouTube, where they can follow favorite streamers to watch gaming content.

Select 2022 4K and 8K TVs and gaming monitors will also support the new HDR10+ GAMING standard, offering an HDR gaming experience with low latency, variable refresh rate, and refresh rates of over 120Hz. The standard features automated HDR calibration that does away with the need to manually calibrate settings across input sources, like consoles, PCs and more, the company notes. Supported TVs include the Neo QLED lineup with the Q70 TV series and above, as well as Samsung gaming monitors.

Samsung’s new Gaming Hub will launch later in 2022 and will be available through the main navigation menu across the Gaming, Media, and Lifestyle categories.

The addition of cloud gaming to smart TVs isn’t unique to Samsung. LG last year said it was bringing GeForce NOW to its WebOS smart TVs, as well as Google Stadia. Amazon’s Luna runs on its Fire TVs and Google Stadia works on a range of supported smart TVs, including those from LG, Hisense, TCL, Philips, and others. And of course, you can also access these services through streaming devices as an alternative.

Beyond gaming, the new line of Samsung’s 2022 Smart TVs will embrace other trends that grew in popularity over the past year or so, including co-watching TV and movies with friends and buying and selling NFTs.

During the early days of the pandemic, family and friends looked for different ways to connect and spend time together while under Covid-19 lockdowns and other restrictions. This led to a rise in co-watching services and features that let users stream entertainment at the same time as their loved ones. Hulu, Amazon Prime Video, and Disney+, among others, launched co-viewing features that let people stream a movie or show at the same time while in different locations. More recently, Apple launched SharePlay over FaceTime, which also supports Disney+ and other streaming apps, like NBA, Paramount+, Showtime, Apple TV+, and TikTok, for example.

Samsung’s take on this trend is to instead offer its own, new “Watch Together” app that lets family and friends video chat while watching a TV show or movie on their TV.

Image Credits: Samsung

A more unusual addition to the Smart Hub is support for NFTs. The platform will offer an app that allows users to discover, purchase and trade NFTs on Samsung’s MICRO LED, Neo QLED and The Frame TV models later this year.

“With demand for NFTs on the rise, the need for a solution to today’s fragmented viewing and purchasing landscape has never been greater,” the company told The Verge, when detailing what it calls the “world’s first TV screen-based NFT explorer and marketplace aggregator.” Users will be able to browse, preview and purchase NFT art, as well as show it off on their TVs — the latter enhanced by a smart calibration feature that will automatically adjust the TV’s display settings to match the creator’s recommendations. As users research NFTs, they’ll also be able to view an NFT’s history and blockchain metadata.

Image Credits: Samsung

In addition to the included services in the new Smart Hub, the 2022 Smart TVs will work with accessories like an Auto Rotating Wall Mount and Stand, which allow users to rotate their screen to a vertical mode. This mode will support Samsung’s own lifestyle features like its Ambient Mode+ and Art Mode, as well as third-party apps like TikTok and YouTube.

NVIDIA’s plan to acquire ARM just hit a major stumbling block. The Federal Trade Commission has sued to block the merger over concerns the $40 billion deal would “stifle” competition for multiple technologies, including datacenters and car computers. ARM is a “critical input” that fosters competition between NVIDIA and rivals, the FTC said, and a merger would give NVIDIA a way to “undermine” those challengers.

The FTC was also worried NVIDIA would have access to sensitive info from ARM licensees. The merger could reduce the incentive for ARM to develop tech that might run counter to NVIDIA’s business goals, officials added. The administrative trial is due to start August 9th, 2022.

The company didn’t appear bothered. NVIDIA characterized the lawsuit as the “next step” in the FTC process, and repeated its arguments in favor of the buyout. The acquisition would “accelerate” ARM’s product plans, foster more competition and still protect the chip architecture designer’s open licensing model, according to NVIDIA. You can read the full statement below.

Despite the claims, an FTC lawsuit is a huge issue for NVIDIA. The Commission files lawsuits like these when it believes a company is breaking the law — concessions might not be enough. It also comes after the European Commission launched an investigation into the purchase in October. NVIDIA is facing questions from major regulators clearly wary of the acquisition, and those agencies might not accept the answers.

As it stands, NVIDIA’s competition likely isn’t happy. Qualcomm reportedly objected to the ARM deal in communications with the FTC (among other bodies) over fears NVIDIA might refuse to license designs. And when heavyweights like Apple, MediaTek and Samsung also depend on ARM, it’s doubtful the rest of the market would be enthusiastic. At the least, the trial would likely delay closure of the union past NVIDIA’s original 2022 target.

As we move into this next step in the FTC process, we will continue to work to demonstrate that this transaction will benefit the industry and promote competition. NVIDIA will invest in Arm’s R&D, accelerate its roadmaps, and expand its offerings in ways that boost competition, create more opportunities for all Arm licensees and expand the Arm ecosystem. NVIDIA is committed to preserving Arm’s open licensing model and ensuring that its IP is available to all interested licensees, current and future.

Editor’s note: This article originally appeared on Engadget.

Chipmaker Nvidia’s planned $40 billion purchase of UK-based chip designer ARM will face an in-depth probe by the UK’s competition regulator after the government ordered the Competition and Markets Authority (CMA) to take a closer look at the proposed transaction.

The UK’s digital secretary, Nadine Dorries, said today that she has written to the CMA instructing it to carry out a phase two investigation — citing competition and national security concerns.

Back in August, the government published details of the CMA’s preliminary probe which raised a number of competition concerns attached to the acquisition — saying it could lead to a “substantial lessening of competition” in markets for data centres, Internet of Things, the automotive sector and gaming applications.

The CMA’s phase one report, which recommended a deeper probe on competition grounds — but did not make a decision on the national security issue — has been published in full today.

Earlier this year, back in April, the government issued an intervention notice on national security grounds — asking the CMA to prepare a report on the implications of the transaction to help it decide whether a deeper probe is required.

Today Dorries said national security interests remain “relevant” — and “should be subject to further investigation”.

Under the Enterprise Act 2002, the digital secretary has statutory powers that allow her to make a quasi-judicial decision to intervene in mergers under a handful of public interest considerations, including for matters of national security.

Commenting in a statement, she said: “I have carefully considered the Competition and Markets Authority’s ‘Phase One’ report into NVIDIA’s proposed takeover of Arm and have decided to ask them to undertake a further in-depth ‘Phase Two’ investigation.

“Arm has a unique place in the global technology supply chain and we must make sure the implications of this transaction are fully considered. The CMA will now report to me on competition and national security grounds and provide advice on the next steps.”

“The government’s commitment to our thriving tech sector is unwavering and we welcome foreign investment, but it is right that we fully consider the implications of this transaction,” Dorries added.

Nvidia has been contacted for comment on the phase 2 referral.

The CMA will have 24 weeks (with a possible eight-week extension) to conduct the phase two probe and report its findings to the government — meaning, at the very least, Nvidia’s acquisition of ARM faces months more delay before the transaction could be cleared.

The digital secretary will need to take a decision on whether to make an “adverse public interest finding” — in relation to the acquisition on national security and/or competition grounds — which, if she does make such a finding, could lead to the acquisition being blocked on public interest grounds.

A final decision on the national security issue lies with the UK secretary of state — who has 30 days after receiving the CMA’s final report to make the call.

If Dorries finds no adverse public interest grounds for intervention, she would refer the case back to the CMA — which could still advise against the deal on competition grounds and/or impose conditions to remedy concerns so it may go ahead.

So there are substantial barriers to clearance — with the potential for the acquisition to be blocked on both national security and competition grounds, or on one of either ground.

Although it could also ultimately be cleared on both grounds (albeit that seems unlikely on the competition front, given the CMA’s phase one probe raised significant concerns).

The deal could also be approved subject to remedies (aka conditions and/or restrictions intended to address specific concerns).

Growing concerns

Nvidia’s plan to buy ARM faced instant domestic opposition, with one of the original co-founders of the company starting a campaign to ‘save ARM’ from being snapped up by the US giant.

The global chip crunch has likely only heightened concerns about supply chain stability in the semiconductor arena (though ARM develops and licenses IP, rather than making chips itself). And the EU recently announced a plan to legislate with a Chips Act that’s intended to strengthen regional sovereignty around semiconductor supply.

The European Union is also examining the Nvidia-ARM deal directly — announcing its own in-depth investigation late last month and throwing up another roadblock to the US giant’s bid to scoop up the UK chip designer.

In a similar finding to the CMA’s phase 1 probe, the Commission said its preliminary analysis of the Nvidia-ARM deal raised a raft of competition concerns.

“The Commission is concerned that the merged entity would have the ability and incentive to restrict access by NVIDIA’s rivals to Arm’s technology and that the proposed transaction could lead to higher prices, less choice and reduced innovation in the semiconductor industry,” the EU’s executive wrote last month. “Whilst Arm and NVIDIA do not directly compete, Arm’s IP is an important input in products competing with those of NVIDIA, for example in datacentres, automotive and in Internet of Things,” added competition chief Margrethe Vestager in a statement.

“Our analysis shows that the acquisition of Arm by NVIDIA could lead to restricted or degraded access to Arm’s IP, with distortive effects in many markets where semiconductors are used. Our investigation aims to ensure that companies active in Europe continue having effective access to the technology that is necessary to produce state-of-the-art semiconductor products at competitive prices.”

The EU has until March 15, 2022 to decide whether or not to clear the acquisition. 

According to a Reuters report last month, the Commission was not swayed by concessions offered earlier by Nvidia as it sought to avoid an in-depth EU probe.

H2O.ai — a startup that has developed an open-source framework as well as proprietary apps that make it easier for any kind of enterprise to build and operate artificial intelligence-based services — has seen a surge of interest as AI applications have become more ubiquitous, and enterprises beyond tech companies want to get in on the action. Now, it has raised $100 million to fuel its growth, a round of funding that values H2O.ai at $1.7 billion post-money ($1.6 billion pre-money).

This is a Series E round, and it’s being led by a strategic backer, the Commonwealth Bank of Australia (CBA), which has been a customer of the startup and will be using the backing to kick off a deeper partnership between the two to build new services. Others in the round include Goldman Sachs, Pivot Investment Partners, Crane Venture Partners and Celesta Capital. Further plans for the funding include building more products for H2O.ai as a whole, and hiring more talent to continue expanding the company’s H2O AI Hybrid Cloud platform.

This is not the first time that a customer has led a round as a strategic backer: in 2019, Goldman Sachs led the company’s Series D of $72.5 million. As a sign of how the company has been growing, and of the general appetite for what it does, H2O.ai’s valuation has leapt since that last round, when it was valued at $400 million, per PitchBook data. Mountain View-based H2O.ai has raised $246.5 million to date.

The fact that both of the last rounds have been led by big banks that are also customers of H2O.ai’s speaks a lot to where the opportunity has been for the startup. Sri Ambati, the founder and CEO (who previously was also a co-founder of Platfora, which was acquired by Workday), told me over email that about 40% of the company’s revenues currently come from the very wide and all-encompassing world of financial services.

“Retail banking, credit cards, payments — almost every payment system from PayPal to MasterCard are customers of H2O,” he said. On the equities side, companies power fixed income, asset management, and mortgage-backed security services using H2O’s technology, with MarketAxess, Franklin Templeton, and BNY Mellon also “strong” customers, he said.

The company is also seeing a growing complement of business from other verticals, he added: Unilever, Reckitt and P&G are among those in consumer goods; UPS is one of its users in logistics and delivery; Chipotle is among those in food services; and he said that AT&T “is one of our largest customers.”

Covid-19 has had a role to play here, too.

“Manufacturing became a fast-growing vertical due to supply chain disruption and demand sensing,” he said of the pandemic. “We launched H2O AI Health to help our hospitals and providers, payers like Aetna and pharma customers.”

Notably, H2O.ai is also now breaking ground into working more with other tech companies that want to build more AI into their own workflows to in turn provide services to their own customers. “Our latest wins are in vertical clouds and SaaS ISVs,” Ambati said.

The company has offered an open-source component to its services, which it calls simply H2O, from its earliest days, and it is now used by over 20,000 enterprises. Part of the reason for that is its flexibility: H2O.ai says the open-source framework works on top of existing big data infrastructure, whether on bare metal or on existing Hadoop, Spark or Kubernetes clusters, and is able to ingest data directly from HDFS, Spark, S3, Azure Data Lake or any other data source into its in-memory distributed key-value store.
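
In practice that framework is typically driven from Python or R. A minimal sketch of the flow described above, using H2O’s Python client and its AutoML module (the S3 path and target column are hypothetical):

```python
# Minimal H2O sketch: start (or attach to) a local cluster, ingest a CSV
# from S3 into the distributed in-memory store, and let AutoML fit a
# leaderboard of models. The S3 path and "churn" column are hypothetical.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # starts a local H2O cluster if one isn't already running
frame = h2o.import_file("s3://example-bucket/customers.csv")
frame["churn"] = frame["churn"].asfactor()  # treat the target as categorical

aml = H2OAutoML(max_models=10, seed=1)
aml.train(y="churn", training_frame=frame)
print(aml.leaderboard.head())
```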

“Our open source platform gives freedom and ability for customers to build their own AI centers of competence and excellence,” Ambati said of the open source tools. “We are like the Tenzing Sherpas of the AI mountains helping our customers to traverse and conquer AI peaks.”

That framework can be used by engineers to build customized applications, while H2O.ai’s proprietary tools provide more complete applications in areas like fraud detection, churn prediction, anomaly detection, price optimization and credit scoring — areas that can benefit from ingesting massive amounts of data to gain better insights into what might happen next. These sit either as a complement to what human analysts and data scientists might be able to unearth or, potentially, in some cases, as a replacement for the more basic work they might do. There are currently some 45 such applications.

The plan, Ambati said, is to over time build out more of these, which will reside in “app stores” in specific verticals offering a range of its proprietary, pre-built tools particular to the demands of each of them.

The trend fueling H2O.ai’s growth has been gaining momentum for several years now.

Artificial intelligence holds a lot of promise for the world of enterprise IT: used well, tools like machine learning, natural language processing and computer vision can speed up productivity, or even open up completely new areas of opportunity for an organization. Over time, it can save companies billions of dollars in operational and other costs.

One big issue, however, is that in many cases, organizations might lack the internal teams to build or carry through projects that use AI, and that’s before considering the fact that as needs and parameters evolve all of that infrastructure will need updating, too. Technology touches everything in an enterprise these days, but not every enterprise is a tech company.

H2O.ai is not the first or only startup that has aimed to fill this gap in the market, although it seems to have managed its task a little more successfully than others.

Notably, Element.AI out of Canada was built on the back of a large amount of funding and buy-in from big tech companies like Microsoft and Nvidia, also to address the idea of democratizing AI for the wider world of enterprises that might lack the resources to build and run AI tools themselves but could very much benefit from them before their businesses simply get cannibalized by the many AI-fuelled tech companies moving into their spaces. It had a strong focus on integration (it was a little like an Accenture for AI services) but never managed to make a big enough jump from concept to business, and it was eventually acquired in 2020 by ServiceNow to complement its own efforts to build tools for businesses.

Ambati said that only about 10% of H2O.ai’s business is in the area of services, with the remaining 90% coming from its products — his explanation for why one startup’s approach worked while the other’s did not.

“It is easy to get lured by services in data science and AI,” he said. “Being true to our product maker culture and yet building deep customer empathy and listening is critical to success. Customers experience our maker culture and become makers themselves. We are continuously making our software easier (democratizing) low-code, reusable recipes and automation through AI Cloud and building data pipelines, AI AppStores and delivering AI as a service that our customers can use to improve their customer experiences, brands and communities.

“The big difference — we are raising a forest, not just a tree. H2O AI Cloud, H2O Wave our low-code Application Development, H2O AI AppStores, Marketplace and H2O-3 Open Source ML are at the core of AI Applications and software already and we are partnering with customers and their ecosystem of partners and developers.”

That’s a play, and a business, resonating well with investors, too:

“Commonwealth Bank has a significant asset in the millions of data points collected every day. AI already has helped us to improve our customer experience, however, we know there is untapped potential to do more,” said Matt Comyn, CEO of CBA, in a statement. “The investment in and strategic partnership with H2O.ai extends our leadership in artificial intelligence and ultimately boosts the bank’s ability to offer leading digital propositions and reimagined products and services to customers.” Dr. Andrew McMullan, chief data and analytics officer at CBA, will join the H2O.ai board.

NVIDIA has unveiled its next-generation cloud gaming platform called GeForce Now RTX 3080 with “desktop-class latency” and 1440p gaming at up to 120 fps on PC or Mac. The service is powered by a new gaming supercomputer called the GeForce Now SuperPod and costs double the price of the current Priority tier.

The SuperPod is “the most powerful gaming supercomputer ever built,” according to NVIDIA, delivering 39,200 TFLOPS, 11,477,760 CUDA cores and 8,960 CPU cores. NVIDIA said each session will provide an experience equivalent to 35 TFLOPS, or triple the Xbox Series X — roughly equal to a PC with an 8-core CPU, 28GB of DDR4-3200 RAM and a PCIe Gen4 SSD.
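
NVIDIA didn’t say how many simultaneous sessions one SuperPod can host, but those two figures invite a back-of-envelope estimate:

```python
# Back-of-envelope only: 39,200 total TFLOPS divided by ~35 TFLOPS per
# RTX 3080-class session suggests on the order of a thousand concurrent
# players per SuperPod. Real capacity also depends on encoding, memory
# and scheduling, none of which raw FLOPS capture.
total_tflops = 39_200
per_session_tflops = 35
print(total_tflops / per_session_tflops)  # 1120.0
```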

Image Credits: NVIDIA

As such, you’ll see 1440p gaming at up to 120 fps on a Mac or PC, and even 4K HDR on a Shield, though NVIDIA didn’t mention the refresh rate for the latter. It’ll also support 120 fps on mobile, “supporting next-gen 120Hz displays,” the company said. By comparison, the GeForce Now Priority tier is limited to 1080p at 60 fps, with adaptive VSync available in the latest update.

It’s also promising “click-to-pixel” latency down to 56 milliseconds, thanks to tricks like adaptive sync that reduce buffering, supposedly beating other services and even local, dedicated PCs. However, that’s based on a 15-millisecond round trip delay (RTD) to the GeForce Now data center, something that obviously depends on your internet provider and where you’re located.
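
That 56 ms figure is easiest to read as a budget. Only the 15 ms network leg comes from NVIDIA’s stated assumption; the rest of this split is an illustrative guess, not a published breakdown:

```python
# Illustrative "click-to-pixel" budget. Only the 15 ms round trip is
# Nvidia's stated assumption; the other stage estimates are hypothetical
# placeholders chosen to sum to the quoted 56 ms.
budget_ms = {
    "input capture + send": 4,
    "network round trip (RTD)": 15,  # Nvidia's assumed data center RTD
    "server render + encode": 22,
    "client decode + display": 15,
}
print(sum(budget_ms.values()), "ms total")  # 56 ms total
```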

NVIDIA’s claims aside, it’s clearly a speed upgrade over the current GeForce Now Priority tier, whether you’re on a mobile device or a PC. There’s a price to pay for that speed, though. The GeForce Now premium tier started at $50 per year and recently doubled to $100, which is already a pretty big ask. But the RTX 3080 tier is $100 for six months (around double the price) “in limited quantities,” with Founders and Priority members getting early access starting today. If it lives up to the claims, it’s cheaper than buying a new PC, in any case.

Editor’s note: This article originally appeared on Engadget.

Building usable models to run AI algorithms requires not just adequate data to train systems, but also the right hardware to subsequently run them. And because the theoretical and the practical are often not the same thing, a gap frequently opens up between what data scientists hope to do and what they can practically do. Today, a startup called Deci, which has built a deep learning platform to help bridge that gap by building models that work with the data and hardware actually available, is announcing funding after finding strong traction for its products with Fortune 500 tech companies running mass-market, AI-based products built on video and other computer vision services.

The Tel Aviv-based startup has picked up a Series A of $21 million, money it will use to continue expanding its product and customer base. Insight Partners is leading the round, with previous backers Square Peg, Emerge and Jibe Ventures participating alongside some new backers: Samsung Next, Vintage Investment Partners, and Fort Ross Ventures. Square Peg and Emerge led Deci’s seed round of $9.1 million a year ago. It also works very closely with others who are not strategic or financial investors (but may well be down the line?). Intel collaborated with it on MLPerf, where Deci’s technology accelerated the inference speed of the ResNet-50 neural network when run on Intel CPUs.

Up to now, Deci has focused its attention on models for computer vision-based products, where its platform — built on its own proprietary AutoNAC (Automated Neural Architecture Construction) technology — is able to build, and continuously update, models quickly for services that might otherwise have taken much longer, and a lot of trial and error, to devise.
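
Deci doesn’t disclose how AutoNAC works, but to illustrate the general shape of hardware-aware neural architecture search, here is a toy random-search loop that trades a proxy quality score against a latency budget (every number and function in it is invented for illustration; this is not Deci’s method):

```python
# Toy hardware-aware architecture search, for illustration only; this is
# NOT Deci's proprietary AutoNAC. Sample candidate architectures, reject
# those over a latency budget, keep the best-scoring survivor.
import random

random.seed(0)
LATENCY_BUDGET_MS = 20.0  # invented target for a hypothetical device

def sample_architecture():
    return {"depth": random.choice([8, 12, 16, 20]),
            "width": random.choice([32, 64, 96, 128])}

def estimated_latency_ms(arch):
    # Stand-in for profiling the candidate on the real target hardware.
    return 0.08 * arch["depth"] * arch["width"] / 8

def proxy_score(arch):
    # Stand-in for quickly training/evaluating a proxy of the candidate.
    return 0.012 * arch["depth"] + 0.0008 * arch["width"]

best = None
for _ in range(200):
    arch = sample_architecture()
    if estimated_latency_ms(arch) > LATENCY_BUDGET_MS:
        continue  # too slow for the target device
    if best is None or proxy_score(arch) > proxy_score(best):
        best = arch

print("best candidate within budget:", best)
```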

One key client, for example, is one of the world’s biggest and best-known videoconferencing platforms (unfortunately, the name is undisclosed), which is using Deci to build AI models so that users can blur their backgrounds in video calls. Here, all of the computing needed to execute that blurring happens at “the edge”, on users’ own CPU-based devices (that is, devices not typically optimized for AI workloads).
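
For a sense of what that edge workload looks like, background blur can be sketched with off-the-shelf tools. This uses MediaPipe’s selfie-segmentation model and OpenCV as generic stand-ins, not the platform’s or Deci’s actual pipeline:

```python
# Sketch of CPU-side background blur: segment the person, blur the rest.
# MediaPipe's selfie-segmentation model and OpenCV are generic stand-ins,
# not the videoconferencing platform's (or Deci's) actual models.
import cv2
import numpy as np
import mediapipe as mp

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

def blur_background(frame_bgr):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    mask = segmenter.process(rgb).segmentation_mask  # ~1.0 where a person is
    blurred = cv2.GaussianBlur(frame_bgr, (55, 55), 0)
    return np.where(mask[..., None] > 0.5, frame_bgr, blurred)

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    cv2.imwrite("blurred.png", blur_background(frame))
cap.release()
```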

Yonatan Geifman, the CEO who co-founded Deci with Ran El-Yaniv and Jonathan Elial (a trio of AI specialists), said that the plan is now to start expanding from computer vision applications to another challenge, building better NLP (natural language) models, which you might need to run any kind of service with a voice interface, from personal assistants on phones or smart speakers through to audio-based search or any kind of customer service interface, for example.

Although Deci has picked up a lot of business by helping companies address the challenge of running AI services on devices that are not necessarily optimized for AI, it has also found a lot of interest from organizations in using Deci to build better models for their own internal computing, even when they theoretically have the GPUs and compute power on hand to run anything. This taps into an interesting power balance that has long existed in enterprise IT and is very much playing out in AI today: enterprises will try to do more with the assets they have to hand, while at the same time they are regularly being pushed to invest in newer, more expensive and more powerful equipment.

“There is a race to larger models all the time,” Geifman said in an interview, citing the new language model announced earlier this month by Nvidia and Microsoft as one example of that evolution. “So the hardware is just not enough. In one sense, maybe that race and drive to invest in new hardware is being pushed by the hardware makers themselves, but the models are getting larger. There is a gap, between the algorithm and the supply of the hardware. So, we need to have some convergence based on what hardware we have. Deci is bridging or even closing that gap.”

With adequate training data being another perennial problem in AI, Deci is also working to give a boost on the data side of the equation. Geifman said that Deci essentially builds synthetic data sets to supplement data when more is needed to build the models. In all cases, the product works within organizations’ developer environments; data stays where it is and does not go to Deci or anywhere else in the process of building the models.
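
Deci doesn’t detail its synthetic-data approach; as a generic illustration of the idea, a labeled dataset with a chosen size, feature count and class balance can be fabricated with scikit-learn:

```python
# Generic illustration of synthetic training data (not Deci's method):
# fabricate a labeled dataset with a chosen size, feature count and
# class balance to supplement scarce real data during development.
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=10_000,    # how much extra data is wanted
    n_features=40,       # match the real feature space
    n_informative=12,
    weights=[0.9, 0.1],  # mimic a real-world class imbalance
    random_state=42,
)
print(X.shape, round(y.mean(), 3))  # (10000, 40) and ~0.1 positives
```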

Alongside that, Deci is also using AutoNAC to build more products. The most recent of these is DeciNets, which Deci describes as “a family of computer vision models” that essentially skip some of the work of building models from the ground up and therefore need less compute power to run.

“Deci is at the forefront of AI and deep learning acceleration, with highly differentiated technology that lets customers optimize blazingly fast deep learning models for inference tuned to any hardware platform,” said Lonne Jaffe, MD at Insight Partners, in a statement. “We are delighted to be part of Deci’s ScaleUp journey and look forward to supporting the company’s rapid growth.” Jaffe is joining the board with this round.

Amid a global semiconductor shortage, an upstart in the world of AI chips is announcing a big round of funding to meet a boom in demand for its technology. Hailo, which makes edge-device chips customized to work with AI workloads — typical implementations include smart cities, retail environments, industrial settings and next-generation automotive systems — has raised $136 million in a Series C round of funding, one of the biggest to date in the AI chip market. We’ve confirmed with sources close to Hailo that the investment values it at around $1 billion.

Poalim Equity and Gil Agmon co-led the round, with participation from previous backers including Hailo chairman Zohar Zisapel, ABB Technology Ventures (ATV), Latitude Ventures and OurCrowd, and from new investors Carasso Motors, Comasco, Shlomo Group, Talcar Corporation Ltd., and Automotive Equipment (AEV). The company has now raised some $224 million to date.

This latest round comes about 18 months after Hailo’s Series B of $60 million, and about a year after the release of the company’s most recent AI modules based on its Hailo 8 chip, intended to compete against the likes of Intel and Nvidia.

Orr Danon, the co-founder and CEO, said in an interview that in the interim the company has been seeing a huge surge of interest in the market — in the last quarter alone, he said Hailo doubled the number of projects it’s working on to 100 — so this latest round is about scaling to meet that demand, but also to continue customizing how and where its processors can be used.

“We are now in the market with the Hailo 8, and people are very excited because of its efficiency,” he said. One unique aspect of Hailo’s edge chips is that they are designed to adapt to existing resources to work with custom neural networks, so not only are they fast, but they also need less energy to run than the equivalent processing power you might otherwise need in a data center computer for a similar task. “However, we want to expand our offering. It’s not a one size fits all, so we are also investing in software.”

The funding comes on the heels of a complicated year in the world of chips, where the pandemic has alternately driven strong demand in some areas (for example in consumer environments, where users have been kickstarting the renewal cycle for better devices at a time when other activity has been limited); big drops in activity in others (e.g., ambitious projects in areas like autonomous vehicles); and a major slowdown in production overall. For a company working in edge devices as Hailo is, that presents an opportunity to make the case for more efficient and cost-effective systems, helped by the fact that it is able to integrate with users’ own neural networks and preferred development frameworks such as TensorFlow or ONNX.
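
The integration point is the usual handoff: export the trained network to an interchange format like ONNX and hand it to the vendor’s compiler. A minimal PyTorch-side sketch (the vendor compilation step itself is specific to Hailo’s toolchain and not shown):

```python
# Framework-side handoff: export a trained vision model to ONNX, the
# interchange format an edge-chip toolchain (such as Hailo's compiler)
# can ingest. The vendor compilation step itself is not shown.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # NCHW input the model expects
torch.onnx.export(
    model, dummy, "mobilenet_v2.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=13,
)
print("wrote mobilenet_v2.onnx")
```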

Danon said that while Hailo has seen a softening in demand in some segments — for example, automotive — the diversity of its business has meant that demand overall has continued to rise. Even automotive is coming back after a particularly frothy period and some fallout as a result. So, for example, while the number of projects focused on fully-autonomous vehicles may have gone down, there are still a number of efforts working on semi-autonomous systems, which he said still translate into business for Hailo.

“Companies are now starting to look for realistic deployments, and facing real challenges,” he said. “Okay, maybe we do not need our self-driving cars to drive on the Autobahn, but we still need to learn new tasks.”

Alongside this, he said the company is seeing a lot of strong interest from industrial customers, those in the retail industry (where edge devices are used for computer vision applications, such as those you might have in a security system, or in an automated check-out service, or for analytics), and in smart cities, where transportation will also continue to be a major driver of business. It’s for the opportunity ahead as much as for the current business that investors are backing the company.

“In the coming years, AI will become the defining feature for creating new business value and reshaping user experience as we know it. The ability to bring AI-based features to market will increasingly be the deciding factor over whether companies succeed or fail,” said Mooly Eden, who recently left Intel, where he worked for nearly 40 years and was most recently president of its operations in Israel, and is now on the board of Hailo. “Hailo’s innovative and hyper-efficient processor architecture addresses the growing demand for a new kind of chip to handle these new types of workloads, challenging traditional computing solutions.”

Datacenters are taking on ever more specialized chips to handle different kinds of workloads, moving away from CPUs and adopting GPUs and other kinds of accelerators to handle more complex and resource-intensive computing demands. In the latest development, a startup called Speedata, which is designing a processor for the specific area of big data analytics, is coming out of stealth and announcing $70 million in funding to continue building its product and embark on its first commercial deals. In a market that’s seeing a proliferation of purpose-built chipsets these days, Speedata claims to have, in its own words, “the world’s first dedicated processor for optimizing cloud-based database and analytic workloads.”

The news comes after a period in which Israel-based Speedata has piloted its tech with a mix of large companies — hardware makers, end users, big-name cloud providers — to show how it can speed up their workloads, which it has, by some two orders of magnitude, CEO and co-founder Jonathan Friedmann told TechCrunch. Speedata is a fabless chip startup, so the next steps will be to produce the chips and ink commercial deals, likely with some of those running tests with the company.

Speedata was founded in 2019 and has been in stealth since then, so the funding being announced today actually comes in two parts. First, there is a $15 million seed round led by Viola and Pitango that dates from some time back; and second, a newer $55 million Series A led by Walden Catalyst Ventures, 83North, and Koch Disruptive Technologies (KDT), with Pitango and Viola participating, alongside Eyal Waldman, co-founder and former CEO of Mellanox Technologies.

Friedmann and others on the founding team — the other co-founders are Dan Charash, Rafi Shalom, Itai Incze, Yoav Etsion and Dani Voitsechov — have an impressive track record in the worlds of academia and the chip industry, with multiple exits and groundbreaking patents behind them — one key factor in the company accruing so much backing without a single commercial deal yet to its name. “We were grossly oversubscribed for this round,” he said.

The reason Speedata has focused on big data analytics is, first of all, that it has been a hard problem to solve given the fragmentation in data sources (and before you wonder, the company is not disclosing how and why it was able to make this breakthrough now; the stealth element persists). And second, because, in Friedmann’s words, “it’s probably the biggest workload in the datacenter” and so it is overdue for dedicated processor attention beyond the CPUs and FPGAs currently being used to support it.

To be clear, its focus on big data analytics is not the same as computing AI workloads, an area currently dominated by Nvidia, although Nvidia also becomes a potential competitor as it expands its own horizons.

That is also why striking while the iron is hot matters for Speedata right now, he added. One reason the problem has yet to be addressed is that it is only really starting to emerge as a bottleneck now.

Data, and the generation of it in the enterprise, is currently exploding exponentially — Speedata cites research from IDC that projects the amount of data that will be created in the next three years will exceed the amount created in the past 30.

But analytics processing around that is no longer advancing at the rate it used to — in part because of that volume of data — and so looking at how to fix that by way of better processors is a multidisciplinary problem, Friedmann said. “It’s a human problem, but also one of networking improvements and needing a deep understanding of what is going on in the deep learning software,” he said.

“We really have a once in a lifetime opportunity,” he said. “We are approaching a huge, not niche, market. Analytics covers about 50% of the expense in a data center, so that is a huge market. There are so many things to do around that.”

The payoff of course is gaining much deeper insights and knowledge that can help in many areas such as medical research, financial services, cybersecurity, autonomous systems and more. Big data analytics is projected to be a $70 billion market by 2025, and that implies better hardware and services built to support that.

“Datacenter analytics are being completely transformed, and accelerated processors are set to play a substantial role in this revolution,” said Waldman in a statement. “Much like NVIDIA’s GPU revolutionized the AI space, Speedata’s unique APU will transform database computing. Data processing is a swiftly growing, multi-billion-dollar market in which acceleration will unleash the use of data in the applications of tomorrow and help countless entities reliant on big data innovate and compete. I look forward to supporting this extraordinary team as they reimagine big data processing for years to come.”