Steve Thomas - IT Consultant

It’s a busy week in the world of quantum computing, and today Tel Aviv-based Quantum Machines, a startup building a software and hardware stack for controlling and operating quantum computers, announced the launch of QUA, a new language that it calls the first “standard universal language for quantum computers.”

Quantum Machines CEO Itamar Sivan likened QUA to developments like Intel’s x86 and Nvidia’s CUDA, both of which provide the low-level tools for developers to get the most out of their hardware.

Quantum Machines’ own control hardware is essentially agnostic with regard to the underlying quantum technology that its customers want to use. The idea here is that if the company manages to make its own hardware the standard for controlling these systems, then its language will – almost by default – become the standard as well. And while it’s a ‘universal’ language in the technical sense, it is — at least for now — meant to run on Quantum Machines’ own Quantum Orchestration Platform, which it announced earlier this year.

“QUA is basically the language of the Quantum Orchestration Platform,” Sivan told me. “But beyond that, QUA is what we believe is the first candidate to become what we define as the ‘quantum computing software abstraction layer.’”

He argued that we are now at the right stage for the development of this layer because the underlying hardware has matured and because these systems are now fully programmable.

In his view, this is akin to what happened in classical computing, too. “The transition from having just specific circuits — physical circuits for specific algorithms — to the stage at which the system is programmable is the dramatic point. Basically, you have a software abstraction layer and then, you get to the era of software and everything accelerated.”


Sivan actually believes that for the time being, developers will want languages that give them a lot of direct control over the hardware because for the foreseeable future, that’s what’s necessary to harness the advantages of quantum computing. “If you want to squeeze out everything quantum computers can give you, you better use low-level languages in the first place,” he argued.

For low-level developers, Sivan argues, QUA will represent a paradigm shift. “They shift from having to develop many, many things in an iterative way to actually having a language that can support even their wildest dreams — their wildest quantum algorithm dreams,” he said. “This is a real paradigm shift and these guys are experiencing it in its full capacity — and it’s not only the accelerated process of programming and working, but also the capabilities themselves. Once everything is programmed in QUA and then compiled to the Quantum Orchestration Platform, then you also get the full benefit of the underlying hardware.”


The company argues that QUA is the first language to combine quantum operations at the pulse level with universal classical operations. Quantum Machines also built a compiler, XQP, which optimizes programs for the specific underlying hardware by translating them to the assembly language of Quantum Machines’ Pulse Processor.
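
For a sense of what that combination looks like in practice, here is a rough sketch based on Quantum Machines’ Python-embedded QUA interface. The pulse and element names are placeholders that would normally be defined in a separate hardware configuration, and the exact API may differ:

```python
# Illustrative sketch only: a QUA program written through the company's
# Python-embedded interface. "pi_pulse" and "qubit" are hypothetical names
# that would be defined in a separate hardware configuration.
from qm.qua import program, declare, for_, play, save

with program() as power_rabi:
    n = declare(int)                  # classical variable, lives on the controller
    with for_(n, 0, n < 100, n + 1):  # classical control flow around quantum ops
        play("pi_pulse", "qubit")     # pulse-level quantum operation
        save(n, "iteration")          # stream classical data back to the host
```

The point is that the classical loop and the quantum pulses live in the same program, which the compiler can then optimize as a whole.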

The company obviously needs to do all of this in order to create an ecosystem and a community around its language. Of course, if its Quantum Orchestration Platform becomes widely used — and it already has an impressive list of users today — then QUA will also see wide adoption.

“It’s one thing to build a beautiful language,” said Sivan. “But it’s another thing to develop it to be both beautiful and supported by an underlying hardware that is then adopted by itself. And then, the adoption of QUA is also led by the adoption of the Quantum Orchestration Platform, which is itself driven by the capabilities, nothing else.”

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
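
For Kubernetes users, the practical upshot of the device plugin is that GPU workloads can be scheduled the usual way, by requesting the nvidia.com/gpu resource. Here is a minimal sketch using the official Kubernetes Python client; the image, pod name and namespace are placeholders:

```python
# Minimal sketch: schedule a GPU pod on a cluster where NVIDIA's device
# plugin is installed. Assumes `pip install kubernetes` and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # use the cluster credentials from ~/.kube/config

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:10.2-base",      # placeholder CUDA image
                command=["nvidia-smi"],             # print visible GPUs and exit
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU from the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```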

In addition to the product updates, Mirantis is also launching three new support options that give customers, for example, 24×7 coverage of all support cases, as well as enhanced SLAs for remote managed operations, designated customer success managers, and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from its days as a high-flying OpenStack platform company to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90 percent of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters; it was even profitable in the last quarter, despite the COVID-19 pandemic.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for features like its infrastructure independence, developer focus, security and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one sales force, with record results. So things have been extremely busy, but good and exciting.”

NVIDIA announced today that its NVIDIA A100, the first of its GPUs based on its Ampere architecture, is now in full production and has begun shipping to its customers globally. Ampere is a big generational jump in NVIDIA’s GPU architecture design, providing what the company says is the “largest leap in performance to date” across all eight generations of its graphics hardware.

Specifically, the A100 can improve performance on AI training and inference as much as 20x relative to prior NVIDIA data center GPUs, and it offers advantages across just about any kind of GPU-intensive data center workloads, including data analytics, protein modelling and other scientific computing uses, and cloud-based graphics rendering.

The A100 GPU can also be scaled either up or down depending on need: with partitioning, a single unit can handle as many as seven separate tasks, and multiple units can be combined to work together as one large, virtual GPU to tackle the toughest AI training tasks. The ‘Multi-Instance GPU’ partitioning feature in particular is new to this generation, and it underlines the A100’s ability to deliver value for money to clients of all sizes, since a single card could theoretically replace up to seven discrete GPUs in a data center that already has headroom on its usage needs.
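
Each MIG slice shows up to software as its own device. As an illustrative sketch, NVIDIA’s NVML bindings for Python (pynvml) can enumerate devices and report whether MIG is enabled; the call only succeeds on MIG-capable hardware like the A100:

```python
# Illustrative sketch: list GPUs and report MIG mode where the hardware supports it.
# Assumes `pip install pynvml` and an installed NVIDIA driver.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    name = name.decode() if isinstance(name, bytes) else name
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB")
    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        print(f"  MIG mode: current={current}, pending={pending}")
    except pynvml.NVMLError:
        print("  MIG not supported on this device")
pynvml.nvmlShutdown()
```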

Alongside the production and shipping announcement, NVIDIA is also announcing that a number of customers are already adopting the A100 for use in their supercomputers and data centers, including Microsoft Azure, Amazon Web Services, Google Cloud and just about every significant cloud provider that exists.

NVIDIA also announced the DGX A100 system, which combines eight of the A100 GPUs linked together using NVIDIA’s NVLink. That’s also available immediately directly from NVIDIA, and from its approved resale partners.

Like other accelerators, Techstars, a network of more than 40 corporate and geographically targeted startup bootcamps, has had to bring its marquee demo day events online.

Over the last two weeks of April, industry-focused accelerators working with startups building businesses around mobility technologies (broadly) and the future of the home joined programs in Abu Dhabi, Bangalore, Berlin, Boston, Boulder and Chicago to present their cohorts.

Each group had roughly 10 companies pitching businesses that ran the gamut from early-childhood education to capturing precious metals from the waste streams of mining operations. There were language companies, security companies, marketing companies and even a maker of a modular sous vide product for home chefs.

The ideas were as creative as they were varied, and while all seemed promising, about two concepts from each batch stood out above the rest.

What follows are our completely unscientific picks of the top companies that pitched at each of these virtual Techstars demo days. In late May or early June, expect to see our roundup of top picks from the next round of demo days.

Hub71

Techstars’ inaugural cohort for its accelerator run in conjunction with Abu Dhabi-based technology incubator Hub71 included a number of novel businesses spanning climate, security, retail, healthcare and property tech. Standouts in this batch included Sia Secure and Aumet (with an honorable mention for the novel bio-based plastic processing and reuse technology developer, Poliloop).

Nvidia today announced its plans to acquire Cumulus Networks, an open-source centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches, as well as tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.

The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.

Mountain View-based Cumulus already had a partnership with Mellanox, which Nvidia acquired for $6.9 billion. That acquisition closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013 and formed their first official partnership in 2016. Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.

Having both Cumulus and Mellanox in its stable will give Nvidia virtually all of the tools it needs to help enterprises and cloud providers build out the high-performance computing and AI workloads in their data centers. While you may mostly think of Nvidia for its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43 percent from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.

“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”

NVIDIA Chief Scientist Bill Dally has released an open-source ventilator hardware design he developed in order to address the shortage resulting from the global coronavirus pandemic. The mechanical ventilator design developed by Dally can be assembled quickly, using off-the-shelf parts with a total cost of around $400 – making it an accessible and affordable alternative to traditional, dedicated ventilators, which can cost $20,000 or more.

The design created by Dally strives for simplicity, and basically includes just two central components – a solenoid valve and a microcontroller. The design is called the OP-Vent, and in the video below you can see how bare-bones it is in terms of hardware compared to existing alternatives, including some of the other, more complex emergency-use ventilator designs developed in response to COVID-19.
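
To give a sense of how little control logic such a design needs, here is a hypothetical MicroPython-style sketch of the core breathing loop. To be clear, this is not the OP-Vent firmware: the pin number and timing parameters are invented, and a real ventilator also needs pressure sensing, alarms and clinical validation:

```python
# Hypothetical sketch of a solenoid-valve breathing loop (NOT the OP-Vent firmware).
# MicroPython-style code; the GPIO pin and timings are illustrative only.
from machine import Pin
import time

valve = Pin(2, Pin.OUT)   # GPIO pin driving the solenoid valve (placeholder)

RESP_RATE = 20            # breaths per minute
IE_RATIO = 0.5            # inspiratory:expiratory time ratio (1:2)

cycle = 60.0 / RESP_RATE                     # seconds per full breath
t_insp = cycle * IE_RATIO / (1 + IE_RATIO)   # seconds the valve stays open
t_exp = cycle - t_insp                       # seconds the valve stays closed

while True:
    valve.on()            # open the valve: push air toward the patient
    time.sleep(t_insp)
    valve.off()           # close the valve: allow passive exhalation
    time.sleep(t_exp)
```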

Dally’s design, which was developed with input from mechanical engineers and doctors, including Dr. Andrew Moore, a chief resident at Stanford University, and Dr. Bryant Lin, a medical devices expert and company co-founder, can be assembled in as little as five minutes, and is small enough to fit in a Pelican case for easy transportation and portability. It also employs fewer parts and uses less energy than similarly simple designs that adapt the manual breather bags used by paramedics in emergency response.

Next up for the design is getting it cleared by the FDA under the agency’s Emergency Use Authorization program for COVID-19 equipment, and then seeking manufacturing partners to pursue large-scale manufacturing.

OctoML, a startup founded by the team behind the Apache TVM machine learning compiler stack project, today announced that it has raised a $15 million Series A round led by Amplify Partners, with participation from Madrona Venture Group, which led its $3.9 million seed round. The core idea behind OctoML and TVM is to use machine learning to optimize machine learning models so they can run more efficiently on different types of hardware.

“There’s been quite a bit of progress in creating machine learning models,” OctoML CEO and University of Washington professor Luis Ceze told me. “But a lot of the pain has moved to, once you have a model, how do you actually make good use of it in the edge and in the clouds?”

That’s where the TVM project comes in, which was launched by Ceze and his collaborators at the University of Washington’s Paul G. Allen School of Computer Science & Engineering. It’s now an Apache incubating project and because it’s seen quite a bit of usage and support from major companies like AWS, ARM, Facebook, Google, Intel, Microsoft, Nvidia, Xilinx and others, the team decided to form a commercial venture around it, which became OctoML. Today, even Amazon Alexa’s wake word detection is powered by TVM.

Ceze described TVM as a modern operating system for machine learning models. “A machine learning model is not code, it doesn’t have instructions, it has numbers that describe its statistical modeling,” he said. “There’s quite a few challenges in making it run efficiently on a given hardware platform because there’s literally billions and billions of ways in which you can map a model to specific hardware targets. Picking the right one that performs well is a significant task that typically requires human intuition.”
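
To make that concrete, here is roughly what the open-source TVM flow looks like for compiling a model to a chosen hardware target. The model file and input shape are placeholders, and this sketch follows TVM’s Relay frontend as of the project’s recent releases:

```python
# Rough sketch of the open-source TVM compilation flow (Relay frontend).
# Assumes TVM and the onnx package are installed and `model.onnx` exists.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")          # placeholder model file
shape_dict = {"input": (1, 3, 224, 224)}      # placeholder input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

target = "llvm"  # pick a hardware target, e.g. "cuda" for NVIDIA GPUs
with tvm.transform.PassContext(opt_level=3):  # run TVM's optimization passes
    lib = relay.build(mod, target=target, params=params)

lib.export_library("optimized_model.so")      # deployable, hardware-tuned artifact
```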

And that’s where OctoML and its “Octomizer” SaaS product, which it also announced today, come in. Users can upload their model to the service, and it will automatically optimize, benchmark and package it for the hardware they specify and in the format they want. For more advanced users, there’s also the option to add the service’s API to their CI/CD pipelines. These optimized models run significantly faster because they can fully leverage the hardware they run on, but what many businesses will maybe care about even more is that these more efficient models also cost them less to run in the cloud, or let them use cheaper, less performant hardware to get the same results. For some use cases, TVM already results in 80x performance gains.

Currently, the OctoML team consists of about 20 engineers. With this new funding, the company plans to expand its team. Those hires will mostly be engineers, but Ceze also stressed that he wants to hire an evangelist, which makes sense, given the company’s open-source heritage. He also noted that while the Octomizer is a good start, the real goal here is to build a more fully featured MLOps platform. “OctoML’s mission is to build the world’s best platform that automates MLOps,” he said.

An undertaking that involves combining massive amounts of graphics processing power could provide key leverage for researchers looking to develop potential cures and treatments for the novel coronavirus behind the current global pandemic. Immunotherapy startup ImmunityBio is working with Microsoft Azure to deliver a combined 24 petaflops of GPU computing capability for the purpose of modelling, in a very high degree of detail, the structure of the so-called “spike protein” that allows the SARS-CoV-2 virus that causes COVID-19 to enter human cells.

This new partnership means that they were able to produce a model of the spike protein within just days, instead of the months it would’ve taken previously. That time savings means that the model can get into the virtual hands of researchers and scientists working on potential vaccines and treatments even faster, and that they’ll be able to gear their work toward a detailed replication of the very protein they’re trying to prevent from attaching to the human ACE2 receptor, which is what sets up the viral infection process to begin with.

The main way that scientists working on treatments look to prevent or minimize the spread of the virus within the body is to block the attachment of the virus to these proteins, and the simplest way to do that is to ensure that the spike protein can’t connect with the receptor it targets. Naturally-occurring antibodies in patients who have recovered from the novel coronavirus do exactly that, and the vaccines under development are focused on doing the same thing pre-emptively, while many treatments are looking at lessening the ability of the virus to latch on to new cells as it replicates within the body.

In practical terms, the partnership between the two companies included a complement of 1,250 NVIDIA V100 Tensor Core GPUs designed for use in machine learning applications from a Microsoft Azure cluster, working with ImmunityBio’s existing 320 GPU cluster that is tuned specifically to molecular modeling work. The results of the collaboration will now be made available to researchers working on COVID-19 mitigation and prevention therapies, in the hopes that they will enable them to work more quickly and effectively towards a solution.

Data science platform cnvrg.io today announced the launch of a free community version of its data science platform. Dubbed ‘CORE,’ this version includes most — but not all — of the standard features in cnvrg’s main commercial offering. It’s an end-to-end solution for building, managing and automating basic ML models; the limitations of the free version mostly center on the production capabilities of the paid premium version and on working with larger teams of data scientists.

As the company’s CEO Yochay Ettun told me, CORE users will be able to use the platform either on-premise or in the cloud, using Nvidia-optimized containers that run on a Kubernetes cluster. Because of this, it natively handles hybrid- and multi-cloud deployments that can automatically scale up and down as needed — and adding new AI frameworks is simply a matter of spinning up new containers, all of which are managed from the platform’s web-based dashboard.

Ettun describes CORE as a ‘lightweight version’ of the original platform, but one that still hews closely to the platform’s original mission. “As was our vision from the very start, cnvrg.io wants to help data scientists do what they do best – build high impact AI,” he said. “With the growing technical complexity of the AI field, the data science community has strayed from the core of what makes data science such a captivating profession — the algorithms. Today’s reality is that data scientists are spending 80 percent of their time on non-data science tasks, and 65 percent of models don’t make it to production. Cnvrg.io CORE is an opportunity to open its end-to-end solution to the community to help data scientists and engineers focus less on technical complexity and DevOps, and more on the core of data science — solving complex problems.”

This has very much been the company’s direction from the outset and as Ettun noted in a blog post from a few days ago, many data scientists today try to build their own stack by using open-source tools. They want to remain agile and able to customize their tools to their needs, after all. But he also argues that data scientists are usually hired to build machine learning models, not to build and manage data science platforms.

While other platforms like H2O.ai, for example, are betting on open source and the flexibility that comes with it, cnvrg.io’s focus is squarely on ease of use. Unlike those tools, Jerusalem-based cnvrg.io, which has raised about $8 million so far, doesn’t have the advantage of the free marketing that comes with open source, so it makes sense for the company to now launch this free self-service version.

It’s worth noting that while cnvrg.io features plenty of graphical tools for managing data ingestion flows, models and clusters, it’s very much a code-first platform. With that, Ettun tells me that the ideal user is a data scientist, data engineer or a student passionate about machine learning. “As a code-first platform, users with experience and savvy in the data science field will be able to leverage cnvrg CORE features to produce high impact models,” he said. “As our product is built around getting more models to production, users that are deploying their models to real-world applications will see the most value.”


NVIDIA is making its Parabricks tool available for free for 90 days (with the possibility of extension, depending on needs) to any researcher currently working on efforts to combat the ongoing novel coronavirus pandemic and the spread of COVID-19. The tool is a GPU-accelerated genome analysis toolkit that leverages graphics processing power to cut a process that previously took days down to a matter of hours.

Researchers will still need access to NVIDIA GPUs for running the Parabricks genetic sequencing suite, but they won’t have to pay anything for the privilege of running the software. This is a big advantage for anyone studying the new coronavirus or the patients who have contracted the illness. The GPU-maker is also providing links to different cloud-based GPU service providers to lower that barrier to entry, as well.

We’ve cut down drastically on genomic sequencing times in the past few years, but they still require a massive amount of computing hardware, and Parabricks, which was acquired by NVIDIA late last year, has developed technology that makes it possible to sequence an entire human genome in under an hour – and that’s using a single server, not an entire server farm.

Speed is of the essence when it comes to every aspect of the continued effort to fight the spread of the virus, and the severe respiratory illness that it can cause. One of the biggest challenges that scientists and researchers working on potential drug therapies and vaccines for the novel coronavirus face is a lack of solid, reliable information. The more sequencing that can be done to understand, identify and verify characteristics of the genetic makeup of both the virus itself and the patients who contract it (both during and post-infection), the quicker everyone will be able to move on potential treatments and immunotherapies.

Nvidia today announced that it has acquired SwiftStack, a software-centric data storage and management platform that supports public cloud, on-premises and edge deployments.

SwiftStack’s recent launches focused on improving its support for AI, high-performance computing and accelerated computing workloads, which is surely what Nvidia is most interested in here.

“Building AI supercomputers is exciting to the entire SwiftStack team,” says the company’s co-founder and CPO Joe Arnold in today’s announcement. “We couldn’t be more thrilled to work with the talented folks at NVIDIA and look forward to contributing to its world-leading accelerated computing solutions.”

The two companies did not disclose the price of the acquisition, but SwiftStack had previously raised about $23.6 million in Series A and B rounds led by Mayfield Fund and OpenView Venture Partners. Other investors include Storm Ventures and UMC Capital.

SwiftStack, which was founded in 2011, placed an early bet on OpenStack, the massive open-source project that aimed to give enterprises an AWS-like management experience in their own data centers. The company was one of the largest contributors to OpenStack’s Swift object storage platform and offered a number of services around it, though in recent years it seems to have downplayed the OpenStack relationship as that platform’s popularity has fizzled in many verticals.
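
Swift, for what it’s worth, exposes a simple object API. As a quick illustration using the python-swiftclient library (the endpoint and credentials here are placeholders):

```python
# Illustrative sketch: store and fetch an object on a Swift/SwiftStack cluster.
# Assumes `pip install python-swiftclient`; auth URL and credentials are placeholders.
import swiftclient

conn = swiftclient.Connection(
    authurl="http://swift.example.com/auth/v1.0",  # placeholder endpoint
    user="account:user",                           # placeholder credentials
    key="secret",
)

conn.put_container("training-data")                # create a bucket-like container
conn.put_object(
    "training-data",
    "sample.txt",
    contents=b"hello, object storage",
    content_type="text/plain",
)

headers, body = conn.get_object("training-data", "sample.txt")
print(body.decode())                               # -> "hello, object storage"
```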

SwiftStack lists the likes of PayPal, Rogers, data center provider DC Blox, Snapfish and Verizon (TechCrunch’s parent company) on its customer page. Nvidia, too, is a customer.

SwiftStack notes that its team will continue to maintain the existing set of open-source tools, including Swift, ProxyFS, 1space and Controller.

“SwiftStack’s technology is already a key part of NVIDIA’s GPU-powered AI infrastructure, and this acquisition will strengthen what we do for you,” says Arnold.