
At its I/O developer conference, Google today announced the public preview of a full cluster of Google Cloud’s new Cloud TPU v4 Pods.

Google’s fourth iteration of its Tensor Processing Units launched at last year’s I/O, and a single TPU v4 pod consists of 4,096 of these chips. Each chip has a peak performance of 275 teraflops, and each pod promises combined compute of up to 1.1 exaflops. Google now operates a full cluster of eight of these pods in its Oklahoma data center, with up to 9 exaflops of peak aggregate performance. Google believes this makes it “the world’s largest publicly available ML hub in terms of cumulative computing power, while operating at 90% carbon-free energy.”
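Those headline figures are consistent with each other, as a quick back-of-the-envelope check shows (peak ratings only; sustained performance on real workloads will be lower):

```python
# Back-of-the-envelope check of Google's TPU v4 figures (peak ratings only).
chips_per_pod = 4096
teraflops_per_chip = 275

pod_exaflops = chips_per_pod * teraflops_per_chip / 1_000_000  # TF -> EF
print(f"per pod: {pod_exaflops:.2f} exaflops")      # ~1.13, the quoted 1.1

cluster_exaflops = 8 * pod_exaflops                 # eight pods in Oklahoma
print(f"cluster: {cluster_exaflops:.1f} exaflops")  # ~9.0, the quoted 9
```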

“We have done extensive research to compare ML clusters that are publicly disclosed and publicly available (meaning – running on Cloud and available for external users),” a Google spokesperson told me when I asked the company to clarify its benchmark. “Those clusters are powered by supercomputers that have ML capabilities (meaning that they are well-suited for ML workloads such as NLP, recommendation models, etc.). The supercomputers are built using ML hardware — e.g. GPUs (graphic processing units) — as well as CPU and memory. With 9 exaflops, we believe we have the largest publicly available ML cluster.”

At I/O 2021, Google’s CEO Sundar Pichai said that the company would soon have “dozens of TPU v4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. And our TPUv4 pods will be available to our cloud customers later this year.” Clearly, that took a bit longer than planned, but we are in the middle of a global chip shortage and these are, after all, custom chips.

Ahead of today’s announcement, Google worked with researchers to give them access to these pods. “Researchers liked the performance and scalability that TPU v4 provides with its fast interconnect and optimized software stack, the ability to set up their own interactive development environment with our new TPU VM architecture, and the flexibility to use their preferred frameworks, including JAX, PyTorch, or TensorFlow,” Google writes in today’s announcement. No surprise there. Who doesn’t like faster machine learning hardware?

Google says users will be able to slice and dice the new Cloud TPU v4 cluster and its pods to meet their needs, whether that’s access to four chips (the minimum for a TPU virtual machine) or thousands of them — though not too many, because there are only so many chips to go around.
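For a sense of what a slice looks like from the user’s side, here is a minimal sketch of how a JAX program enumerates whatever TPU slice it was allocated (this assumes a Cloud TPU VM with JAX’s TPU backend installed; the device count simply reflects the slice size requested):

```python
# Minimal sketch: inspecting and using the TPU devices visible to a TPU VM.
import jax
import jax.numpy as jnp

devices = jax.devices()  # one entry per local accelerator in the slice
print(f"{len(devices)} device(s), platform: {devices[0].platform}")

# A toy data-parallel computation spread across every local device.
n = jax.local_device_count()
out = jax.pmap(lambda x: x * 2)(jnp.arange(n, dtype=jnp.float32))
print(out)  # one doubled value per device
```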

As of now, these pods are only available in Oklahoma. “We have run an extensive analysis of various locations and determined that Oklahoma, with its exceptional carbon-free energy supply, is the best place to host such a cluster. Our customers can access it from almost anywhere,” a spokesperson explained.

Google today announced three new features for its voice-activated Assistant, all of which aim to make interacting with it easier and more natural.

The first is about making it easier to initiate a conversation with the Assistant by simply looking at a device like the Nest Hub Max, with its built-in camera, and talking to the Assistant without using the “Hey Google” wake word. This will roll out later this week to users who pair their Nest Hub Max with an Android device, while iOS users will have to wait a few more weeks.

Image Credits: Google

The other new feature is extended support for quick phrases, that is, the ability to answer a phone call, turn off the lights or ask about the weather, all without having to use a wake word. That means going forward, you’ll be able to simply set a timer without saying “Hey Google.” Google notes that this is an opt-in feature and that it will use the company’s Voice Match feature that’s already available on the Nest Hub today.

Image Credits: Google

Finally, Google is also making some changes to how the Assistant processes your requests so that it will be able to better understand your intent, even if you have to correct yourself or pause while you think about how you want to phrase your question.

“We realize when evaluating real conversations, they’re full of nuances,” said Nino Tosca, the director of product management for the Google Speech team and the Google Assistant. “People say ‘uhm,’ interruptions when two people are speaking back and forth, pauses, self-corrections — but we realized that with two humans communicating, these things are natural. They don’t really get in the way with people understanding each other. […] We’re trying to bring these natural behaviors to the Google Assistant so that a user doesn’t have to think before they say a command — or actually process the command in their head, make sure they have every word right and then try to get it out perfectly. We want you to be able to just talk to the Google Assistant like you would with another human and we’ll understand the meaning and be able to fulfill your intent.”

Image Credits: Google

Sadly, this feature is still in development but should roll out sometime in early 2023. Google has always used I/O to showcase upcoming features, even though some of them never launch, so we’ll just have to wait and see where this one goes.

Overall, though, these seem like worthwhile additions to the Google Assistant feature set. Saying ‘Hey Google’ quickly gets old, after all, and continues to feel a bit weird. Indeed, I can’t help but think that the shine has worn off a bit from the Assistant (and its competitors). Personally, despite having a bunch of Nest Hubs and Google Homes at home, I don’t think I’ve used them for anything but turning on the lights using their touchscreen and setting the occasional cooking timer in recent months. Google has major ambitions around ‘ambient computing,’ but when the Assistant doesn’t understand you and then randomly starts playing a Justin Bieber video on your TV, it feels like that future still needs some tuning. Anything to remove those barriers is welcome.

"Read

In April, Google introduced a new “multisearch” feature that offered a way to search the web using both text and images at the same time. Today, at Google’s I/O developer conference, the company announced an expansion to this feature, called “Multisearch Near Me.” This addition, arriving later in 2022, will allow Google app users to combine either a picture or a screenshot with the text “near me” to be directed to options for local retailers or restaurants that have the apparel, home goods, or food you’re in search of. Google is also pre-announcing a forthcoming development to multisearch that appears to be built with AR glasses in mind, as it can visually search across multiple objects in a scene based on what you’re currently “seeing” via a smartphone camera’s viewfinder.

With the new “near me” multisearch query, you’ll be able to find local options related to your current visual and text-based search combination. For example, if you were working on a DIY project and came across a part that you needed to replace, you could snap a photo of the part with your phone’s camera to identify it and then find a local hardware store that has a replacement in stock.

This isn’t all that different from how multisearch already works, Google explains — it’s just adding the local component.

Image Credits: Google

Originally, the idea with multisearch was to allow users to ask questions about an object in front of them and refine those results by color, brand, or other visual attributes. The feature today works best with shopping searches, as it allows users to narrow down product searches in a way that standard text-based web searches can struggle with. For instance, a user could snap a photo of a pair of sneakers, then add the text “in blue” to be shown just those shoes in the specified color. They could then visit the website for the sneakers and immediately purchase them, as well. The expansion to include the “near me” option simply limits the results further in order to point users to a local retailer where the given product is available.

In terms of helping users find local restaurants, the feature works similarly. In this case, a user could search based on a photo they found on a food blog or somewhere else on the web to learn what the dish is and which local restaurants might have the option on their menu for dine-in, pickup, or delivery. Here, Google Search combines the image with the intent that you’re in search of a nearby restaurant, and will scan across millions of images, reviews, and community contributions to Google Maps to find the local spot.

The new “near me” feature will be available globally in English and will roll out to more languages over time, Google says.

The more interesting addition to multisearch is the capability to search within a scene. In the future, Google says users will be able to pan their camera around to learn about multiple objects within that wider scene.

Google suggests the feature could be used to scan the shelves at a bookstore and then see several helpful insights overlaid in front of you.

Image Credits: Google

“To make this possible, we bring together not only computer vision, natural language understanding, but we also bring that together with the knowledge of the web and on-device technology,” noted Nick Bell, Senior Director, Google Search. “So the possibilities and the capabilities of this are going to be huge and significant,” he said.

The company — which came to the AR market early with its Google Glass release — didn’t confirm it had any sort of new AR glasses-type device in the works, but hinted at the possibility.

“With A.I. systems now, what’s possible today — and going to be possible over the next few years — just kind of unlocks so many opportunities,” said Bell. In addition to voice search, desktop, and mobile search, the company believes visual search will also be a bigger part of the future, he noted.

Image Credits: Google

“There are 8 billion visual searches on Google with Lens every single month now and that number is three times the size that it was just a year ago,” Bell continued. “What we’re definitely seeing from people is that the appetite and the desire to search visually is there. And what we’re trying to do now is lean into the use cases and identify where this is most useful,” he said. “I think as we think about the future of search, visual search is definitely a key part of that.”

The company, of course, is reportedly working on a secret project, codenamed Project Iris, to build a new AR headset with a projected 2024 release date. It’s easy to imagine not only how this scene-scanning capability could run on such a device, but also how any sort of image-plus-text (or voice!) search feature could be used on an AR headset. Imagine again looking at the pair of sneakers you liked, for instance, then asking a device to navigate to the nearest store where you could make the purchase.

“Looking further out, this technology could be used beyond everyday needs to help address societal challenges, like supporting conservationists in identifying plant species that need protection, or helping disaster relief workers quickly sort through donations in times of need,” suggested Prabhakar Raghavan, Google Search SVP, speaking on stage at Google I/O.

Unfortunately, Google didn’t offer a timeframe for when it expected to put the scene-scanning capability into the hands of users, as the feature is still “in development.”

"Read

In the face of increased competition from Apple Maps and its 3D city views, Google today introduced its own vision for a next-generation Google Maps with a preview of a new, more “immersive” viewing experience. The enhancement, presented during Google’s I/O conference keynote, leverages a combination of computer vision and A.I. technology to fuse together Street View and aerial imagery in order to offer a digital model of the world and a new way to explore cities, key landmarks, restaurants, venues, and other places of interest.

Google says it has fused together “billions” of images to create this immersive view, which allows users to explore by visually soaring over an area to see what it looks like. For example, if you were planning a trip to London, you might use the feature to look at landmarks like Big Ben or Westminster to get a better sense of the place and experience the architecture. You’ll also be able to use a “time slider” to adjust what the area looks like at different times of day — a feature that somewhat resembles Apple Maps’ nighttime mode with a moonlight glow that activates at dusk, even when browsing 3D cities.

Image Credits: Google

Google’s immersive mode, however, will additionally allow users to look up local weather and traffic conditions to aid with planning.

This new mode won’t stop at just representing major cities in a more immersive perspective — it will also make it possible to more easily explore inside places, including neighborhood restaurants and other popular venues.

Image Credits: Google

Users will be able to glide down to the street level and then click to view what it looks like inside a place they may want to visit. This could help people to figure out what kind of vibe a restaurant may have, among other things. You can also see the area’s live busyness and nearby traffic from this level.

Image Credits: Google

The feature won’t be immediately available everywhere, though.

Initially, it will begin to roll out to major cities including L.A., London, New York, San Francisco, and Tokyo by the end of the year, with more cities to follow in the months ahead. It will work across platforms and devices, Google said, starting with a rollout on Android and iOS later this year.

The company also announced a handful of other Google Maps updates during the event. It said that its eco-friendly routing feature, which launched in the U.S. and Canada this past fall to help drivers find the most fuel-efficient routes, will expand to Europe later this year.

Image Credits: Google

So far, the addition is estimated to have saved more than half a million metric tons of carbon emissions, which is the equivalent of taking 100,000 cars off the road. Google says the European expansion will double this figure.
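For what it’s worth, that equivalence implies an assumption of roughly five metric tons of CO2 per car per year, which is in the same ballpark as common estimates for a typical passenger vehicle:

```python
# The implied per-car figure behind Google's "100,000 cars" equivalence.
tons_co2_saved = 500_000   # metric tons of CO2, per Google's estimate so far
cars_off_road = 100_000    # Google's stated equivalent

print(tons_co2_saved / cars_off_road, "tons of CO2 per car per year")  # 5.0
```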

Meanwhile, for developers, Google announced the launch of a new ARCore Geospatial API, which brings Maps’ Live View technology to developers for use in third-party applications.

Image Credits: Google

Live View is the feature that overlays AR arrows and directions on top of the real world, as seen through the smartphone’s camera. The idea is that you could pull out your phone and see exactly which way to begin walking when in a new place where you may otherwise be disoriented. It’s as if you’ve dropped yourself right inside Google Street View.

Now, Google will also allow its developer partners to build products that utilize Live View technologies.

Image Credits: Google/Docomo

One partner is the micromobility company Lime, which is using the API to help commuters in London, Paris, Tel Aviv, Madrid, San Diego, and Bordeaux to find spots to park their e-scooters and e-bikes. Telstra and Accenture are using it to help sports fans and concertgoers find their seats, concession stands, and restaurants at Marvel Stadium in Melbourne. In Japan, Docomo and Curiosity are building a new game that has players fending off virtual dragons with robot companions in front of Tokyo landmarks, like the Tokyo Tower, also powered via the API.

"Read

In addition to improving Google Assistant’s ability to communicate with users in a more natural way, Google today also announced improvements to its Google Translate service. The company said it’s adding 24 new languages — including its first indigenous languages of the Americas with the additions of Quechua, Guarani, and Aymara.


In total, the 24 new languages are spoken by over 300 million people worldwide, Google said.

“These range from smaller languages, like Mizo — spoken in the northeast of India by about 800,000 people — up to very large world languages, like Lingala, spoken by around 45 million people across Central Africa,” said Isaac Caswell, a Google Translate research scientist.

He added that in addition to the indigenous languages of the Americas, Google Translate will support a dialect of English for the first time with Krio from Sierra Leone. The company said it selected this newest batch of languages to support by looking for languages with very large but underserved populations — which were frequently in the African continent and Indian subcontinent. It also wanted to address indigenous languages which are often overlooked by technology.

Google’s ability to add new languages has improved thanks to technological advances taking place over the past few years, Caswell said.

“Up until a couple of years ago, it simply was not technologically possible to add languages like these, which are what we call a low resource — meaning that there are not very many text resources out there for them,” he explained. But a new technology called Zero-Shot Machine Translation has made it easier. “At a high level, the way you can imagine it working is you have a single gigantic neural AI model, and it’s trained on 100 different languages with translation. You can think of it as a polyglot that knows lots of languages. But then additionally, it gets to see text in 1,000 more languages that isn’t translated. You can imagine if you’re some big polyglot, and then you just start reading novels in another language, you can start to piece together what it could mean based on your knowledge of language in general,” he said.
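Caswell is describing what the research literature calls zero-shot multilingual translation. One published ingredient of such systems is tagging every training example with a token naming the target language, so a single model learns many directions at once and can then be asked for language pairs it never saw supervised. The sketch below illustrates that data format only; the tags, sample text and helper function are illustrative assumptions, not Google’s actual setup:

```python
# Illustrative sketch of the target-language-token trick behind zero-shot
# multilingual translation. Everything here is a toy stand-in, not Google's
# production pipeline.

# Supervised pairs: a tag naming the target language is prepended to the
# source text, so one model can learn many translation directions.
parallel_examples = [
    ("<2fr> hello", "bonjour"),  # English -> French
    ("<2en> hola", "hello"),     # Spanish -> English
]

# Languages with no parallel data contribute monolingual text only, seen
# through a self-supervised objective during training.
monolingual_examples = ["chhun leh zan"]  # e.g. Mizo text (placeholder)

def zero_shot_request(target_lang_tag: str, source_text: str) -> str:
    """Format a request for a direction the model never saw supervised,
    e.g. English -> Mizo, using the same tagging scheme."""
    return f"{target_lang_tag} {source_text}"

print(zero_shot_request("<2lus>", "good morning"))  # 'lus': ISO code for Mizo
```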

The expansion brings the total number of languages supported by the service to 133. But Google said there is still a long way to go, with some 7,000 languages worldwide that Translate doesn’t yet support.

The new languages will be live today on Google Translate, but won’t reach all users worldwide for a couple of days, Google noted.

"Read

YouTube is expanding on its latest feature that made any public YouTube video potential fodder for its TikTok competitor, YouTube Shorts. Today, the company is announcing the launch of “Green Screen,” a tool that will allow users to use up to a 60-second video segment from any eligible YouTube video or YouTube Short as the background for their new original Short video.

The feature joins a number of other effects available now to YouTube Shorts creators, including the appearance-smoothing Retouch feature; a Lighting feature to boost dark environments; an Align feature that aligns the subject from the last frame of a video with a new video; a text and timeline editor to add messages over top of videos; various video filters; and, most recently, Cut — the tool that effectively made all of YouTube’s public content possible Shorts material.

As with Cut, YouTube says the new Green Screen remix feature can be used with any public YouTube video unless the creator has opted out. The only exception to this involves music videos which include copyrighted content from YouTube’s partners or others with visual claims. Also similar to Cut, any video created with Green Screen includes a link back to the original content creator for attribution’s sake.

On iOS, creators can also use the Green Screen tool in the Shorts camera to choose any photo or video from their device gallery as the background, the company says.

Image Credits: YouTube

YouTube’s decision to make its platform’s videos available for remixing is meant to be a competitive advantage as the competition with TikTok heats up. It’s notable that it made the feature opt-out by default, meaning videos are essentially up for grabs unless a creator says otherwise. So far, there hasn’t been a major backlash to this decision, as some creators feel that Shorts is just another way to get their channel discovered, or generally aren’t worried about Shorts eating into their own audience, as it’s a different type of viewing experience.

Given the integration with YouTube content, Green Screen makes sense as the next new video effect for Shorts. On TikTok, a similar feature is heavily used to allow creators to comment on and reference each other’s content. But in Shorts’ case, the original video creator isn’t necessarily a Shorts creator, too — they may only produce long-form content for YouTube proper. That could lessen the appeal of the Green Screen tool as a community conversation tool, as the person whose video is being referenced may not even participate in the Shorts community itself.

Google says the new Green Screen tool is beginning to roll out on iOS today and will come to Android soon.

Asked why the company was prioritizing iOS over Google’s own mobile platform, YouTube only replied that it was prioritizing the need to move quickly when launching the new features.

“Our priority is bringing the best experience to our creators as quickly as possible, and sometimes that means we bring particular features to one platform before another,” a spokesperson said.

The addition of Green Screen follows a rougher quarter for YouTube, where the company missed its projections for ad revenue, bringing in $6.87 billion, when it was forecast to pull in $7.51 billion. YouTube chalked this up to the lingering pandemic impacts, saying the slower growth is more of a reflection of last year’s gains. At the time, it also reported Shorts was now generating 30 billion views per day.

 

While (former) startups like Lemonade came along to attack the tired world of insurance, the travel insurance market is now coming in for the same treatment from the likes of Safetywing (covered by TC here) and Battleface.

In an ideal world, travel insurance would be easier to understand, would pay out quickly when things go wrong, and operate almost like Apple Pay or Google Pay in its simplicity. New ‘whole-trip travel insurance’ startup Faye – which exited stealth mode last month – hopes to bring that kind of vibe with its approach, and now it’s raised backing to do it.

The startup has pulled in $8 million in a seed funding round led by Viola Ventures and F2 Venture Capital. Also participating were Portage Ventures, Global Founders Capital (GFC) and former NBA player Omri Casspi.

It’s fair to say that most travel insurance products are built not so much for consumers as for distributors, and jargon-filled add-ons abound. Faye claims its approach is much simpler: customers are asked six questions in order to find the right plan.

The platform covers trips, health, belongings, and pets via an app that sends alerts, offers 24/7 customer support, and enables digital claims filing and electronic transfers of reimbursements to its Faye Wallet.

Co-founder and CEO Elad Schaffer said in a statement: “Travel insurance has become synonymous with lengthy, jargon-filled policies that leave travelers confused rather than well-informed… Faye is hitting the market as a solution to these pain points, at a time in which consumers are planning to travel more than ever before and are seeking solutions to look after them while on the road.”

Faye will also cover travelers who contract COVID-19 pre-trip, as well as trip cancellations (subject to the terms of the plan), emergency medical expenses, and trip interruption.

Omry Ben David, general partner at Viola Ventures, commented: “Faye is re-inventing the category of consumer travel insurance in the U.S. where data isn’t leveraged yet for bespoke, price-optimized coverage in an industry with very favorable growth characteristics and unit economics.”

The Faye team includes Jeff Rolander (formerly of Allianz), Moran Treiser (formerly of Lemonade) and Lauren Gumport (formerly of Guesty).

Google’s developer conference Google I/O is back, which means that the company has a few things to announce. During the opening keynote, Google is expected to unveil new hardware products, new software updates and new features for Google’s ecosystem.

The conference starts at 10 a.m. PDT (1 p.m. on the East Coast, 6 p.m. in London, 7 p.m. in Paris) and you can watch the livestream right here on this page.

Rumor has it that Google could unveil the Pixel Watch. This isn’t the company’s first experience in the smartwatch space, but it represents a fresh new start with Google’s own hardware division leveraging Wear OS. If you’re a Pixel person, you can also expect some smartphone news and maybe new accessories.

More importantly, Google will likely share some news about its flagship services, such as Google Maps, Google’s search engine, YouTube and Google Play. It’s going to be interesting to see if Google has anything to share about Chrome and Android as well.

Whether you’re a Google user who relies a lot on Google’s ecosystem or a tech enthusiast who wants to see what’s next for Google, make sure to watch today’s keynote and read our coverage on TechCrunch.

Read more about Google I/O 2022 on TechCrunch

The UK government has confirmed it will move forward on a major ex ante competition reform aimed at Big Tech, as it set out its priorities for the new parliamentary session earlier today.

However, it has only said that draft legislation will be published over this period — booting the prospect of passing updated competition rules for digital giants further down the road.

At the same time today it confirmed that a “data reform bill” will be introduced in the current parliamentary session.

This follows a consultation it kicked off last year to look at how the UK might diverge from EU law in this area, post-Brexit, by making changes to domestic data protection rules.

There has been concern that the government is planning to water down citizens’ data protections. Details the government published today, setting out some broad-brush aims for the reform, don’t offer a clear picture either way — suggesting we’ll have to wait to see the draft bill itself in the coming months.

Read on for an analysis of what we know about the UK’s policy plans in these two key areas… 

Ex ante competition reform

The government has been teasing a major competition reform since the end of 2020 — putting further meat on the bones of the plan last month, when it detailed a bundle of incoming consumer protection and competition reforms.

But today, in a speech setting out prime minister Boris Johnson’s legislative plans for the new session at the state opening of parliament, it committed to publish measures to “create new competition rules for digital markets and the largest digital firms”; also saying it would publish “draft” legislation to “promote competition, strengthen consumer rights and protect households and businesses”.

In briefing notes to journalists published after the speech, the government said the largest and most powerful platforms will face “legally enforceable rules and obligations to ensure they cannot abuse their dominant positions at the expense of consumers and other businesses”.

A new Big Tech regulator will also be empowered to “proactively address the root causes of competition issues in digital markets” via “interventions to inject competition into the market, including obligations on tech firms to report new mergers and give consumers more choice and control over their data”, it also said.

However, another key detail from the speech specifies that the forthcoming Digital Markets, Competition and Consumer Bill will only be put out in “draft” form over the parliament — meaning the reform won’t be speeding onto the statute books.

Instead, up to a year could be added to the timeframe for passing laws to empower the Digital Markets Unit (DMU) — assuming, of course, that Johnson’s government survives that long. The DMU was set up in shadow form last year but does not yet have the legislative power to make the planned “pro-competition” interventions with which policymakers intend to correct structural abuses by Big Tech.

(The government’s Online Safety Bill, for example — which was published in draft form in May 2021 — wasn’t introduced to parliament until March 2022; and remains at the committee stage of the scrutiny process, with likely many more months before final agreement is reached and the law passed. That bill was included in the 2022 Queen’s Speech so the government’s intent continues to be to pass the wide-ranging content moderation legislation during this parliamentary session.)

The delay to introducing the competition reform means the government has cemented a position lagging the European Union — which reached political agreement on its own ex ante competition reform in March. The EU’s Digital Markets Act is slated to enter into force next Spring, by which time the UK may not even have a draft bill on the table yet. (While Germany passed an update to its competition law last year and has already designated Google and Meta as in scope of the ex ante rules.)

The UK’s delay will be welcomed by tech giants, of course, as it provides another parliamentary cycle to lobby against an ex ante reboot that’s intended to address competition and consumer harms in digital markets which are linked to giants with so-called “Strategic Market Status”.

This includes issues that the UK’s antitrust regulator, the CMA, has already investigated and confirmed (such as Google and Facebook’s anti-competitive dominance of online advertising); and others it suspects of harming consumers and hampering competition too (like Apple and Google’s chokepoint hold over their mobile app stores).

Any action in the UK to address those market imbalances doesn’t now look likely before 2024 — or even later.

Recent press reports, meanwhile, have suggested Johnson may be going cold on the ex ante regime — which will surely encourage Big Tech’s UK lobbyists to seize the opportunity to spread self-interested FUD in a bid to totally derail the plan.

The delay also means tech giants will have longer to argue against the UK introducing an Australian-style news bargaining code — which the government appears to be considering for inclusion in the future regime.

One of the main benefits of the bill is listed as [emphasis ours]:

“Ensuring that businesses across the economy that rely on very powerful tech firms, including the news publishing sector, are treated fairly and can succeed without having to comply with unfair terms.”

“The independent Cairncross Review in 2019 identified an imbalance of bargaining power between news publishers and digital platforms,” the government also writes in its briefing note, citing a Competition and Markets Authority finding that “publishers see Google and Facebook as ‘must have’ partners as they provide almost 40 per cent of large publishers’ traffic”.

Major consumer protection reforms which are planned in parallel with the ex ante regime — including letting the CMA decide for itself when UK consumer law has been broken and fine violating platforms over issues like fake reviews, rather than having to take the slow route of litigating through the courts — are also on ice until the bill gets passed. So major ecommerce and marketplace platforms will also have longer to avoid hard-hitting regulatory action for failures to purge bogus reviews from their UK sites.

Consumer rights group Which? welcomed the government’s commitment to legislate to strengthen the UK’s competition regime and beef up powers to clamp down on tech firms that breach consumer law. However, it described it as “disappointing” that only a draft bill will be published in this parliamentary session.

“The government must urgently prioritise the progress of this draft Bill so as to bring forward a full Bill to enact these vital changes as soon as possible,” added Rocio Concha, Which? director of policy and advocacy, in a statement.

Data reform bill

In another major post-Brexit policy move, the government has been loudly flirting with ripping up protections for citizens’ data — or, at least, killing off cookie banners.

Today it confirmed it will move forward with ‘reforming’ the rules wrapping people’s data — just without spelling out the exact changes it plans to make. So where exactly the UK is headed on data protection remains unclear.

That said, in briefing notes on the forthcoming data reform bill, the government appears to be directing most focus at accelerating public sector data sharing instead of suggesting it will pass amendments that pave the way for unfettered commercial data-mining of web users.

Indeed, it claims that ensuring people’s personal data “is protected to a gold standard” is a core plank of the reform.

A section on the “main benefits” of the reform also notably lingers on public sector gains — with the government writing that it will be “making sure that data can be used to empower citizens and improve their lives, via more effective delivery of public healthcare, security, and government services”.

But of course the devil will be in the detail of the legislation presented in the coming months. 

Here’s what else the government lists as the “main elements” of the upcoming data reform bill:

  • Using data and reforming regulations to improve the everyday lives of people in the UK, for example, by enabling data to be shared more efficiently between public bodies, so that delivery of services can be improved for people.
  • Designing a more flexible, outcomes-focused approach to data protection that helps create a culture of data protection, rather than “tick box” exercises.

Discussing other “main benefits” for the reform, the government touts increased “competitiveness and efficiencies” for businesses, via a suggested reduction in compliance burdens (such as “by creating a data protection framework that is focused on privacy outcomes rather than box-ticking”); a “clearer regulatory environment for personal data use” which it suggests will “fuel responsible innovation and drive scientific progress”; “simplifying the rules around research to cement the UK’s position as a science and technology superpower”, as it couches it; and ensuring the data protection regulator (the ICO) takes “appropriate action against organisations who breach data rights and that citizens have greater clarity on their rights”.

What all these muscular-sounding claims boil down to is whatever the government means by an “outcomes-focused” approach to data protection vs “tick box” privacy compliance. (As well as what “responsible innovation” might imply.)

It’s also worth mulling what the government means when it says it wants the ICO to take “appropriate” action against breaches of data rights. Given the UK regulator has been heavily criticized for inaction in key areas like adtech you could interpret that as the government intending the regulator to take more enforcement over privacy breaches, not less.

(And its briefing note does list “modernizing” the ICO, as a “purpose” for the reform — in order to “[make] sure it has the capabilities and powers to take stronger action against organisations who breach data rules while requiring it to be more accountable to Parliament and the public”.)

However, on the flip side, if the government really intends to water down Brits’ privacy rights — by, say, letting businesses override the need to obtain consent to mine people’s info via a more expansive legitimate interest regime for commercial entities to do what they like with data (something the government has been considering in the consultation) — then the question is how that would square with a top-line claim for the reform ensuring “UK citizens’ personal data is protected to a gold standard”?

The overarching question here is whose “gold standard” the UK is intending to meet? Brexiters might scream for their own yellow streak — but the reality is there are wider forces at play once you’re talking about data exports.

Despite Johnson’s government’s fondness for ‘Brexit freedom’ rhetoric, when it comes to data protection law the UK’s hands are tied by the need to continue meeting the EU’s privacy standards, which require an equivalent level of protection for citizens’ data outside the bloc — at least if the UK wants data to be able to flow freely into the country from the bloc’s ~447M citizens, i.e. to all those UK businesses keen to sell digital services to Europeans.

This free flow of data is governed by a so-called adequacy decision which the European Commission granted the UK in June last year, essentially on account that no changes had (yet) been made to UK law since it adopted the bloc’s General Data Protection Regulation (GDPR) in 2018 by incorporating it into UK law.

And the Commission simultaneously warned that any attempt by the UK to weaken domestic data protection rules — and thereby degrade fundamental protections for EU citizens’ data exported to the UK — would risk an intervention. Put simply, that means the EU could revoke adequacy — requiring all EU-UK data flows to be assessed for legality on a case-by-case basis, vastly ramping up compliance costs for UK businesses wanting to import EU data.

Last year’s adequacy agreement also came with a baked-in sunset clause of four years — meaning it will be up for automatic review in 2025. Ergo, the amount of wiggle room the UK government has here is highly limited. Unless it’s truly intent on digging ever deeper into the lunatic sinkhole of Brexit by gutting this substantial and actually expanding sunlit upland of the economy (digital services).

The cost — in pure compliance terms — of the UK losing EU adequacy has been estimated at between £1BN-£1.6BN. But the true cost in lost business/less scaling would likely be far higher.

The government’s briefing note on its legislative program itself notes that the UK’s data market represented around 4% of GDP in 2020; also pointing out that data-enabled trade makes up the largest part of international services trade (accounting for exports of £234BN in 2019).

It’s also notable that Johnson’s government has never set out a clear economic case for tearing up UK data protection rules.

The briefing note continues to gloss over that rather salient detail — saying that analysis by the Department for Digital, Culture, Media and Sport (DCMS) “indicates our reforms will create over £1BN in business savings over ten years by reducing burdens on businesses of all sizes”; but without specifying exactly what regulatory changes it’s attaching those theoretical savings to.

And that’s important because — keep in mind — if the touted compliance savings are created by shrinking citizens’ data protections that risks the UK’s adequacy status with the EU — which, if lost, would swiftly lead to at least £1BN in increased compliance costs around EU-UK data flows… thereby wiping out the claimed “business savings” from ‘less privacy red tape’.

The government does cite a 2018 economic analysis by DCMS and a tech consultancy, called Ctrl-Shift, which it says estimated that the “productivity and competition benefits enabled by safe and efficient data flows would create a £27.8BN uplift in UK GDP”. But the keywords in that sentence are “safe and efficient”; whereas unsafe EU-UK data flows would face being slowed and/or suspended — at great cost to UK GDP…

The whole “data reform bill” bid does risk feeling like a bad-faith PR exercise by Johnson’s thick-on-spin, thin-on-substance government — i.e. to try to claim a Brexit ‘boon’ where there is, in fact, none.

See also this “key fact” which accompanies the government’s spiel on the reform — claiming:

“The UK General Data Protection Regulation and Data Protection Act 2018 are highly complex and prescriptive pieces of legislation. They encourage excessive paperwork, and create burdens on businesses with little benefit to citizens. Because we have left the EU, we now have the opportunity to reform the data protection framework. This Bill will reduce burdens on businesses as well as provide clarity to researchers on how best to use personal data.”

Firstly, the UK chose to enact those pieces of legislation after the 2016 Brexit vote to leave the EU. Indeed, it was a Conservative government (not led by Johnson at that time) that passed these “highly complex and prescriptive pieces of legislation”.

Moreover, back in 2017, the former digital secretary Matt Hancock described the EU GDPR as a “decent piece of legislation” — suggesting then that the UK would, essentially, end up continuing to mirror EU rules in this area because it’s in its interests to do so to in order to keep data flowing.

Fast forward five years and the Brexit bombast may have cranked up to Johnsonian levels of absurdity, but the underlying necessity for the government to “maintain unhindered data flows”, as Hancock put it, hasn’t gone anywhere — well, assuming ministers haven’t abandoned the idea of actually trying to grow the economy.

But there again the government lists creating a “pro-growth” (and “trusted”) data protection framework as a key “purpose” for the data reform bill — one which it claims can both reduce “burdens” for businesses and boost the economy. It just can’t tell you how it’ll pull that Brexit bunny out of the hat yet.

Arrikto’s mission is to enable data scientists to build and deploy their machine learning models faster. The company, which raised a $10 million Series A round in late 2020, is building its platform on top of Kubeflow, a cloud-native open-source project for machine learning operations that was originally developed by Google but is now mostly managed by the community. Until now, Arrikto’s main product was a self-managed distribution of Kubeflow (aptly named ‘Enterprise Kubeflow’) for enterprises that wanted to run it in their data centers or virtual private clouds. Today, the company is also launching a fully managed version of Kubeflow.

“Pushing ML models from experimentation all the way to production is incredibly complex,” Arrikto CEO and co-founder Constantinos Venetsanopoulos told me. “We see a few common reasons for this. Number one is data scientists are essentially not ops experts and ops people aren’t data scientists — and they don’t want to become data scientists. Second, we have seen an explosion of ML tools the last couple of years. They are extremely fragmented and they require a lot of integration. What we’re seeing is people struggling to stitch everything together. Both of those factors create a massive barrier to entry.”

Image Credits: Arrikto

With its fully managed Kubeflow, Arrikto aims to give businesses a platform that can help them accelerate their ML pipelines and free data scientists from having to worry about the infrastructure, while also allowing them to continue to use the tools they are already familiar with (think notebooks, TensorFlow, PyTorch, Hugging Face, etc.). “We want to break down the technical barrier that keeps most companies from deploying real machine learning capabilities,” said Venetsanopoulos.

With Kubeflow as a Service, the company argues, data scientists will get instant access to an end-to-end MLOps platform. It’s essentially Arrikto’s Enterprise Kubeflow with a lot of custom automation tooling layered on top to abstract away the details of the underlying Kubernetes platform.
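For context, this is roughly what a pipeline looks like in the open-source Kubeflow Pipelines SDK (kfp, v2-style syntax). It is generic Kubeflow usage with placeholder component bodies, not Arrikto-specific code:

```python
# A minimal Kubeflow pipeline sketch using the open-source kfp SDK.
from kfp import compiler, dsl

@dsl.component
def preprocess(rows: int) -> int:
    # Placeholder: pretend we cleaned `rows` records here.
    return rows

@dsl.component
def train(rows: int) -> str:
    # Placeholder: pretend we trained a model on the cleaned data.
    return f"model trained on {rows} rows"

@dsl.pipeline(name="demo-pipeline")
def demo_pipeline(rows: int = 1000):
    prep = preprocess(rows=rows)
    train(rows=prep.output)  # wires the two steps into a DAG

# Compile to a spec that any Kubeflow Pipelines backend can run.
compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```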

For now, Arrikto will only run on a single cloud, but in the long run the plan is to support the three major cloud providers to ensure low latencies (and reduce the need to move lots of data between clouds).

Interestingly, Venetsanopoulos argues that the company’s biggest competitor right now isn’t other managed services like AWS’ SageMaker but businesses trying to build their own platforms by stitching together open-source tools.

“Kubeflow as a Service gives both data scientists and DevOps engineers the easiest way to use an MLOps platform on Kubernetes without having to request any infrastructure from their IT departments,” said Venetsanopoulos. “When an organization deploys Kubeflow in production – whether on-prem or in the cloud – Arrikto’s Kubeflow as a Service will turbocharge the process.”

The company, which now has about 60 employees, will continue to offer Kubeflow Enterprise in addition to this new fully managed service.

Supabase, which bills itself as an open source alternative to services like Google’s Firebase, today announced that it has raised an $80 million Series B funding round led by Felicis Ventures. Coatue and Lightspeed also participated in this round, which brings the company’s total funding to date to $116 million.

The service can’t, of course, match Firebase on a feature-by-feature basis, but it offers many of the core features that developers would need to get started, including a database, storage and authentication service, as well as the recently launched Supabase Edge Functions, a serverless functions-as-a-service offering.

Image Credits: Supabase

As Supabase CEO and co-founder Paul Copplestone told me, the company saw rapid growth in the last year, with a community that has now grown to more than 80,000 developers who have created over 100,000 databases on the service — a growth of 1,900% in the last 12 months.

“We’re moving upmarket and finding more and more customers who are using us as a Postgres-as-a-service offering — basically as an alternative to [Amazon] RDS. And Heroku, of course, at the moment is an interesting one since so many people are looking to migrate. So basically, anyone who would be looking at a Postgres-as-a-service offering, we’re starting to see more and more of these people reach out directly,” he said, and also noted that it’s not just the database itself that users are interested in but also the company’s tooling around it, including autogenerated APIs and GraphQL extensions, as well as the ability to scale the database up as needed. “It’s really focused on making it very easy for developers to get started and build on top of,” he said.
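To illustrate those autogenerated APIs, here is a minimal sketch using the supabase-py client; the project URL, key, and table and column names are hypothetical placeholders:

```python
# Minimal sketch: querying a Supabase Postgres table through its
# autogenerated REST API. URL, key, and table/column names are made up.
from supabase import create_client

supabase = create_client("https://your-project.supabase.co", "your-anon-key")

rows = (
    supabase.table("countries")
    .select("*")
    .eq("continent", "Europe")  # filters compile down to PostgREST queries
    .execute()
)
print(rows.data)
```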

The investors, too, are looking at Supabase because of its focus on its database. “If you look at the world’s most valuable companies, even in the current market environment, it’s either database or security, both of which, right now, are two top areas,” explained Felicis founder and managing partner Aydin Senkut. “When we look at database companies […], first of all, I need to give credit to [Paul Copplestone] and [co-founder Anthony Wilson]. They are very impressive as founders. They truly move really fast. The speed at which they ship code and impress customers is honestly at a level that I’ve only seen at companies like Google and Shopify.”

He also noted that Supabase was obviously able to show impressive growth numbers, with the expectation that the team will now be able to execute on this and monetize the platform effectively. “We’re monetizing on usage of the infrastructure and it’s a very clear path to revenue and one that we hope to see a lot more growth in over the next few years,” said Copplestone when I asked him about the company’s monetization strategy.

With this new funding, Supabase plans to double its team in order to build out its platform (with a focus on enterprise features) and go-to-market strategy. Until now, most of the company’s growth has been organic.

Hardware, as the saying goes, is hard, but there remains an opportunity for startups that focus on specific niches to build viable businesses. In the latest example, reMarkable, the Oslo, Norway-based maker of a simple and slick $299 e-paper tablet of the same name, says that it has passed 1 million devices sold since 2017 and recently raised money at a $1 billion valuation after making revenues of $300 million and operating profits of $31 million in 2021.

Founder and CEO Magnus Wanberg said reMarkable is not disclosing the amount of the investment, nor who was involved, except to say that it’s a minority stake in the company and that it came from multiple international (not Norwegian) investors. The company employs 300+ people and Wanberg says it is still “majority employee owned.”

“Nothing’s fishy but we’re keeping it confidential,” he said when I asked why the reticence on the investment. He noted that while the deal was made last year, the startup is disclosing it now as “a good indication, a signal out to the world” of how the company is doing. “This is just sprinkles for us,” he said more than once during our interview.

Spark, which led a $15 million investment into the company in 2019 (when it had sold a mere 100,000 devices), remains a shareholder in the company, Wanberg added. And it seems that the startup is open to raising more to invest in growth (perhaps another reason for speaking about its latest investment now).

reMarkable’s growth and milestone investment are remarkable (sorry, had to do it) in themselves, but it is also worth considering why and how a company like reMarkable is finding traction.

A large number of consumers clearly do not mind being very online, but there is definitely a seam of users looking for ways to use new technology that doesn’t at the same time mean being locked into the litany of pings and distractions that comes with so much connected technology today. And increasingly we are seeing companies building for that seam of users. reMarkable is one of them, and Wanberg believes the company’s success to date is due in large part to the focus it has on “focus.”

“The future of the tablet as we see it is in the direction that Apple and others are heading, a fusion of laptop and tablet forms,” which complements how people also use smartphones, he said in an interview. “But our offering is a third device, a focused space for books, drawing and notes, where you can really avoid distractions and procrastination. That is our positioning.”

Even its small concessions to aesthetics — the sound and feel of reMarkable’s pen on its screen are more akin to a writing utensil moving across paper than a stylus gliding on the glass of your iPad — feel in aid of trying to help people forget they are using a piece of electronics.

The company originally banked on selling hardware, which today is used by “hundreds of thousands” of active users. The company’s reMarkable 2 model, launched in 2020 as the Covid-19 outbreak went global, really rode the wave of more people doing more things at home and trying to find more nuanced uses for their quiet time.

But in October of last year reMarkable made a bet on aligning itself closer with that idea of focus, launching a subscription service called Connect.

While others like Apple have also built out recurring services businesses based around its hardware, this was an especially important milestone for reMarkable, which has only released two devices since being founded in 2013 and touts that you do not need to buy a new device for at least 10 years when you buy one.

Billed monthly in two tiers (normal at $7.99/month and “lite” at $4.99/month), Connect is how the startup plans to make a substantial part of its money going forward (indeed, when I asked, it declined to give any projections for device sales for 2022). Among its features, Connect provides continuous software updates; cloud storage; connectivity with Dropbox, Google Drive and OneDrive if you want it; an extended warranty for the tablet; handwriting conversion; screen sharing and a feature to send by email — in other words, a few features to get information into and out of your reMarkable tablet, but otherwise nothing especially real-time and dynamic.

In this way, even though it calls itself a tablet, the reMarkable is more like an e-reader, Wanberg said.

“With an e-reader, you own and use it for quite a long time,” he explained. “In our business, it’s not a new-model-every-year dynamic. There is no emphasis on new model ownership. We don’t want to force our company to slap on some iteration for the sake of it. There is true innovation, major steps in terms of what we can offer the customer. We also think it’s great from a sustainability perspective [to move away from] pushing out new hardware.”

Wanberg didn’t disclose how many have adopted Connect so far, only noting that so far it has had a “great response” as reMarkable “tries to prove to customers that we can serve them on a running basis.” Given that Connect only launched in October of last year, it may be too early to tell. Its $31 million operating profit in 2021 was more than triple its profit in 2020 ($10 million), but reMarkable noted that this was “driven largely by sales of its latest paper tablet.”

 

                  2021            2020            2019
Revenue*          $303 million    $138 million    $42 million
EBITDA*           $31 million     $10 million     -$3 million
Annual growth     120%            229%            69%

*Converted from NOK to USD at the Central Bank of Norway’s rate as of December 31, 2021 (8.8194).
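As a quick sanity check, the growth percentages line up with the revenue figures in the table, computed here from the rounded USD numbers (2019’s 69% would require the 2018 figure, which isn’t given):

```python
# Checking the "Annual growth" row against the revenue row above.
revenue_usd_m = {2019: 42, 2020: 138, 2021: 303}

for year in (2020, 2021):
    growth = (revenue_usd_m[year] / revenue_usd_m[year - 1] - 1) * 100
    print(f"{year}: {growth:.0f}% growth")  # 2020: 229%, 2021: 120%
```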