Steve Thomas - IT Consultant

At its I/O developer conference, Google today launched Google Wallet, a new Android and Wear OS app that will allow users to store things like credit cards, loyalty cards, digital IDs, transit passes, concert tickets, vaccination cards and more.

That’s pretty straightforward, but from here on out, it gets a bit confusing. Google, after all, has long offered the Google Pay app (and yes — a Google Wallet app, too), where you could store your credit cards for online and contactless payments. Back in 2020, Google made some major changes to Google Pay to refocus it more on tracking your spending and sending and receiving money between friends and family members. At that point, Google even wanted to launch its own bank account, in partnership with financial institutions like Citi, that users would manage in Google Pay. That project, dubbed Plex, never saw the light of day and was quickly shelved after the executive behind the project left Google barely six months after the announcement.

Image Credits: Google

Currently, Google Pay is available in 42 markets, Google says. In 39 of those markets, where Google Pay is still primarily a wallet, users will simply see the Google Pay app update to the new Google Wallet app. In the U.S. and Singapore, however, Google Pay will remain the payments-focused app, while the Wallet app will exist in parallel to focus on storing your digital cards. Meanwhile, in India, Google says that “people will continue to use the Google Pay app they are familiar with today.”

Image Credits: Google

“The Google Pay app will be a companion app to the Wallet,” said Arnold Goldberg, the VP and GM of Payments at Google, who joined the company earlier this year after a long stint at PayPal. “Think of [the Google Pay app] as this higher value app that will be a place for you to make payments and manage money, whereas the wallet will really be this container for you to store your payment assets and your non-payment assets.”

Goldberg noted that Google decided to go this route because of the rapid digitization we’ve been seeing during the last two years of the pandemic. “We talk about ten years of change in two years from just a behavior perspective and people almost demanding now digitization versus it being a nice-to-have pre-COVID,” he said. “It’s clarified our focus on what we need to do, as a payments organization — what we need to do as a company — to reimagine not just what we’re doing from a payments perspective online and in-store, but also thinking about what we can enable people to do with their digital wallets.”


Google today announced that its Chrome browser will now offer users the ability to use a virtual credit card number in online payment forms on the web. These virtual card numbers allow you to keep your ‘real’ credit card number safe when you buy something online since they can be easily revoked if a merchant’s systems get hacked. A number of credit card issuers already offer these virtual credit card numbers, but they are probably far less mainstream than they should be.

Image Credits: Google

Google says these virtual cards will roll out in the U.S. later this summer. Google is working with card issuers like Capital One, the launch partner for this feature, as well as the major networks: Visa and American Express will be supported at launch, with Mastercard support coming later this year. Having support from the networks is definitely a big deal here, because trying to get every individual card issuer on board would be a difficult task.

The new feature will be available on Chrome on desktop and Android first, with iOS support rolling out later.

“This is a landmark step in bringing the security of virtual cards to as many consumers as possible,” said Arnold Goldberg, the Vice President and General Manager of Payments at Google. “Shoppers using Chrome on desktop and Android can enjoy a fast checkout experience when shopping online while having the peace of mind knowing that their payment information is protected.”

From the user perspective, this new autofill option will simply enter the virtual card’s details for you, including the CVV that you can never remember for your physical cards. You can then manage the virtual cards and see your transactions at pay.google.com. While these virtual cards are typically used for one-time purchases, you will be able to use them for subscriptions, too.

Since this is Google, some users will obviously worry that the company will use this additional data about their purchase habits, but Google says it will not use any of this information for ad targeting purposes.


Among the privacy and security-related updates announced today at Google’s I/O conference, the company says it’s bringing phishing protection to its suite of productivity apps, including Docs, Sheets, and Slides. It will also newly alert users to other possible security issues with their accounts directly on their account profile and offer a new tool that makes it easier to request the removal of your personal information from Google Search.

The company has already developed technology to protect users against phishing scams elsewhere across its products and services, including in Gmail and Chrome. Those protections have detected and blocked billions of threats to date, says Google, which has helped to further strengthen Google’s A.I.-powered protections. That’s why it’s now able to extend this protection to other apps that are often used in the workplace.

Image Credits: Google

Soon, if users are working in a document where Google spots a suspicious link, it will alert them to the issue and take them back to safety, much as it does on the web. The addition will help to increase user safety amid a growing number of phishing scams, which the company notes are now responsible for over 90% of recent cyberattacks. (The company pre-announced this feature ahead of I/O in April.)

Along with this release, Google Apps users will also be warned about other security issues right on their profiles.

“We were the first consumer tech company to offer two-step verification over 10 years ago. And last year, we were the first to turn it on by default… We don’t ever want people to worry about the safety of their accounts, so at I/O we’re also launching a new alert on the profile picture across all Google Apps, letting users know if there’s a security issue that needs their attention,” said Guemmy Kim, director of account security at Google.

At I/O, the company announced it had enrolled an additional 150 million accounts in two-step verification in the last year alone.

Image Credits: Google

When there’s an issue, a yellow alert will pop up on the screen on top of the account profile picture. When clicked, users will be taken to a page with a set of recommended actions they need to take in order to stay safe online. This isn’t necessarily offering new functionality in terms of the protections offered to users, but is highlighting potential risks in a more obvious way that users may be less inclined to ignore.

Google also introduced Protected Computing, a toolkit of technologies designed to minimize users’ data footprint, de-identify data, and restrict access to data. The feature powers Smart Reply in Messages by Google and Live Translation on Pixel.

Another new feature iterates on an existing protection.

In April, Google announced it would allow users to request the removal of their personal contact information from Google Search, including a phone number, email address, or physical address. The change followed the E.U.’s General Data Protection Regulation, which took effect in 2018 and included a section giving individuals the right to have information about themselves removed from search engines, also known as the “right to be forgotten.”

Previously, this process involved filling out and signing a form.

But now, Google says it will roll out a new tool to streamline the request process.

When it launches, if you come across Google Search results that contain your phone number, home address, or email, you’ll be able to quickly request the removal from Google Search right where you found them.

Image Credits: Google

Instead of then filling out a form, you can use Google’s user interface to click on the type of result you want to be removed and submit it directly to Google. You’ll also be able to track your requests in a single place to see which ones you’ve submitted, which are pending, and which have been approved.

Google says this feature will be available in the coming months in the Google app and will be accessible in individual Google search results on the web.


At its I/O developer conference, Google today announced the public preview of a full cluster of Google Cloud’s new Cloud TPU v4 Pods.

Google’s fourth iteration of its Tensor Processing Units launched at last year’s I/O, and a single TPU pod consists of 4,096 of these chips. Each chip has a peak performance of 275 teraflops, and each pod promises up to 1.1 exaflops of combined compute power. Google now operates a full cluster of eight of these pods in its Oklahoma data center with up to 9 exaflops of peak aggregate performance. Google believes this makes it “the world’s largest publicly available ML hub in terms of cumulative computing power, while operating at 90% carbon-free energy.”
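A quick back-of-the-envelope check shows how the quoted figures fit together (a sketch using only the numbers Google cites, not additional hardware specs):

```python
# Sanity check of Google's Cloud TPU v4 figures from the announcement.
CHIPS_PER_POD = 4096       # chips in a single TPU v4 pod
TFLOPS_PER_CHIP = 275      # peak teraflops per chip
PODS_IN_CLUSTER = 8        # pods in the Oklahoma cluster

# Convert teraflops (1e12 FLOPS) to exaflops (1e18 FLOPS).
pod_exaflops = CHIPS_PER_POD * TFLOPS_PER_CHIP / 1_000_000
cluster_exaflops = pod_exaflops * PODS_IN_CLUSTER

print(f"Per pod:  {pod_exaflops:.2f} exaflops")   # ~1.13, quoted as "up to 1.1"
print(f"Cluster:  {cluster_exaflops:.2f} exaflops")  # ~9.01, quoted as "up to 9"
```

The per-pod product slightly exceeds the quoted 1.1 exaflops, which suggests Google is rounding down; eight pods together land almost exactly on the 9-exaflop aggregate figure.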

“We have done extensive research to compare ML clusters that are publicly disclosed and publicly available (meaning – running on Cloud and available for external users),” a Google spokesperson told me when I asked the company to clarify its benchmark. “Those clusters are powered by supercomputers that have ML capabilities (meaning that they are well-suited for ML workloads such as NLP, recommendation models, etc.). The supercomputers are built using ML hardware — e.g. GPUs (graphics processing units) — as well as CPU and memory. With 9 exaflops, we believe we have the largest publicly available ML cluster.”

At I/O 2021, Google’s CEO Sundar Pichai said that the company would soon have “dozens of TPU v4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. And our TPUv4 pods will be available to our cloud customers later this year.” Clearly, that took a bit longer than planned, but we are in the middle of a global chip shortage and these are, after all, custom chips.

Ahead of today’s announcement, Google worked with researchers to give them access to these pods. “Researchers liked the performance and scalability that TPU v4 provides with its fast interconnect and optimized software stack, the ability to set up their own interactive development environment with our new TPU VM architecture, and the flexibility to use their preferred frameworks, including JAX, PyTorch, or TensorFlow,” Google writes in today’s announcement. No surprise there. Who doesn’t like faster machine learning hardware?

Google says users will be able to slice and dice the new cloud TPU v4 cluster and its pods to meet their needs, whether that’s access to four chips (which is the minimum for a TPU virtual machine) or thousands — but also not too many, either, because there are only so many chips to go around.

As of now, these pods are only available in Oklahoma. “We have run an extensive analysis of various locations and determined that Oklahoma, with its exceptional carbon-free energy supply, is the best place to host such a cluster. Our customers can access it from almost anywhere,” a spokesperson explained.

Google today announced three new features for its voice-activated Assistant that will all make it easier and more natural to interact with it.

The first is about making it easier to initiate a conversation with the Assistant by simply looking at a device like the Nest Hub, with its built-in camera, and talking to the Assistant without using the “Hey Google” wake word. This will roll out later this week to users who pair their Nest Hub Max with an Android device, while iOS users will have to wait a few more weeks.

Image Credits: Google

The other new feature is extended support for quick phrases, that is, the ability to use a quick phrase to answer a phone call, turn off the lights or ask about the weather, all without having to use a wake word either. That means going forward, you’ll be able to simply set a timer without saying “Hey Google.” Google notes that this is an opt-in feature and that it will use the company’s voice match feature that’s already available on the Nest Hub today.

Image Credits: Google

Finally, Google is also making some changes to how the Assistant processes your requests so that it will be able to better understand your intent, even if you have to correct yourself or pause while you think about how you want to phrase your question, for example.

“We realize when evaluating real conversations, they’re full of nuances,” said Nino Tosca, the director of product management for the Google Speech team and the Google Assistant. “People say ‘uhm,’ interruptions when two people are speaking back and forth, pauses, self-corrections — but we realized that with two humans communicating, these things are natural. They don’t really get in the way with people understanding each other. […] We’re trying to bring these natural behaviors to the Google Assistant so that a user doesn’t have to think before they say a command — or actually process the command in their head, make sure they have every word right and then try to get it out perfectly. We want you to be able to just talk to the Google Assistant like you would with another human and we’ll understand the meaning and be able to fulfill your intent.”

Image Credits: Google

Sadly, this feature is still in development but should roll out sometime in early 2023. Google has always used I/O to showcase upcoming features, even though some of them never launch, so we’ll just have to wait and see where this one goes.

Overall, though, these seem like worthwhile additions to the Google Assistant feature set. Saying ‘Hey Google’ quickly gets old, after all, and continues to feel a bit weird. Indeed, I can’t help but think that the shine has worn off a bit from the Assistant (and its competitors). Personally, despite having a bunch of Nest Hubs and Google Homes at home, I don’t think I’ve used them for anything but turning on the lights using their touchscreen and setting the occasional cooking timer in recent months. Google has major ambitions around ‘ambient computing,’ but when the Assistant doesn’t understand you and then randomly starts playing a Justin Bieber video on your TV, it feels like that future still needs some tuning. Anything to remove those barriers is welcome.


In April, Google introduced a new “multisearch” feature that offered a way to search the web using both text and images at the same time. Today, at Google’s I/O developer conference, the company announced an expansion to this feature, called “Multisearch Near Me.” This addition, arriving later in 2022, will allow Google app users to combine either a picture or a screenshot with the text “near me” to be directed to options for local retailers or restaurants that would have the apparel, home goods, or food you’re in search of. It’s also pre-announcing a forthcoming development to multisearch that appears to be built with AR glasses in mind, as it can visually search across multiple objects in a scene based on what you’re currently “seeing” via a smartphone camera’s viewfinder.

With the new “near me” multisearch query, you’ll be able to find local options related to your current visual and text-based search combination. For example, if you were working on a DIY project and came across a part that you needed to replace, you could snap a photo of the part with your phone’s camera to identify it and then find a local hardware store that has a replacement in stock.

This isn’t all that different from how multisearch already works, Google explains — it’s just adding the local component.

Image Credits: Google

Originally, the idea with multisearch was to allow users to ask questions about an object in front of them and refine those results by color, brand, or other visual attributes. The feature today works best with shopping searches, as it allows users to narrow down product searches in a way that standard text-based web searches could sometimes struggle with. For instance, a user could snap a photo of a pair of sneakers, then add text to ask to see them in blue to be shown just those shoes in the color specified. They could choose to visit the website for the sneakers and immediately purchase them, as well. The expansion to include the “near me” option now simply limits the results further in order to point users to a local retailer where the given product is available.

In terms of helping users find local restaurants, the feature works similarly. In this case, a user could search based on a photo they found on a food blog or somewhere else on the web to learn what the dish is and which local restaurants might have the option on their menu for dine-in, pickup, or delivery. Here, Google Search combines the image with the intent that you’re in search of a nearby restaurant, and will scan across millions of images, reviews, and community contributions to Google Maps to find the local spot.

The new “near me” feature will be available globally in English and will roll out to more languages over time, Google says.

The more interesting addition to multisearch is the capability to search within a scene. In the future, Google says users will be able to pan their camera around to learn about multiple objects within that wider scene.

Google suggests the feature could be used to scan the shelves at a bookstore and then see several helpful insights overlaid in front of you.

Image Credits: Google

“To make this possible, we bring together not only computer vision and natural language understanding, but we also combine that with the knowledge of the web and on-device technology,” noted Nick Bell, senior director of Google Search. “So the possibilities and the capabilities of this are going to be huge and significant.”

The company — which came to the AR market early with its Google Glass release — didn’t confirm it had any sort of new AR glasses-type device in the works, but hinted at the possibility.

“With A.I. systems now, what’s possible today — and going to be possible over the next few years — just kind of unlocks so many opportunities,” said Bell. In addition to voice search, desktop, and mobile search, the company believes visual search will also be a bigger part of the future, he noted.

Image Credits: Google

“There are 8 billion visual searches on Google with Lens every single month now and that number is three times the size that it was just a year ago,” Bell continued. “What we’re definitely seeing from people is that the appetite and the desire to search visually is there. And what we’re trying to do now is lean into the use cases and identify where this is most useful,” he said. “I think as we think about the future of search, visual search is definitely a key part of that.”

The company, of course, is reportedly working on a secret project, codenamed Project Iris, to build a new AR headset with a projected 2024 release date. It’s easy to imagine not only how this scene-scanning capability could run on such a device, but also how any sort of image-plus-text (or voice!) search feature could be used on an AR headset. Imagine again looking at the pair of sneakers you liked, for instance, then asking a device to navigate to the nearest store where you could make the purchase.

“Looking further out, this technology could be used beyond everyday needs to help address societal challenges, like supporting conservationists in identifying plant species that need protection, or helping disaster relief workers quickly sort through donations in times of need,” suggested Prabhakar Raghavan, Google Search SVP, speaking on stage at Google I/O.

Unfortunately, Google didn’t offer a timeframe for when it expected to put the scene-scanning capability into the hands of users, as the feature is still “in development.”


In the face of increased competition from Apple Maps and its 3D city views, Google today introduced its own vision for its next-generation Google Maps with a preview of its new, more “immersive” viewing experience. The enhancement, presented during Google’s I/O conference keynote, leverages a combination of computer vision and A.I. technology to fuse together Street View and aerial imagery, offering a digital model of the world and a new way to explore cities, key landmarks, restaurants, venues, and other places of interest.

Google says it has fused together “billions” of images to create this immersive view, which allows users to explore by visually soaring over an area to see what it may look like. For example, if you were planning a trip to London, you might use the feature to look at landmarks like Big Ben or Westminster to get a better sense of the place and its architecture. You’ll also be able to use a “time slider” to adjust what the area looks like at different times of day — a feature that somewhat resembles Apple Maps’ nighttime mode, which activates a moonlight glow at dusk, even when browsing 3D cities.

Image Credits: Google

Google’s immersive mode, however, will additionally allow users to look up local weather and traffic conditions to aid with planning.

This new mode won’t stop at just representing major cities in a more immersive perspective — it will also make it possible to more easily explore inside places, including neighborhood restaurants and other popular venues.

Image Credits: Google

Users will be able to glide down to the street level and then click to view what it looks like inside a place they may want to visit. This could help people to figure out what kind of vibe a restaurant may have, among other things. You can also see the area’s live busyness and nearby traffic from this level.

Image Credits: Google

The feature won’t be immediately available everywhere, though.

Initially, it will begin to roll out to major cities including L.A., London, New York, San Francisco, and Tokyo by the end of the year, with more cities to follow in the months ahead. It will work across platforms and devices, Google said, starting with a rollout on Android and iOS later this year.

The company also announced a handful of other Google Maps updates during the event. It said that its eco-friendly routing feature, which launched in the U.S. and Canada this past fall to help drivers find the most fuel-efficient routes, will expand to Europe later this year.

Image Credits: Google

So far, the addition is estimated to have saved more than half a million metric tons of carbon emissions, which is the equivalent of taking 100,000 cars off the road. Google says the European expansion will double this figure.
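The cars-off-the-road comparison roughly checks out (a sketch; the ~4.6 metric tons of CO2 per typical passenger vehicle per year is the U.S. EPA's oft-cited estimate, not a figure from Google):

```python
# Rough consistency check of Google's eco-routing claim: ~500,000 metric
# tons of CO2 saved, described as equal to taking ~100,000 cars off the road.
SAVED_TONS = 500_000
CARS_EQUIVALENT = 100_000

tons_per_car = SAVED_TONS / CARS_EQUIVALENT
print(tons_per_car)  # 5.0 metric tons of CO2 per car per year

# Close to the EPA's ~4.6 metric tons per typical passenger vehicle per
# year, so the equivalence is in the right ballpark.
```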

Meanwhile, for developers, Google announced the launch of a new ARCore Geospatial API which will bring Maps’ Live View feature to developers for use in third-party applications.

Image Credits: Google

Live View is the feature that overlays AR arrows and directions on top of the real world, as seen through the smartphone’s camera. The idea is that you could pull out your phone and see exactly which way to begin walking when in a new place where you may otherwise be disoriented. It’s as if you’ve dropped yourself right inside Google Street View.

Now, Google will also allow its developer partners to build products that utilize Live View technologies.

Image Credits: Google/Docomo

One partner is the micromobility company Lime, which is using the API to help commuters in London, Paris, Tel Aviv, Madrid, San Diego, and Bordeaux to find spots to park their e-scooters and e-bikes. Telstra and Accenture are using it to help sports fans and concertgoers find their seats, concession stands, and restaurants at Marvel Stadium in Melbourne. In Japan, Docomo and Curiosity are building a new game that has players fending off virtual dragons with robot companions in front of Tokyo landmarks, like the Tokyo Tower, also powered via the API.


In addition to improving Google Assistant’s ability to communicate with users in a more natural way, Google today also announced improvements to its Google Translate service. The company said it’s adding 24 new languages — including its first indigenous languages of the Americas with the additions of Quechua, Guarani, and Aymara.


In total, the 24 new languages are spoken by over 300 million people worldwide, Google said.

“These range from smaller languages, like Mizo, spoken by about 800,000 people in the northeast of India, up to very large world languages like Lingala, spoken by around 45 million people across Central Africa,” said Isaac Caswell, a Google Translate research scientist.

He added that in addition to the indigenous languages of the Americas, Google Translate will support a dialect of English for the first time with Krio from Sierra Leone. The company said it selected this newest batch of languages to support by looking for languages with very large but underserved populations — which were frequently in the African continent and Indian subcontinent. It also wanted to address indigenous languages which are often overlooked by technology.

Google’s ability to add new languages has improved thanks to technological advances taking place over the past few years, Caswell said.

“Up until a couple of years ago, it simply was not technologically possible to add languages like these, which are what we call low resource — meaning that there are not very many text resources out there for them,” he explained. But a new technology called Zero-Shot Machine Translation has made it easier. “At a high level, the way you can imagine it working is you have a single gigantic neural AI model, and it’s trained on 100 different languages with translation. You can think of it as a polyglot that knows lots of languages. But then additionally, it gets to see text in 1,000 more languages that isn’t translated. You can imagine if you’re some big polyglot, and then you just start reading novels in another language, you can start to piece together what it could mean based on your knowledge of language in general,” he said.

The expansion brings the total number of languages supported by the service to 133. But Google said the service still has a long way to go, as there are some 7,000 languages globally that Translate doesn’t yet support.

The new languages will be live today on Google Translate, but won’t reach all users worldwide for a couple of days, Google noted.


YouTube is expanding on its latest feature that made any public YouTube video potential fodder for its TikTok competitor, YouTube Shorts. Today, the company is announcing the launch of “Green Screen,” a tool that will allow users to use up to a 60-second video segment from any eligible YouTube video or YouTube Short as the background for their new original Short video.

The feature joins a number of other effects available now to YouTube Shorts creators, including the appearance-smoothing Retouch feature; a Lighting feature to boost dark environments; an Align feature that aligns the subject from the last frame of a video with a new video; a text and timeline editor to add messages over top of videos; various video filters; and, most recently, Cut — the tool that effectively made all of YouTube’s public content possible Shorts material.

As with Cut, YouTube says the new Green Screen remix feature can be used with any public YouTube video unless the creator has opted out. The only exception to this involves music videos which include copyrighted content from YouTube’s partners or others with visual claims. Also similar to Cut, any video created with Green Screen includes a link back to the original content creator for attribution’s sake.

On iOS, creators can also use the Green Screen tool in the Shorts camera to choose any photo or video from their device gallery as the background, the company says.

Image Credits: YouTube

YouTube’s decision to make its platform’s videos available for remixing is meant to be a competitive advantage as the competition with TikTok heats up. It’s notable that it made the feature opt-out by default, meaning videos are essentially up for grabs unless a creator says otherwise. So far, there hasn’t been a major backlash to this decision, as some creators feel that Shorts is just another way to get their channel discovered, or generally aren’t worried about Shorts eating into their own audience, as it’s a different type of viewing experience.

Given the integration with YouTube content, Green Screen makes sense as the next new video effect for Shorts. On TikTok, a similar feature is heavily used to allow creators to comment on and reference each other’s content. But in Shorts’ case, the original video creator isn’t necessarily a Shorts creator, too — they may only produce long-form content for YouTube proper. That could lessen the appeal of the Green Screen tool as a community conversation tool, as the person whose video is being referenced may not even participate in the Shorts community itself.

Google says the new Green Screen tool is beginning to roll out on iOS today and will come to Android soon.

Asked why the company was prioritizing iOS over Google’s own mobile platform, YouTube only replied that it was prioritizing the need to move quickly when launching the new features.

“Our priority is bringing the best experience to our creators as quickly as possible, and sometimes that means we bring particular features to one platform before another,” a spokesperson said.

The addition of Green Screen follows a rougher quarter for YouTube, where the company missed its projections for ad revenue, bringing in $6.87 billion, when it was forecast to pull in $7.51 billion. YouTube chalked this up to the lingering pandemic impacts, saying the slower growth is more of a reflection of last year’s gains. At the time, it also reported Shorts was now generating 30 billion views per day.

 

While (former) startups like Lemonade came along to attack the tired world of insurance, the travel insurance market is now coming in for the same treatment from the likes of Safetywing (covered by TC here) and Battleface.

In an ideal world, travel insurance would be easier to understand, would pay out quickly when things go wrong, and operate almost like Apple Pay or Google Pay in its simplicity. New ‘whole-trip travel insurance’ startup Faye – which exited stealth mode last month – hopes to bring that kind of vibe with its approach, and now it’s raised backing to do it.

The startup has pulled in $8 million in a seed funding round led by Viola Ventures and F2 Venture Capital. Also participating was Portage Ventures, Global Founders Capital (GFC) and former NBA player Omri Casspi.

It’s fair to say that most travel insurance products are built not so much for consumers as for distributors, where jargon-filled add-ons abound. Faye claims its approach is much simpler: customers are asked six questions to find the right plan.

The platform covers trips, health, belongings, and pets via an app that sends alerts, offers 24/7 customer support, and enables digital claims filing and electronic transfers of reimbursements to its Faye Wallet.

Co-founder and CEO Elad Schaffer said in a statement: “Travel insurance has become synonymous with lengthy, jargon-filled policies that leave travelers confused rather than well-informed… Faye is hitting the market as a solution to these pain points, at a time in which consumers are planning to travel more than ever before and are seeking solutions to look after them while on the road.”

Faye will also cover travelers if they contract COVID-19 pre-trip, as well as cancellations (subject to the terms of the plan), emergency medical expenses and trip interruption.

Omry Ben David, general partner at Viola Ventures, commented: “Faye is re-inventing the category of consumer travel insurance in the U.S. where data isn’t leveraged yet for bespoke, price-optimized coverage in an industry with very favorable growth characteristics and unit economics.”

The Faye team comprises Jeff Rolander (formerly at Allianz), Moran Treiser (formerly Lemonade) and Lauren Gumport (formerly Guesty).

Google’s developer conference Google I/O is back, which means that the company has a few things to announce. During the opening keynote, Google is expected to unveil new hardware products, new software updates and new features for Google’s ecosystem.

The conference starts at 10 a.m. PDT (1 p.m. on the East Coast, 6 p.m. in London, 7 p.m. in Paris) and you can watch the livestream right here on this page.

Rumor has it that Google could unveil the Pixel Watch. This isn’t the company’s first experience in the smartwatch space, but it represents a fresh start, with Google’s own hardware division leveraging Wear OS. If you’re a Pixel person, you can also expect some smartphone news and maybe new accessories.

More importantly, Google will likely share some news about its flagship services, such as Google Maps, Google’s search engine, YouTube and Google Play. It’s going to be interesting to see if Google has anything to share about Chrome and Android as well.

Whether you’re a Google user who relies a lot on Google’s ecosystem or a tech enthusiast who wants to see what’s next for Google, make sure to watch today’s keynote and read our coverage on TechCrunch.

Read more about Google I/O 2022 on TechCrunch

The UK government has confirmed it will move forward on a major ex ante competition reform aimed at Big Tech, as it set out its priorities for the new parliamentary session earlier today.

However, it has only said that draft legislation will be published over this period — kicking the prospect of passing updated competition rules for digital giants further down the road.

At the same time today it confirmed that a “data reform bill” will be introduced in the current parliamentary session.

This follows a consultation it kicked off last year to look at how the UK might diverge from EU law in this area, post-Brexit, by making changes to domestic data protection rules.

There has been concern that the government is planning to water down citizens’ data protections. Details the government published today, setting out some broad-brush aims for the reform, don’t offer a clear picture either way — suggesting we’ll have to wait to see the draft bill itself in the coming months.

Read on for an analysis of what we know about the UK’s policy plans in these two key areas… 

Ex ante competition reform

The government has been teasing a major competition reform since the end of 2020 — putting further meat on the bones of the plan last month, when it detailed a bundle of incoming consumer protection and competition reforms.

But today, in a speech setting out prime minister Boris Johnson’s legislative plans for the new session at the state opening of parliament, it committed to publish measures to “create new competition rules for digital markets and the largest digital firms”; also saying it would publish “draft” legislation to “promote competition, strengthen consumer rights and protect households and businesses”.

In briefing notes to journalists published after the speech, the government said the largest and most powerful platforms will face “legally enforceable rules and obligations to ensure they cannot abuse their dominant positions at the expense of consumers and other businesses”.

A new Big Tech regulator will also be empowered to “proactively address the root causes of competition issues in digital markets” via “interventions to inject competition into the market, including obligations on tech firms to report new mergers and give consumers more choice and control over their data”, it also said.

However, another key detail from the speech specifies that the forthcoming Digital Markets, Competition and Consumer Bill will only be put out in “draft” form over the parliament — meaning the reform won’t be speeding onto the statute books.

Instead, up to a year could be added to the timeframe for passing laws to empower the Digital Markets Unit (DMU) — assuming, of course, Johnson’s government survives that long. The DMU was set up in shadow form last year but does not yet have legislative power to make the planned “pro-competition” interventions with which policymakers intend to correct structural abuses by Big Tech.

(The government’s Online Safety Bill, for example — which was published in draft form in May 2021 — wasn’t introduced to parliament until March 2022; and remains at the committee stage of the scrutiny process, with likely many more months before final agreement is reached and the law passed. That bill was included in the 2022 Queen’s Speech so the government’s intent continues to be to pass the wide-ranging content moderation legislation during this parliamentary session.)

The delay to introducing the competition reform means the government has cemented a position lagging the European Union — which reached political agreement on its own ex ante competition reform in March. The EU’s Digital Markets Act is slated to enter into force next spring, by which time the UK may not even have a draft bill on the table. (Germany, meanwhile, passed an update to its competition law last year and has already designated Google and Meta as in scope of its ex ante rules.)

The UK’s delay will be welcomed by tech giants, of course, as it provides another parliamentary cycle to lobby against an ex ante reboot that’s intended to address competition and consumer harms in digital markets which are linked to giants with so-called “Strategic Market Status”.

This includes issues that the UK’s antitrust regulator, the CMA, has already investigated and confirmed (such as Google and Facebook’s anti-competitive dominance of online advertising); and others it suspects of harming consumers and hampering competition too (like Apple and Google’s chokepoint hold over their mobile app stores).

Any action in the UK to address those market imbalances now looks unlikely before 2024 at the earliest.

Recent press reports, meanwhile, have suggested Johnson may be going cold on the ex ante regime — which will surely encourage Big Tech’s UK lobbyists to seize the opportunity to spread self-interested FUD in a bid to totally derail the plan.

The delay also means tech giants will have longer to argue against the UK introducing an Australian-style news bargaining code — which the government appears to be considering for inclusion in the future regime.

One of the main benefits of the bill is listed as:

“Ensuring that businesses across the economy that rely on very powerful tech firms, including the news publishing sector, are treated fairly and can succeed without having to comply with unfair terms.”

“The independent Cairncross Review in 2019 identified an imbalance of bargaining power between news publishers and digital platforms,” the government also writes in its briefing note, citing a Competition and Markets Authority finding that “publishers see Google and Facebook as ‘must have’ partners as they provide almost 40 per cent of large publishers’ traffic”.

Major consumer protection reforms which are planned in parallel with the ex ante regime — including letting the CMA decide for itself when UK consumer law has been broken and fine violating platforms over issues like fake reviews, rather than having to take the slow route of litigating through the courts — are also on ice until the bill gets passed. So major ecommerce and marketplace platforms will also have longer to avoid hard-hitting regulatory action for failures to purge bogus reviews from their UK sites.

Consumer rights group Which? welcomed the government’s commitment to legislate to strengthen the UK’s competition regime and beef up powers to clamp down on tech firms that breach consumer law. However, it described it as “disappointing” that only a draft bill will be published in this parliamentary session.

“The government must urgently prioritise the progress of this draft Bill so as to bring forward a full Bill to enact these vital changes as soon as possible,” added Rocio Concha, Which? director of policy and advocacy, in a statement.

Data reform bill

In another major post-Brexit policy move, the government has been loudly flirting with ripping up protections for citizens’ data — or, at least, killing off cookie banners.

Today it confirmed it will move forward with ‘reforming’ the rules wrapping people’s data — just without spelling out the exact changes it plans to make. So where exactly the UK is headed on data protection remains unclear.

That said, in briefing notes on the forthcoming data reform bill, the government appears to be directing most focus at accelerating public sector data sharing instead of suggesting it will pass amendments that pave the way for unfettered commercial data-mining of web users.

Indeed, it claims that ensuring people’s personal data “is protected to a gold standard” is a core plank of the reform.

A section on the “main benefits” of the reform also notably lingers on public sector gains — with the government writing that it will be “making sure that data can be used to empower citizens and improve their lives, via more effective delivery of public healthcare, security, and government services”.

But of course the devil will be in the detail of the legislation presented in the coming months. 

Here’s what else the government lists as the “main elements” of the upcoming data reform bill:

  • Using data and reforming regulations to improve the everyday lives of people in the UK, for example, by enabling data to be shared more efficiently between public bodies, so that delivery of services can be improved for people.
  • Designing a more flexible, outcomes-focused approach to data protection that helps create a culture of data protection, rather than “tick box” exercises.

Discussing other “main benefits” for the reform, the government touts increased “competitiveness and efficiencies” for businesses, via a suggested reduction in compliance burdens (such as “by creating a data protection framework that is focused on privacy outcomes rather than box-ticking”); a “clearer regulatory environment for personal data use” which it suggests will “fuel responsible innovation and drive scientific progress”; “simplifying the rules around research to cement the UK’s position as a science and technology superpower”, as it couches it; and ensuring the data protection regulator (the ICO) takes “appropriate action against organisations who breach data rights and that citizens have greater clarity on their rights”.

The upshot of all these muscular-sounding claims boils down to whatever the government means by an “outcomes-focused” approach to data protection vs “tick-box” privacy compliance. (As well as what “responsible innovation” might imply.)

It’s also worth mulling what the government means when it says it wants the ICO to take “appropriate” action against breaches of data rights. Given the UK regulator has been heavily criticized for inaction in key areas like adtech, you could interpret that as the government intending the regulator to take more enforcement action over privacy breaches, not less.

(And its briefing note does list “modernizing” the ICO as a “purpose” for the reform — in order to “[make] sure it has the capabilities and powers to take stronger action against organisations who breach data rules while requiring it to be more accountable to Parliament and the public”.)

However, on the flip side, if the government really intends to water down Brits’ privacy rights — by, say, letting businesses overrule the need to obtain consent to mine people’s info via a more expansive legitimate interest regime for commercial entities to do what they like with data (something the government has been considering in the consultation) — then the question is how that would square with a top-line claim for the reform ensuring “UK citizens’ personal data is protected to a gold standard”?

The overarching question here is whose “gold standard” the UK is intending to meet. Brexiters might scream for their own yellow streak — but the reality is there are wider forces at play once you’re talking about data exports.

Despite Johnson’s government’s fondness for ‘Brexit freedom’ rhetoric, when it comes to data protection law the UK’s hands are tied by the need to continue meeting the EU’s privacy standards, which require an equivalent level of protection for citizens’ data outside the bloc — at least if the UK wants data to be able to flow freely into the country from the bloc’s ~447M citizens, i.e. to all those UK businesses keen to sell digital services to Europeans.

This free flow of data is governed by a so-called adequacy decision which the European Commission granted the UK in June last year, essentially on account that no changes had (yet) been made to UK law since it adopted the bloc’s General Data Protection Regulation (GDPR) in 2018 by incorporating it into UK law.

And the Commission simultaneously warned that any attempt by the UK to weaken domestic data protection rules — and thereby degrade fundamental protections for EU citizens’ data exported to the UK — would risk an intervention. Put simply, that means the EU could revoke adequacy — requiring all EU-UK data flows to be assessed for legality on a case-by-case basis, vastly ramping up compliance costs for UK businesses wanting to import EU data.

Last year’s adequacy agreement also came with a baked-in sunset clause of four years — meaning it will be up for automatic review in 2025. Ergo, the amount of wiggle room the UK government has here is highly limited, unless it’s truly intent on digging ever deeper into the lunatic sinkhole of Brexit by gutting this substantial, and actually expanding, sunlit upland of the economy (digital services).

The cost — in pure compliance terms — of the UK losing EU adequacy has been estimated at between £1BN-£1.6BN. But the true cost in lost business/less scaling would likely be far higher.

The government’s briefing note on its legislative program itself notes that the UK’s data market represented around 4% of GDP in 2020; also pointing out that data-enabled trade makes up the largest part of international services trade (accounting for exports of £234BN in 2019).

It’s also notable that Johnson’s government has never set out a clear economic case for tearing up UK data protection rules.

The briefing note continues to gloss over that rather salient detail — saying that analysis by the Department for Digital, Culture, Media and Sport (DCMS) “indicates our reforms will create over £1BN in business savings over ten years by reducing burdens on businesses of all sizes”; but without specifying exactly what regulatory changes it’s attaching those theoretical savings to.

And that’s important because — keep in mind — if the touted compliance savings are created by shrinking citizens’ data protections that risks the UK’s adequacy status with the EU — which, if lost, would swiftly lead to at least £1BN in increased compliance costs around EU-UK data flows… thereby wiping out the claimed “business savings” from ‘less privacy red tape’.

The government does cite a 2018 economic analysis by DCMS and a tech consultancy, called Ctrl-Shift, which it says estimated that the “productivity and competition benefits enabled by safe and efficient data flows would create a £27.8BN uplift in UK GDP”. But the keywords in that sentence are “safe and efficient”; whereas unsafe EU-UK data flows would face being slowed and/or suspended — at great cost to UK GDP…

The whole “data reform bill” bid does risk feeling like a bad-faith PR exercise by Johnson’s thick-on-spin, thin-on-substance government — i.e. to try to claim a Brexit ‘boon’ where there is, in fact, none.

See also this “key fact” which accompanies the government’s spiel on the reform — claiming:

“The UK General Data Protection Regulation and Data Protection Act 2018 are highly complex and prescriptive pieces of legislation. They encourage excessive paperwork, and create burdens on businesses with little benefit to citizens. Because we have left the EU, we now have the opportunity to reform the data protection framework. This Bill will reduce burdens on businesses as well as provide clarity to researchers on how best to use personal data.”

Firstly, the UK chose to enact those pieces of legislation after the 2016 Brexit vote to leave the EU. Indeed, it was a Conservative government (not led by Johnson at that time) that passed these “highly complex and prescriptive pieces of legislation”.

Moreover, back in 2017, the former digital secretary Matt Hancock described the EU GDPR as a “decent piece of legislation” — suggesting then that the UK would, essentially, end up continuing to mirror EU rules in this area because it’s in its interests to do so in order to keep data flowing.

Fast forward five years and the Brexit bombast may have cranked up to Johnsonian levels of absurdity, but the underlying necessity for the government to “maintain unhindered data flows”, as Hancock put it, hasn’t gone anywhere — assuming, that is, ministers haven’t abandoned the idea of actually trying to grow the economy.

But there again, the government lists creating a “pro-growth” (and “trusted”) data protection framework as a key “purpose” of the data reform bill — one that it claims can both reduce “burdens” for businesses and boost the economy. It just can’t tell you how it’ll pull that Brexit bunny out of the hat yet.