Steve Thomas - IT Consultant

Just over a year after launching a major project targeting thousands of sites blatantly flouting cookie tracking rules in Europe, local privacy campaign group noyb has fired off another batch of complaints targeting a hardcore of website operators that it says have ignored, or not fully acted upon, earlier warnings to bring their cookie consent banners into compliance with the EU’s legal standard for consent, as set out in the General Data Protection Regulation (GDPR).

Noyb says the latest batch of 226 complaints has been lodged with 18 data protection authorities (DPAs) around the bloc.

As with earlier actions by noyb, all the complaints relate to the most widely used cookie banner software, made by OneTrust. But it’s not the software itself that’s the issue — rather, the complaints target deceptive settings noyb found being applied, or cases where site users were offered no choice at all to deny tracking, in clear breach of the law around consent.

Deceptive cookie pop-ups have had a corrosive impact not only on the privacy rights of web users in the region, systemically stripping people of their right to protect their information, but they have also been very damaging for the reputation of EU data protection rules like the GDPR — enabling critics to blame the regulation for spawning a tsunami of annoying cookie banners despite the fact the law clearly outlaws consent theft via cynical tactics like injecting one-way friction or offering users zero opt-out ‘choice’.

The vast scale of cookie consent violations has, nonetheless, posed a major enforcement challenge for the bloc’s network of under-resourced data protection authorities — hence noyb stepping in with a smart and strategic approach to help clean up the “cookie banner terror” scourge, as its campaign couches it.

Given noyb’s focus on impact, and the extremely widespread nature of cookie consent problems, the campaign group has sought to minimize how many formal complaints it’s filing with regulators — so its partially automated compliance campaign entails sending initial complaints to the offending sites in question, offering help to rectify whatever dark patterns (or other bogus consent issues) noyb has identified.

It’s only sites that have repeatedly ignored these nudges and step-by-step compliance guidance that are now being targeted with formal complaints to the relevant oversight body.

“We want to ensure compliance, ideally without filing cases. If a company however continues to violate the law, we are ready to enforce users’ rights,” said Max Schrems, chairman at noyb, in a statement.

“After one year, we got to the hopeless cases that hardly react to any invitation or guidance. These cases will now have to go to the relevant authorities,” he added.

Thus far, noyb credits its cookie consent campaign with generating what it couches as a “large spill-over effect” — with not only directly targeted sites amending violating consent banners but some non-targeted sites also adapting their settings after hearing about the complaints. “This shows that enforcement ensures compliance beyond the individual case,” argues Schrems. “I guess many users have realized that for example more and more ‘reject’ buttons gradually appeared on many websites in the last year.”

Discussing progress to date, a spokeswoman for noyb also told us: “We have seen an increasing compliance rate in our regular scans (where we scan several thousands websites in Europe using the CMP OneTrust) after our first round of warnings last May. This is probably due to an increased awareness due to our complaints, the ‘fear’ that this law might actually be enforced and because Onetrust proactively informed their customers about our complaints and adjusted their standard settings to be ‘noyb compliant’.

“Therefore we consider those websites that still violate the GDPR despite all warnings as ‘hopeless’ cases. All of them are new cases, so none of the companies targeted already last year are in that batch.”

The so-called “hopeless” cases include a mix of (smaller) media sites, popular retailers and local pages, per noyb’s spokeswoman.

Asked for examples of pages which still violate “almost everything” (i.e. where cookie consent rules are concerned) more than a year after the group’s compliance campaign kicked off, she pointed to media sites including https://www.elle.com/ and https://www.menshealth.com/; recipe site www.delish.com; online travel agency booking.com; and fashion retailer aboutyou (in various EU countries).

Other high profile sites that are being targeted for formal complaints now — and which have remedied “at least some violations” (though not others), in noyb’s assessment — include football site fifa.com; cosmetics retailers rituals.com and clinique.at/de; and streaming giant hbo.com.

While noyb says “most” of the sites it’s formally complaining about now don’t provide users with an option to withdraw their consent to tracking, its spokeswoman noted: “Others have implemented a reject button (30% of all warned websites) but are still ignoring other aspects like deceptive designs.”

Noyb’s cookie complaints have already led to some regulatory action, with the European Data Protection Board (EDPB) establishing a special taskforce last year to coordinate responses to what the group suggests could end up being as many as 10,000 cookie consent complaints — although the first DPA decisions related to complaints it lodged last year are still pending.

“We hope for the coordinated approach by the EDPB taskforce,” said its spokeswoman, adding: “The Austrian DPA so far has been the most active one in processing the complaints followed by some of the German DPAs. We hope to receive the first decisions by the end of this year.”

Now that this final round of OneTrust complaints has been filed, the not-for-profit group says it will move on to sites using other so-called consent management platforms (CMPs) — expanding the scope of its automated complaint-cum-compliance platform to cover rival CMPs such as TrustArc, Cookiebot, Usercentrics and Quantcast.

So scores more sites which haven’t been caught up in noyb’s sweeps yet, despite operating blatantly bogus consent banners, will be on the receiving end of a pointed letter vis-a-vis their cookie compliance in the near future.

In parallel with firing off lots of these letters over the past year+, noyb has also been gathering data on the impact of the cookie complaint project — and plans to issue a report on what it’s learned later this year.

Separately, France’s DPA, the CNIL, has been pretty active on cookie consent enforcement — taking some tough action against a number of tech giants (Amazon, Facebook and Google), under the ePrivacy Directive, that has enabled it to issue some major fines over abusive cookie tracking practices — and which appears to have forced (some) reform.

The ePrivacy legal route has allowed the CNIL to circumvent the GDPR’s one-stop-shop mechanism, which critics blame for undermining enforcement of the bloc’s flagship data protection regulation, especially against Big Tech, by funnelling (and bottlenecking) complaints through a handful of so-called lead DPAs — Ireland being the biggest — on account of those few markets hosting large numbers of tech giants’ regional headquarters.

noyb’s approach of filing large batches of thematic GDPR complaints is another strategy to push back against enforcement delays by simultaneously looping in DPAs across the bloc to tackle an issue, encouraging coordination, joint working and (it hopes) a pipeline of decisions that defend European citizens’ rights.

A few months on from a tracking controversy hitting privacy-centric search veteran, DuckDuckGo, the company has announced it’s been able to amend terms with Microsoft, its search syndication partner, that had previously meant its mobile browsers and browser extensions were prevented from blocking advertising requests made by Microsoft scripts on third party sites.

In a blog post pledging “more privacy and transparency for DuckDuckGo web tracking protections”, founder and CEO Gabriel Weinberg writes: “Over the next week, we will expand the third-party tracking scripts we block from loading on websites to include scripts from Microsoft in our browsing apps (iOS and Android) and our browser extensions (Chrome, Firefox, Safari, Edge and Opera), with beta apps to follow in the coming month.”

“This expands our 3rd-Party Tracker Loading Protection, which blocks identified tracking scripts from Facebook, Google, and other companies from loading on third-party websites, to now include third-party Microsoft tracking scripts. This web tracking protection is not offered by most other popular browsers by default and sits on top of many other DuckDuckGo protections,” he added.


“Most browsers’ default tracking protection focuses on cookie and fingerprinting protections that only restrict third-party tracking scripts after they load in your browser. Unfortunately, that level of protection leaves information like your IP address and other identifiers sent with loading requests vulnerable to profiling. Our 3rd-Party Tracker Loading Protection helps address this vulnerability, by stopping most 3rd-party trackers from loading in the first place, providing significantly more protection,” Weinberg writes in the blog post.

“Previously, we were limited in how we could apply our 3rd-Party Tracker Loading Protection on Microsoft tracking scripts due to a policy requirement related to our use of Bing as a source for our private search results. We’re glad this is no longer the case. We have not had, and do not have, any similar limitation with any other company.”

“Microsoft scripts were never embedded in our search engine or apps, which do not track you,” he adds. “Websites insert these scripts for their own purposes, and so they never sent any information to DuckDuckGo. Since we were already restricting Microsoft tracking through our other web tracking protections, like blocking Microsoft’s third-party cookies in our browsers, this update means we’re now doing much more to block trackers than most other browsers.”
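The distinction Weinberg draws — refusing to load a tracker request at all, versus restricting what a script can do after it has loaded — can be sketched as a simple pre-request check. This is an illustrative simplification, not DDG’s actual implementation; the domains in the blocklist here are hypothetical examples (DDG’s real list is published on GitHub):

```python
from urllib.parse import urlparse

# Hypothetical tracker domains, for illustration only.
TRACKER_DOMAINS = {"bat.bing.com", "connect.facebook.net", "www.google-analytics.com"}

def should_block(request_url: str) -> bool:
    """Decide BEFORE the request is sent whether to block it outright.

    Blocking at load time means the tracker never receives the user's
    IP address or other request headers -- unlike cookie/fingerprinting
    protections, which only restrict a script after it has already
    loaded (and already seen those identifiers).
    """
    host = urlparse(request_url).hostname or ""
    # Match the host itself or any subdomain of a blocked domain.
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)
```

The key design point, per the quote above, is where in the pipeline the protection sits: a load-time block leaks nothing, while post-load restrictions can only limit what an already-delivered script does.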

Asked if DDG will be publishing its new contract with Microsoft, or whether it’s still bound by an NDA, Weinberg said: “Nothing else has changed and we don’t have other information to share on this.”

The carve-out for DDG’s search supplier was picked up in May via an independent audit conducted by privacy researcher, Zach Edwards.

At the time, DDG ‘fessed up to the anomaly but said it essentially had no choice but to accept Microsoft’s terms, although it also said it wasn’t happy about the restriction and hoped to be able to remove it in the future.

Asked whether the publicity generated by the controversy helped persuade the tech giant to relax the restriction on its ability to block Microsoft ad scripts on non-Microsoft sites, DDG referred us back to Microsoft.

When we put the same question to the tech giant, a spokeswoman told us:

Microsoft has policies in place to ensure that we balance the needs of our publishers with the needs of our advertisers to accurately track conversions on our network. We have been partnering with DuckDuckGo to understand the implications of this policy and we are pleased to have arrived at a solution that addresses those concerns.

In a transparency-focused step also being announced today, DDG said it’s publishing its tracker protection list — available on GitHub — though the company told us the information was available before, suggesting it’s just easier to find now.

It also sent us the following list of domains where it said it will be blocking Microsoft tracking requests:

Despite this expansion of DDG’s ability to block Microsoft tracking requests, there are still instances where Microsoft ad scripts are not blocked by DDG’s tools by default — related to processes used by advertisers to track conversions (i.e. to determine whether an ad click actually led to a purchase).

“To evaluate whether an ad on DuckDuckGo is effective, advertisers want to know if their ad clicks turn into purchases (conversions). To see this within Microsoft Advertising, they use Microsoft scripts from the bat.bing.com domain,” explains Weinberg in the blog post. “Currently, if an advertiser wants to detect conversions for their own ads that are shown on DuckDuckGo, 3rd-Party Tracker Loading Protection will not block bat.bing.com requests from loading on the advertiser’s website following DuckDuckGo ad clicks, but these requests are blocked in all other contexts. For anyone who wants to avoid this, it’s possible to disable ads in DuckDuckGo search settings.”
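The carve-out Weinberg describes is context-dependent: the same bat.bing.com request is blocked everywhere except on an advertiser’s own site immediately after a DuckDuckGo ad click. A hypothetical sketch of that rule (the real browser tracks this navigation state internally; the function signature here is invented for illustration):

```python
from urllib.parse import urlparse

def block_decision(request_url: str, followed_ddg_ad_click: bool) -> bool:
    """Return True if the request should be blocked.

    Per DDG's stated exception, bat.bing.com requests are normally
    blocked, but are allowed on the advertiser's own site when the
    page load followed a DuckDuckGo ad click -- so the advertiser
    can count the conversion.
    """
    host = urlparse(request_url).hostname or ""
    if host == "bat.bing.com":
        # Conversion-measurement carve-out.
        return not followed_ddg_ad_click
    # Other requests are handled by the broader blocklist logic.
    return False
```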

DDG says it wants to go further to protect user privacy around ad conversion tracking — but admits this won’t happen any time soon. In the blog post Weinberg writes that “eventually” it wants to be able to replace the current process for ad conversions checks by migrating to a new architecture for assessing ad effectiveness privately.

“To eventually replace the reliance on bat.bing.com for evaluating ad effectiveness, we’ve started working on an architecture for private ad conversions that can be externally validated as non-profiling,” he says.

DDG is by no means alone here. Across the industry, all sorts of moves are afoot to evolve or rethink adtech infrastructure in response to the privacy backlash — and to the rising regulatory risk attached to individual tracking — efforts such as Google’s multi-year push to replace support for tracking cookies in Chrome with an alternative adtech stack (aka its ‘Privacy Sandbox’ proposal, which remains a delayed work in progress).

“DuckDuckGo isn’t alone in trying to solve this issue; Safari is working on Private Click Measurement (PCM) and Firefox is working on Interoperable Private Attribution (IPA). We hope these efforts can help move the entire digital ad industry forward to making privacy the default,” adds Weinberg. “We think this work is important because it means we can improve the advertising-based business model that countless companies rely on to provide free services, making it more private instead of throwing it out entirely.”

Asked about the timeline for developing such an infrastructure, he says: “We don’t have a timeline to share right now but it’s not an imminent announcement.”

Despite DDG’s assertion that viewing ads via its browsers is “anonymous”, its ad disclosure page confirms that it passes some personal data (IP address and user agent string) to Microsoft, its ad partner — for “accounting purposes” (aka “to charge the advertiser and pay us for proper clicks, which includes detection of improper clicks”, as Weinberg puts it).

“Per our ad page, Microsoft has committed [that] ‘when you click on a Microsoft-provided ad that appears on DuckDuckGo, Microsoft Advertising does not associate your ad-click behavior with a user profile. It also does not store or share that information other than for accounting purposes,’” he says when pressed on what guarantees he has from Microsoft that user data passed for ad conversions doesn’t end up being repurposed for broader tracking and profiling of individuals.

In back and forth with TechCrunch, DDG also repeatedly emphasized that its policy states that Microsoft does not link this data to a behavioral profile (or, indeed, share a user’s actual IP address etc.).

However Weinberg concedes there are limits on how much control DDG can have over what happens to data once it’s passed — given, for example, the adtech ecosystem’s penchant for sharing (and synching) pseudonymized identifiers (e.g. hashes of identifiers) in order that digital activity may still be linked back to individual profiles, say after a few hops through a chain of third party data processors/enrichers, and thereby removing an earlier privacy screen… So, tl;dr, trying to shield your users’ privacy from prying third parties whilst operating in an ad ecosystem that’s been designed for pervasive surveillance (and allowed to sprawl all over the place) remains a massive firefight. 

“Staying anonymous ‘through the adtech ecosystem’ is a different story because once someone clicks on a site (whether or not they got there through DuckDuckGo search), they become subject to the website owner’s privacy policy and related practices,” Weinberg admits. “In our browsers, we try to limit that through our web privacy protections but we cannot control what the website owner (the ‘first party’) does, which could be sharing data with third-parties in the ad tech ecosystem.”

“The ad disclosure page makes clear viewing ads is anonymous and further covers ad clicks, which has a commitment from Microsoft to not profile users on ad click, which includes any behavioral profiling by them or others. This commitment includes not passing that data on to anyone,” DDG also claims.

“Our privacy policy states that viewing all search results (including ads) is anonymous, and Microsoft Advertising (or anyone else) does not get anything that can de-anonymize user searches at that time (including full IP address) in terms of being able to tie individual searches to individuals or together into a search history,” it adds.

In further developments being highlighted by the company today, DDG said it’s updated the Privacy Dashboard that’s displayed in its apps and extensions — to show “more information” about third-party requests, per its blog post.

“Using the updated Privacy Dashboard, users can see which third-party requests have been blocked from loading and which other third-party requests have loaded, with reasons for both when available,” Weinberg writes on that.

It has also relaunched its help page — with a promise that the overhauled content offers “a comprehensive explanation of all the web tracking protections we provide across platforms”.

“Users now have one place to look if they want to understand the different kinds of web privacy protections we offer on the platforms they use. This page also explains how different web tracking protections are offered based on what is technically possible on each platform, as well as what’s in development for this part of our product roadmap,” its blog post suggests.

A ruling put out yesterday by the European Union’s top court could have major implications for online platforms that use background tracking and profiling to target users with behavioral ads or to feed recommender engines that are designed to surface so-called ‘personalized’ content.

The impacts could be even broader — with privacy law experts suggesting the judgement could dial up legal risk for a variety of other forms of online processing, from dating apps to location tracking and more. Although they suggest fresh legal referrals are also likely as operators seek to unpack what could be complex practical difficulties arising from the judgement.

The referral to the Court of Justice of the EU (CJEU) relates to a Lithuanian case concerning national anti-corruption legislation. But the impact of the judgement is likely to be felt across the region as it crystallizes how the bloc’s General Data Protection Regulation (GDPR), which sets the legal framework for processing personal data, should be interpreted when it comes to data ops in which sensitive inferences can be made about individuals.

Privacy watchers were quick to pay attention — and are predicting substantial follow-on impacts for enforcement as the CJEU’s guidance essentially instructs the region’s network of data protection agencies to avoid a too-narrow interpretation of what constitutes sensitive data, implying that the bloc’s strictest privacy protections will become harder for platforms to circumvent.

In an email to TechCrunch, Dr Gabriela Zanfir-Fortuna, VP for global privacy at the Washington-based thinktank, the Future of Privacy Forum, sums up the CJEU’s “binding interpretation” as a confirmation that data that are capable of revealing the sexual orientation of a natural person “by means of an intellectual operation involving comparison or deduction” are in fact sensitive data protected by Article 9 of the GDPR.

The relevant bit of the case referral to the CJEU related to whether the publication of the name of a spouse or partner amounted to the processing of sensitive data because it could reveal sexual orientation. The court decided that it does. And, by implication, that the same rule applies to inferences connected to other types of special category data.

“I think this might have broad implications moving forward, in all contexts where Article 9 is applicable, including online advertising, dating apps, location data indicating places of worship or clinics visited, food choices for airplane rides and others,” Zanfir-Fortuna predicted, adding: “It also raises huge complexities and practical difficulties to catalog data and build different compliance tracks, and I expect the question to come back to the CJEU in a more complex case.”

As she has also noted, a similarly non-narrow interpretation of special category data processing recently got the gay hook-up app Grindr into hot water with Norway’s data protection agency, leading to a fine of €10M — around 10% of its annual revenue — last year.

GDPR allows for fines that can scale as high as 4% of global annual turnover (or up to €20M, whichever is greater). So any Big Tech platforms that fall foul of this (now) firmed-up requirement to gain explicit consent if they make sensitive inferences about users could face fines that are orders of magnitude larger than Grindr’s.
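For concreteness, the Article 83(5) ceiling is simply the greater of the two figures — a quick sketch of the arithmetic using illustrative turnover numbers:

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR Article 83(5) fine:
    the greater of EUR 20M or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A platform turning over EUR 10B faces a ceiling of EUR 400M,
# while a firm with EUR 50M turnover still faces the EUR 20M floor --
# which is why the cap scales so steeply for Big Tech.
```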

Ad tracking in the frame

Discussing the significance of the CJEU’s ruling, Dr Lukasz Olejnik, an independent consultant and security and privacy researcher based in Europe, was unequivocal in predicting serious impacts — especially for adtech.

“This is the single, most important, unambiguous interpretation of GDPR so far,” he told us. “It’s a rock-solid statement that inferred data, are in fact [personal] data. And that inferred protected/sensitive data, are protected/sensitive data, in line of Article 9 of GDPR.”

“This judgement will speed up the evolution of digital ad ecosystems, towards solutions where privacy is considered seriously,” he also suggested. “In a sense, it backs up the approach of Apple, and seemingly where Google wants to transition the ad industry [to, i.e. with its Privacy Sandbox proposal].”

Since May 2018, the GDPR has set strict rules across the bloc for processing so-called ‘special category’ personal data — such as health information, sexual orientation, political affiliation, trade union membership etc — but there has been some debate (and variation in interpretation between DPAs) about how the pan-EU law actually applies to data processing operations where sensitive inferences may arise.

This is important because large platforms have, for many years, been able to hold enough behavioral data on individuals to — essentially —  circumvent a narrower interpretation of special category data processing restrictions by identifying (and substituting) proxies for sensitive info.

Hence some platforms can (or do) claim they’re not technically processing special category data — while triangulating and connecting so much other personal information that the corrosive effect and impact on individual rights is the same. (It’s also important to remember that sensitive inferences about individuals do not have to be correct to fall under the GDPR’s special category processing requirements; it’s the data processing that counts, not the validity or otherwise of sensitive conclusions reached; indeed, bad sensitive inferences can be terrible for individual rights too.)

This might entail an ad-funded platform using a cultural or other type of proxy for sensitive data to target interest-based advertising, or to recommend similar content it thinks the user will also engage with. Examples of inferences could include using the fact a person has liked Fox News’ page to infer they hold right-wing political views; or linking membership of an online Bible study group to holding Christian beliefs; or the purchase of a stroller and cot, or a trip to a certain type of shop, to deduce a pregnancy; or inferring that a user of the Grindr app is gay or queer.

For recommender engines, algorithms may work by tracking viewing habits and clustering users based on these patterns of activity and interest, in a bid to maximize engagement with the platform. Hence the AIs of a big-data platform like YouTube can populate a sticky sidebar of other videos enticing you to keep clicking, or automatically select something ‘personalized’ to play once the video you actually chose to watch comes to an end. But, again, this type of behavioral tracking seems likely to intersect with protected interests and therefore, as the CJEU ruling underscores, to entail the processing of sensitive data.
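A toy sketch of that clustering dynamic, using invented engagement numbers and a bare-bones nearest-centroid assignment (not any platform’s actual algorithm): users are grouped purely on behavior, yet the resulting clusters can still end up functioning as proxies for protected traits without any sensitive label ever being stored.

```python
def nearest(v, centroids):
    """Index of the centroid closest to vector v (squared Euclidean)."""
    dists = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centroids]
    return dists.index(min(dists))

# Per-user engagement with three content themes (invented numbers).
users = {
    "u1": [0.9, 0.1, 0.0],
    "u2": [0.8, 0.2, 0.1],
    "u3": [0.1, 0.9, 0.8],
    "u4": [0.0, 0.8, 0.9],
}

# Two clusters seeded from u1's and u3's engagement vectors.
cents = [users["u1"], users["u3"]]
assignments = {u: nearest(v, cents) for u, v in users.items()}
# u2 lands with u1 and u4 with u3: the cluster itself -- not any
# explicit label -- is what can correlate with a protected category.
```

The legal point the ruling makes is that if such a cluster is capable of revealing, say, sexual orientation by deduction, the processing falls under Article 9 regardless of whether the platform ever names the trait.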

Facebook, for one, has long faced regional scrutiny for letting advertisers target users based on interests related to sensitive categories like political beliefs, sexuality and religion without asking for their explicit consent — which is the GDPR’s bar for (legally) processing sensitive data.

The tech giant now known as Meta has avoided direct sanction in the EU on this issue so far, despite being the target of a number of forced consent complaints — some of which date back to the GDPR coming into application more than four years ago. (A draft decision by Ireland’s DPA last fall, apparently accepting Facebook’s claim that it can entirely bypass consent requirements to process personal data by stipulating that users are in a contract with it to receive ads, was branded a joke by privacy campaigners at the time; the procedure remains ongoing, as a result of a review process by other EU DPAs — which, campaigners hope, will ultimately take a different view of the legality of Meta’s consent-less tracking-based business model. But that particular regulatory enforcement grinds on.)

In recent years, as regulatory attention — and legal challenges and privacy lawsuits — have dialled up, Facebook/Meta has made some surface tweaks to its ad targeting tools, announcing towards the end of last year, for example, that it would no longer allow advertisers to target sensitive interests like health, sexual orientation and political beliefs.

However it still processes vast amounts of personal data across its various social platforms to configure “personalized” content users see in their feeds. And it still tracks and profiles web users to target them with “relevant” ads — without providing people with a choice to deny that kind of intrusive behavioral tracking and profiling. So the company continues to operate a business model that relies upon extracting and exploiting people’s information without asking if they’re okay with that.

A tighter interpretation of existing EU privacy laws, therefore, poses a clear strategic threat to an adtech giant like Meta.

YouTube’s parent, Google/Alphabet, also processes vast amounts of personal data — both to configure content recommendations and for behavioral ad targeting — so it too could also be in the firing line if regulators pick up the CJEU’s steer to take a tougher line on sensitive inferences. Unless it’s able to demonstrate that it asks users for explicit consent to such sensitive processing. (And it’s perhaps notable that Google recently amended the design of its cookie consent banner in Europe to make it easier for users to opt out of that type of ad tracking — following a couple of tracking-focused regulatory interventions in France.)

“Those organisations who assumed [that inferred protected/sensitive data, are protected/sensitive data] and prepared their systems, should be OK. They were correct, and it seems that they are protected. For others this [CJEU ruling] means significant shifts,” Olejnik predicted. “This is about both technical and organisational measures. Because processing of such data is, well, prohibited. Unless some significant measures are deployed. Like explicit consent. This in technical practice may mean a requirement for an actual opt-in for tracking.”

“There’s no conceivable way that the current status quo would fulfil the needs of GDPR Article 9(2) paragraph by doing nothing,” he added. “Changes cannot happen just on paper. Not this time. DPAs just got a powerful ammunition. Will they want to use it? Keep in mind that while this judgement came this week, this is how the GDPR, and EU data protection law framework, actually worked from the start.”

The EU does have incoming regulation that will further tighten the operational noose around the most powerful ‘Big Tech’ gatekeepers, plus more rules for so-called very large online platforms (VLOPs): the Digital Markets Act (DMA) and the Digital Services Act (DSA), respectively, are set to come into force from next year — with the goal of levelling the competitive playing field around Big Tech, and dialling up platform accountability for online consumers more generally.

The DSA even includes a provision that VLOPs that use algorithms to determine the content users see (aka “recommender systems”) will have to provide at least one option that is not based on profiling — so there is already an explicit requirement for a subset of larger platforms to give users a way to refuse behavioral tracking looming on the horizon in the EU.

But privacy experts we spoke to suggested the CJEU ruling will essentially widen that requirement to non-VLOPs too. Or at least those platforms that are processing enough data to run into the associated legal risk of their algorithms making sensitive inferences — even if they’re not consciously instructing them to (tl;dr, an AI blackbox must comply with the law, too).

Both the DSA and DMA will also introduce a ban on the use of sensitive data for ad targeting — which, combined with the CJEU’s confirmation that sensitive inferences are sensitive data, suggests there will be meaningful heft to an incoming, pan-EU restriction on behavioral advertising which some privacy watchers had worried would be all-too-easily circumvented by adtech giants’ data-mining, proxy-identifying usual tricks.

Reminder: Big Tech lobbyists concentrated substantial firepower to successfully see off an earlier bid by EU lawmakers, last year, for the DSA to include a total ban on tracking-based targeted ads. So anything that hardens the limits that remain is important.

Behavioral recommender engines

Dr Michael Veale, an associate professor in digital rights and regulation at UCL’s faculty of law, predicts especially “interesting consequences” flowing from the CJEU’s judgement on sensitive inferences when it comes to recommender systems — at least for those platforms that don’t already ask users for their explicit consent to behavioral processing which risks straying into sensitive areas in the name of serving up sticky ‘custom’ content.

One possible scenario is platforms will respond to the CJEU-underscored legal risk around sensitive inferences by defaulting to chronological and/or other non-behaviorally configured feeds — unless or until they obtain explicit consent from users to receive such ‘personalized’ recommendations.

“This judgement isn’t so far off what DPAs have been saying for a while but may give them and national courts confidence to enforce,” Veale predicted. “I see interesting consequences of this judgment in the area of recommendations online. For example, recommender-powered platforms like Instagram and TikTok likely don’t manually label users with their sexuality internally — to do so would clearly require a tough legal basis under data protection law. They do, however, closely observe how users interact with the platform, and mathematically cluster together user profiles with certain types of content. Some of these clusters are clearly related to sexuality, and male users clustered around content that is aimed at gay men can be confidently assumed not to be straight. From this judgment, it can be argued that such cases would need a legal basis to process, which can only be refusable, explicit consent.”

As well as VLOPs like Instagram and TikTok, he suggests a smaller platform like Twitter can’t expect to escape such a requirement thanks to the CJEU’s clarification of the non-narrow application of GDPR Article 9 — since Twitter’s use of algorithmic processing for features like so-called ‘top tweets’ or other users it recommends to follow may entail processing similarly sensitive data (and it’s not clear whether the platform explicitly asks users for consent before it does that processing).

“The DSA already allows individuals to opt for a non-profiling based recommender system but only applies to the largest platforms. Given that platform recommenders of this type inherently risk clustering users and content together in ways that reveal special categories, it seems arguable that this judgment reinforces the need for all platforms that run this risk to offer recommender systems not based on observing behaviour,” he told TechCrunch.

In light of the CJEU cementing the view that sensitive inferences do fall under GDPR Article 9, a recent attempt by TikTok to remove European users’ ability to consent to its profiling — by seeking to claim a legitimate interest to process the data — looks like extremely wishful thinking, given how much sensitive data TikTok’s AIs and recommender systems are likely to be ingesting as they track usage and profile users.

TikTok’s plan was fairly quickly pounced upon by European regulators, in any case. And last month — following a warning from Italy’s DPA — it said it was ‘pausing’ the switch, so the platform may have decided the legal writing is on the wall for a consentless approach to pushing algorithmic feeds.

Yet given Facebook/Meta has not (yet) been forced to pause its own trampling of the EU’s legal framework around personal data processing, such alacritous regulatory attention almost seems unfair. (Or unequal at least.) But it’s a sign of what’s finally — inexorably — coming down the pipe for all rights violators, whether they’re long at it or just now attempting to chance their hand.

Sandboxes for headwinds

On another front, Google’s repeatedly delayed plan to deprecate support for behavioral tracking cookies in Chrome does appear more naturally aligned with the direction of regulatory travel in Europe.

Question marks remain, though, over whether the alternative ad targeting proposals it’s cooking up (under close regulatory scrutiny in Europe) will pass a dual review process factoring in both competition and privacy oversight. But, as Veale suggests, non-behavior based recommendations — such as interest-based targeting via whitelisted topics — may be less risky, at least from a privacy law point of view, than trying to cling to a business model that seeks to manipulate individuals on the sly, by spying on what they’re doing online.

Here’s Veale again: “Non-behaviour based recommendations based on specific explicit interests and factors, such as friendships or topics, are easier to handle, as individuals can either give permission for sensitive topics to be used, or could be considered to have made sensitive topics ‘manifestly public’ to the platform.”

So what about Meta? Its strategy — in the face of what senior execs have been forced to publicly admit, for some time now, are rising “regulatory headwinds” (euphemistic investor-speak which, in plainer English, signifies a total privacy compliance horrorshow) — has been to elevate a high-profile former regional politician, the ex-UK deputy PM and MEP Nick Clegg, to be its president of global affairs, in the hopes that sticking a familiar face at its top table, one who makes metaverse ‘jam tomorrow’ jobs-creation promises, will persuade local lawmakers not to enforce their own laws against its business model.

But as the EU’s top judges weigh in with more jurisprudence defending fundamental rights, Meta’s business model looks very exposed — sitting on legally challenged ground whose claimed justifications are surely on their last spin cycle before a long overdue rinsing kicks in, in the form of major GDPR enforcement. Its bet that Clegg’s local fame (or infamy) would score serious influence over EU policymaking, meanwhile, always looked closer to cheap trolling than a solid, long-term strategy.

If Meta was hoping to buy itself yet more time to retool its adtech for privacy — as Google claims to be doing with its Sandbox proposal — it’s left it exceptionally late to execute what would have to be a truly cleansing purge.

As regulation, platform dynamics and consumer choice continue to eat into the adtech stalwart known as cookies, a gap is opening in the market for advertising solutions that work well without relying on cookie functionality. Today, an adtech startup out of Spain that’s doing just that has raised a big round of funding to double down on the opportunity.

Seedtag, a contextual advertising startup that uses AI tools both to “read” the content on a page to match it up with advertisers’ aims and to subsequently track how those ads perform, has raised “over” €250 million (more than $252 million; the exact amount was unspecified). The money is coming in the form of an equity investment from a single investor, Advent International, and it will be used to help the company expand beyond Europe, specifically deeper into the U.S.

Seedtag was co-founded and is co-led by Jorge Poyatos and Albert Nieto, two ex-Googlers, and as part of the investment Nieto will be relocating to the U.S. to help grow the business there.

“We’re very excited about this partnership with Advent,” Nieto and Poyatos said in a joint statement. “This investment will massively accelerate our US expansion, boost our growth and reinforce our team and the development of our technology. This move further supports our mission of building the global leading platform for contextual advertising, offering an effective solution for cookieless advertising on the open web.”

The company is not disclosing its valuation with the round, but the investment is a significant step up for it. Founded in Madrid in 2014, Seedtag had raised only around $46 million over the past eight years, with past investors including Intelectium Business, Oakley Capital, All Iron Ventures and Adara Ventures. Those past investors are remaining shareholders with this round, along with the two co-founders. In another signal of its progress, it’s picked up a number of big-name clients, including the likes of Unilever brands, LG, Levi’s and more.

The situation for these brands is that how they connect with consumers, raise awareness among them and even deliver their products has been changing drastically in the last several years, pushed along by a massive swing towards digital screen usage, shifting data protection and privacy priorities, and advances in technology, among other trends. And just as many of them were getting their heads around the move away from static, analogue marketing campaigns, their understanding of how they can and should use digital platforms has changed, too.

One of the casualties of that shift has been the cookie — the unit built to track what users are doing online so that more relevant information could be presented to them, based on that activity — which has fallen foul of privacy and security experts, and subsequently of regulators and platform operators. As a result, cookies are now being phased out.

Seedtag is part of a wave of tech companies building what they like to describe as “privacy first” alternatives in advertising, typically solutions that continue to allow companies to serve ads in relevant places on programmatic platforms, but without collecting the kind of data that previously would have been needed to do so.

Seedtag has built what it describes as “contextual AI technology,” branded “LIZ,” which uses AI to do some of the work that cookies might have done in the past: it “reads” what is on a page or within a site, rather than tracking what a user is browsing across the web, combines that with its own algorithms to determine what kinds of interests that particular user might have, and serves ads relevant to that particular experience, which appear within a particular piece of content.
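Seedtag hasn’t published how LIZ works under the hood, but the general idea of contextual matching can be sketched in a few lines. Everything below is illustrative and invented (the campaigns, keywords and scoring rule are assumptions, not Seedtag’s method): the ad is chosen from the words on the page, not from a profile of the user.

```python
# Illustrative-only sketch of contextual ad matching: score hypothetical
# campaigns against the page text, so targeting derives from the page
# being read rather than from cross-site tracking of the user.
import re
from collections import Counter

ADS = {  # hypothetical advertiser campaigns and their interest keywords
    "running_shoes": {"run", "marathon", "training", "shoes"},
    "coffee_maker": {"coffee", "espresso", "brew", "kitchen"},
}

def tokenize(text):
    """Lowercase word counts for a chunk of page text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_ad(page_text):
    tokens = tokenize(page_text)
    # Score = how often each campaign's keywords appear on the page.
    scores = {ad: sum(tokens[w] for w in kws) for ad, kws in ADS.items()}
    return max(scores, key=scores.get)

page = "Training plans for your first marathon: which shoes to run in"
print(best_ad(page))  # matches the running campaign to a running article
```

Real contextual engines use ML models rather than keyword counts, but the privacy-relevant property is the same: no browsing history or per-user identifier is needed as an input.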

What is less clear is how Seedtag tracks the effectiveness of that ad after it has been seen, if it’s not using cookies. (I’ve asked and will update as and when I hear back on this point.)

In any case, as cookies become a more problematic, and definitely less effective, route for tracking user interest and intent, more attention will shift to a wider set of tools — and to the companies building them. The two co-founders both worked in industry analytics and strategy at Google, so they understand not just what adtech companies like Google have built but what advertisers are looking to do, giving them a unique position from which to answer the big questions about what is lacking in the market today, and how to address it.

“Seedtag has established itself as a leading player in Europe and Latin America in the very dynamic contextual advertising sector. We are delighted to partner with Jorge and Albert as they continue to build on this momentum,” said Gonzalo Santos, MD at Advent International and Head of Spain, in a statement. “With our international presence and deep sector expertise, Advent will work with the Seedtag management team to further expand the business internationally. We look forward to supporting this hugely exciting business to grow and scale-up and to taking it to the next level.”

Snap, the parent company of the popular Snapchat social media service, reported earnings last week that investors rejected. In the wake of its second-quarter financial reporting, shares of Snap cratered from $16.81 Thursday afternoon, before its earnings report, to around $10 per share as of this morning.

Snap was not the only victim of its lackluster earnings digest — other companies that make money from advertising saw their share prices dip on concerns that the social network was not an outlier. Alphabet, Meta and Pinterest also took blows, cutting their worth ahead of their own earnings disclosures as investors lowered their hopes for ad-based incomes.


The Exchange explores startups, markets and money.

Read it every morning on TechCrunch+ or get The Exchange newsletter every Saturday.


Given the sheer number of mega-tech companies that are betting on the advertising market, the news matters. Mix in the fact that startups are also pursuing ads as a monetization lever, and concerns about the health of advertising spending matter to tech companies big and small.

Consumer rights groups in Europe have filed a new series of privacy complaints against Google — accusing the advertising giant of deceptive design around the account creation process which they say steers users into agreeing to extensive and invasive processing of their data.

The tech giant profiles account holders for ad targeting purposes — apparently relying on user consent as its legal basis. But the EU’s flagship data protection law, the General Data Protection Regulation (GDPR), bakes in a requirement for privacy by design and default, as well as setting clear conditions around how consent must be gathered for it to be lawful.

Hence the consumer groups’ beef — if deceptive design by Google is tricking users into accepting its tracking.

They argue the design choices the tech giant deploys around account creation make it far easier for users to agree to Google’s processing of their information to target them with “personalized” ads than to deny consent to its profiling of them for behavioral advertising.

Fast track to being tracked

The complaints highlight how more privacy-friendly options — described by Google as “manual personalization” — require users to take five steps and ten clicks (“grappling with information that is unclear, incomplete, and misleading”, as they put it); whereas it offers a one-click “Express personalisation” option that activates all the tracking, making it terrible for privacy.

They also point out that Google does not provide consumers with the option to turn all tracking ‘off’ in one click, further noting that Google requires account creation to use certain of its own products, such as when setting up an Android smartphone.

In other cases, users may voluntarily create a Google account — but, either way, the tech giant still presents skewed options nudging consumers to agree to its tracking of them.

“Regardless of the path the consumer chooses, Google’s data processing is un-transparent and unfair, with consumers’ personal data being used for purposes which are vague and far reaching,” the complainants also argue in a press release.

The series of GDPR complaints is being coordinated by umbrella consumer group BEUC, aka the European Consumer Organisation.

Per BEUC, complaints have been filed to data protection agencies across EU Member States and markets, including by its member organizations in France, the Czech Republic, Norway, Greece and Slovenia.

It also notes that its German member, the vzbv, has written a warning letter to Google — ahead of potentially filing a civil lawsuit. While consumer groups in the Netherlands, Denmark and Sweden have written to their national DPAs to alert them to the practices, it adds.

Commenting on the action in a statement, Ursula Pachl, deputy DG of BEUC, said:

“Contrary to what Google claims about protecting consumers’ privacy, tens of millions of Europeans have been placed on a fast track to surveillance when they signed up to a Google account. It takes one simple step to let Google monitor and exploit everything you do. If you want to benefit from privacy-friendly settings, you must navigate through a longer process and a mix of unclear and misleading options. In short, when you create a Google account, you are subjected to surveillance by design and by default. Instead, privacy protection should be the default and easiest choice for consumers.”

This is not the first privacy-related complaint EU consumer rights groups have made about Google’s practices. They also raised a complaint focused on its collection of location data back in 2018 — but it took until February 2020 for Google’s lead EU data supervisor, Ireland’s Data Protection Commission (DPC), to open an enquiry. And, more than two years later, that probe remains ongoing.

Back in May, the DPC’s deputy commissioner, Graham Doyle, told TechCrunch it was expecting to submit a draft decision on the Google location data enquiry to other DPAs for review “over the coming months”. However if there is disagreement over Ireland’s approach it could add many more months before agreement on a final, consensus decision is reached. So a resolution of that long-running complaint may still not arrive this year.

The DPC also still hasn’t issued decisions on other long-running GDPR complaints against Google — such as a major complaint about its adtech, which it began investigating in May 2019 and over which it is now being sued for inaction.

Another complaint — against Google’s use of so-called ‘forced consent’ on its Android mobile platform — dates back to May 2018, although it’s not clear if the DPC ever opened an enquiry in that case. France’s data protection watchdog, the CNIL, proceeded to investigate — and fined Google $57M back in January 2019 over breaches of transparency and consent attached to how it operates Android. (The CNIL decided it had competence in that case since Android-related decisions were likely taken in the US, rather than in Dublin, where Google’s regional HQ is based.)

But Ireland has yet to issue a single GDPR decision against Google.

BEUC is not hiding its frustration at the DPC’s lack of enforcement over complaints against the tech giant.

“Google is a repeat offender,” said Pachl. “It is more than three years since we filed complaints against Google’s location-tracking practices and the Irish DPC in charge has still not issued a decision on the case. Meanwhile Google’s practices have not changed in essence. The tech giant still carries out continuous tracking and profiling of consumers and its practices set the tone for the rest of the market.”

“We need swift action from the authorities because having one of the biggest players ignoring the GDPR is unacceptable,” she added. “This case is of strategic importance for which cooperation among data protection authorities across the EU must be prioritised and supported by the European Data Protection Board.”

Issues around Google’s tracking of account users are separate from the advertising giant’s cookie-based tracking — where it deploys technologies to track users across third-party websites and apps.

The latter process has been the subject of other EU complaints that have led to some enforcement in recent years, with France’s data protection watchdog hitting Google with fines approaching $300M for cookie tracking-related breaches under the bloc’s ePrivacy Directive — after which Google made some changes to the cookie consent banner it shows web users in Europe.

Strategic complaint

Pachl’s remark about the Google account sign-up complaint being of “strategic importance” refers to BEUC’s expectation that the case will trigger the launch of a procedure under the GDPR’s cooperation mechanism (aka Article 60) — which it hopes will function more smoothly than it has done since 2018, when the Google location data complaint was filed.

The reason BEUC is hoping for smoother sailing now is an agreement EU DPAs reached in April — aka the “Vienna declaration” — in which they committed to enhance their enforcement cooperation on cross-border GDPR cases of “strategic importance”.

A complaint against a tech giant like Google clearly hits that bar. But the older, Google location data complaint has been saddled with a number of cooperation-related issues which have contributed to slowing down investigation and delaying a decision in that case.

Discussing what changes BEUC hopes to see being applied by regulators in tackling this fresh cross-border Google complaint, David Martin Ruiz, team leader for digital policy at the organization, told us: “We expect that the treatment of the complaints is prioritised as it touches upon practices by a major market player in the surveillance economy which affect millions of Europeans. The first time it took around 6 months just to name the lead authority. Also, we expect better, closer cooperation among the authorities, for example in terms of checking the admissibility of the complaints, and that this is done only once by the authority which receives the complaints. Of course, we expect that closer cooperation and strategic prioritisation by the authorities involved leads to a swift, comprehensive investigation of the complaints and efficient enforcement.”

Still, Ruiz declined to offer a prediction for how much faster the revised cooperation procedure will be able to deliver enforcement against Google, saying: “It is hard to put a concrete number on this but we certainly hope it takes less than the one that is ongoing, and we are not here 3 years from now still waiting for a draft decision.”

The European Commission, which has also been critical of adtech giants’ approach to compliance with EU privacy laws, recently defended slower regulatory enforcements in these major, cross border cases.

In a letter to the European ombudsperson — which has been looking into the EU executive’s monitoring of the GDPR following complaints about the Commission’s own oversight of the regulation — justice commissioner, Didier Reynders, likened the level of complexity involved in these big investigations to antitrust cases, writing:

“… it is important to make a distinction between cases which are relatively straightforward and do not require extensive investigations and cases which require complex legal and economic assessment or pose novel issues. Those complex cases, for instance those touching on issues relating to the business model of big tech multinational companies, might require several months or years of investigations, similarly to what happens for competition law investigations. This is particularly relevant for Ireland since many of such companies have their main establishment in this Member State.”

Responding to Reynders’ point, Ruiz told TechCrunch: “We agree and understand that these are complex issues and the authorities need time to build strong cases. However, we have seen problems that go beyond the time it takes to investigate these cases (e.g. a DPA narrowing down the scope of complaints when deciding to open their own investigation). Moreover, a lot of the big complaints that are taking years are actually not normal complaints, in the sense that they come already backed with a lot of legal analysis and factual evidence, aiming to facilitate the tasks of the DPAs. Also, of course, the time it takes to resolve these cases is also an illustration of deeper issues, like a lack of sufficient resources. Hopefully, strengthened cooperation and strategic prioritisation, as per the Vienna declaration, will help reduce the time it takes to investigate these cases. Complexity and the time it takes to investigate cannot be an excuse for inaction.”

BEUC isn’t calling for major revisions to GDPR to solve the problem of timely enforcement against Big Tech. But it is pushing for DPAs to make a whole series of process changes, individually and collectively, in order to address issues like the bottleneck of cases linked to the regulation’s one-stop-shop/lead data supervisor structure, which has enabled the problem of forum shopping.

“In a nutshell, regarding Big Tech, the first step is to stop the ‘bottleneck’,” he said. “Basically, DPAs, in particular one DPA which has oversight over many of the Big Tech companies, needs to deliver decisions on the open cases. And both the lead DPA, and the rest of the DPAs in the EDPB, need to be strict and ambitious in their interpretation and application of the rules. Also, if the lead DPA is not delivering the decisions, the others must make full use of their powers and take urgent measures. There needs to be a clear signal to Big Tech that window dressing and cosmetic transparency measures won’t do anymore. There are some fundamental issues in their core business practices that must be addressed, because they run contrary to the very essence of the GDPR.”

“Of course it is a concern that enforcement does not move as fast as market practices, and companies are changing things all the time. It is very important to underline that a company tweaking and correcting something should not erase past infringements and leave them unpunished, especially if they have been going on for years and they have affected millions of people. Otherwise, it is a very dangerous signal we are sending to companies,” he added. “We would be telling them ‘it is ok to infringe the GDPR as long as you are not caught, and if you are caught, just fix it quickly and there will be no consequences.’ This is the opposite of what should happen. Infringements must have consequences. Otherwise there is no justice, and no deterrent effects.”

As Google works on reconfiguring its adtech stack to move away from cookie-based ad targeting to something else that’s not yet fixed but which it claims will be better for individual web users’ privacy — and after Apple’s move last year to lock down third-party tracking of app users on iOS, also on a claim that it’s better for user privacy — a number of telcos in Europe are sniffing opportunity to press in the polar opposite direction.

In recent months it’s emerged that several telcos in the region are testing what they describe as a “cross-operator infrastructure for digital advertising and digital marketing” — aka TrustPid, as they’re branding the ad targeting initiative — although, as is customary with respawning adtech, they’re claiming their approach is “secure and privacy-friendly.”

Users of mobile networks — who pay their hard-earned money to get cellular connectivity, not to be clobbered with (yet) more consent pop-up spam and/or be ad-stalked around the internet — may well take a very different view, as they wonder how many times they’re going to have to keep slaying the tracking zombie.

EU privacy regulators are also on early alert, having fielded complaints and/or raised concerns over the telcos’ approach — which suggests regulatory intervention could follow if carriers decide to move ahead with a full launch.

The carriers are dubbing their plan a “counter-design to third-party cookies” — and say it involves the creation of “pseudo-anonymous tokens” that are linked to the mobile device user’s IP and mobile phone number (which is classified as personal data under EU law).

The ‘twist,’ if you can call it that, is that different tokens are generated for each ad partner — which they claim “limits” the merging of data from different ad partners to create profiles on customers. But individual level ad targeting is still individual level ad targeting. (And consent spam may still be unlawfully attention sapping.)

The telcos involved in TrustPid are proposing to manage — and presumably monetize — advertisers’ access to this network-based infrastructure.

Technical details of how the tracking-based targeting is intended to work in practice are not immediately clear — but here’s how Vodafone, which is leading the initiative, explains the approach online:

  • Your mobile number and IP address will be used by your network provider, e.g. Vodafone or Deutsche Telekom, to generate a pseudonymous network identifier based on which we generate your pseudonymous unique token (“TrustPid”). The IP address is considered traffic data. Traffic data is personal data processed while delivering a telecommunications service.
  • We use this TrustPid to create additional marketing tokens for the websites of advertisers and publishers you visit (“website specific tokens”). Advertisers and publishers aren’t able to identify you as a person via the website specific tokens. Where you have provided consent, advertisers and publishers will use the website specific tokens to provide you with personalised online marketing or conduct analytics.
  • We will keep a list of advertisers and publishers that you have consented to provide you with personalised online marketing or conduct analytics based on your TrustPid in order to show you this list via our Privacy Portal so you can manage your consent for those parties at any time.
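The bullet points above describe a two-stage derivation: a network identifier from the subscriber's number and IP, then a different token per ad partner. TrustPid's actual construction has not been published, so the sketch below is a guess at the shape of such a scheme, assuming HMAC-style key derivation; the secret, numbers and partner names are all invented.

```python
# Hypothetical sketch of per-partner tokenization as described in the
# TrustPid materials. NOT the actual implementation, which is unpublished.
import hashlib
import hmac

CARRIER_SECRET = b"carrier-side secret key"  # assumption: carrier holds a key

def network_identifier(msisdn: str, ip: str) -> bytes:
    # Pseudonymous network identifier derived from the phone number and IP
    # (both personal data under EU law).
    msg = f"{msisdn}|{ip}".encode()
    return hmac.new(CARRIER_SECRET, msg, hashlib.sha256).digest()

def website_token(net_id: bytes, partner: str) -> str:
    # A different token per ad partner, so partners can't trivially join
    # their data on a shared identifier.
    return hmac.new(net_id, partner.encode(), hashlib.sha256).hexdigest()[:16]

nid = network_identifier("+491701234567", "203.0.113.7")
t_pub1 = website_token(nid, "publisher-one.example")
t_pub2 = website_token(nid, "publisher-two.example")
print(t_pub1 != t_pub2)  # per-partner tokens differ
# But the same user gets a stable token at each partner, so each partner
# can still target that individual over time: individual-level targeting,
# which is the article's point.
print(website_token(nid, "publisher-one.example") == t_pub1)
```

Note the asymmetry this kind of design creates: the carrier, which holds the secret and the mapping, can always re-link tokens to the subscriber, whatever the partners can or cannot do.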

As noted above, the proposal by European telcos to embed themselves into the ad-tracking game has quickly attracted plenty of the wrong kinds of attention — with regulators and data protection experts querying the legal basis for the processing — as well as, more broadly — questioning the ethics of repurposing mobile network traffic for ad tracking.

News of the proposal to fire up individual-level ad-targeting at the carrier level in Europe made it into German press late last month where it was reported that Vodafone and Deutsche Telekom were testing TrustPid locally — with the German publisher Bild/Springer initially signed up (another local publisher, NTV/RTL Group, has since also been reported to have joined the tests).

A report in Spiegel called the TrustPid trial “the return of the supercookie” — a reference to a deeply unpopular tracking technique used by U.S. carrier Verizon about a decade ago (which also attracted FCC sanction).

“Cellular providers like Vodafone and Deutsche Telekom are in a unique position. Even if the browser routinely deletes cookies or even changes the IP address, the provider can still link the data traffic to the respective cell phone number,” Spiegel wrote in the report [translated from German with machine translation]. “Advertisers don’t want access to names or real mobile phone numbers, only to a pseudonymous identifier. However, this can quickly be reassigned to a specific user profile, for example when shopping in an online shop or logging in to an e-mail provider.”

The newspaper went on to quote a spokesperson for the data protection authority in North Rhine-Westphalia — raising questions about the appropriateness of TrustPid’s stated reliance on user consent for its legal basis. The DPA’s spokesperson added that the authority would be taking a closer look at the initiative’s compliance with EU data protection law.

Media attention to the TrustPid trial in Germany was quickly followed by an announcement by the country’s federal data protection authority, the BfDI — presumably getting a lot of alarmed inbound from citizens of the famously privacy-loving country at that point — admitting that the project was presented to it in 2021. But it emphasized it had not given any kind of sign-off on lawfulness of the approach.

Indeed, on the contrary, the federal authority said it had flagged a number of “data protection issues” vis-a-vis the proposal, including its focus on relying on consent for its legal basis.

“At that time, we pointed out various data protection problem areas, in particular the requirements for effective consent. However, we have NOT made any final project assessment or given any kind of approval. It was only agreed that there will be further consultations with the relevant telecommunications service providers in the future,” the authority wrote [in German; we’ve used machine translation] at the end of May.

Nonetheless, Vodafone et al. appear to have pressed on with their tests — which, earlier this month, were reported to have spread to Spain, via local carriers Movistar and Orange.

Asked about the legal basis being relied upon for the experimental tracking system, Simon Poulter, a senior spokesman for Vodafone, denied that TrustPid is akin to a ‘supercookie.’

“What we’re trialling in Germany is a system based on digital tokens which do not include any directly identifiable information. Participation in the trial is only possible after having previously given voluntary and explicit consent (so-called opt-in),” he told TechCrunch.

“For a single user, the token generated will be different for each different partner. This limits the merging of data from different parties to create extensive profiles on customers — one of the big drawbacks for consumers in the way digital advertising works today. The tokens are expired after 90 days providing consumers with further protection. The telecommunications providers do not enhance the tokens with any customer, traffic or location data nor is this provided by the service in any other way. Neither the partners, nor TrustPid itself, can identify an individual by means of the tokens created by TrustPid.”

In further remarks, Vodafone’s spokesman also claimed:

“The service doesn’t intercept or alter the data flows between a user and a website in any way, contrary to how other technologies sometimes called supercookies work” — and went on to dub it a “win-win” for users, who he also claimed can “take control over their online privacy and decide who can show them personalized content and advertising.”

While there are some technical differences between assigning a permanent, fixed ad identifier per mobile device and linking single-use pseudo-anonymous tokens to target ads per device, at bottom both are setting out to repurpose mobile network infrastructure for tracking. And many mobile users would say that sums to the same kind of creepy.

In TrustPid’s case, telcos banding together with select publishers to erect a whole new attention-sapping vector targeting mobile users — one that requires them to keep denying consent to ad-tracking as they go about their business on the mobile web, faced with yet another unfamiliar-sounding ‘partner’ in the laundry list of consent-demanding data processors in cookie pop-ups — does not sound like the kind of ‘control’ most people would prize.

It also pays to remember that a large chunk of current online advertising was recently found in breach of EU data protection rules — after the IAB Europe and its TCF framework were deemed to be delivering compliance theatre (rather than lawful compliance), exactly because of bogus reliance on non-compliant consent spam.

The IAB was given a few months to come up with a reformed approach. So a bunch of European carriers proposing a new wave of consent-based tracking of regional mobile users looks ill-thought through, to put it mildly.

Genuine user control — if that’s what Vodafone et al. actually want to deliver — would require this tracking infrastructure to be always off at source. Unless or until a mobile user instructed their telco to turn it on. Aka, making it opt-in.

But — as far as we can gather — that’s not how TrustPid has been designed to work.

TrustPid’s website claims users can withdraw their consent at any time via its Privacy Portal (i.e., in addition to repeatedly denying consent at the publisher website level). However when TechCrunch attempted this process — by accessing TrustPid’s bespoke “manage your consent” process via a mobile device connected to a participating mobile network — we were unable to access any controls that allowed us to actually opt out. (It’s possible the test has only been rolled out to a portion of participating carrier networks’ users; but if it’s not even clear who can opt out, that’s hardly a great look on the transparency front, either.)

The convoluted process TrustPid has devised to ‘opt out’ also merits a mention — as it requires browsing to this brand-name website (not your carrier’s own site) while connected to a participating mobile network (not Wi-Fi) and clicking on a “Verify me” button that’s accompanied by an off-putting chunk of text stating that you agree to the processing of your personal data “as detailed in the Privacy Notice [which is hyperlinked] in order to verify you and enable access to the ‘manage your consent’ section of the Privacy Portal.” (Actual quote; I kid ye not!)

When we tapped on this horrible-sounding “Verify me” button it disappeared, replaced by the tedious word “Accessing…” and a status bar that looped infinitely — never actually progressing to display anything, such as an ‘opt-out’ button.

So, in our experience, TrustPid’s claimed ‘opt out’ was indeed pure dark pattern theatre.

Moreover, since the TrustPid tokens are designed to re-spawn every 90 days, the opt-out-seeking user must — presumably — return afresh every three months to restate their desire not to be tracked.

If that’s control, it’s an exceptionally tedious flavor — one that makes a mockery of user agency by turning its exercise into a never-ending chore.

If TrustPid won’t require affirmative user consent via an opt-in, the telcos could at least provide a persistent, centralized opt-out.

Instead they seem to have devised a ‘control’ that’s either decentralized and scattered (i.e., across an unknown number of publisher consent flows) or complex and inherently ephemeral, as it perpetually resets on TrustPid’s own multilayered “Privacy Portal” — and, of course, they’ve branded all this as “privacy-friendly.”

Frankly it’s exhausting just describing it. (Let alone having to mark a calendar with a recurring event to refresh an opt-out of a thing we never asked to be included in in the first place.)

TechCrunch contacted Spain’s data protection watchdog about TrustPid’s tests in the country to ask if it has any concerns. The regulator confirmed it has received a complaint and the AEPD’s spokesperson told us it would process the complaint following standard procedures — so it remains to be seen whether it, or any German DPA, progresses to opening a formal investigation.

(The AEPD received a similar complaint against Apple’s IDFA — an ad-tracking ID (albeit a fixed one) the iPhone maker links to iOS devices — back in November 2020 and said at the time it would investigate that, though we’ve not seen any public outcome yet.)

Prior to a few DPAs expressing concerns, the TrustPid experiment landed on the radar of the Washington Post’s privacy engineering lead, Aram Zucker-Scharff — who tweeted this unreassuring assessment of what he’d spotted back in April, while pointing out that T-Mobile was already doing something similar in the U.S. on an opt-out basis.

Thing is, the U.S. does not have comprehensive data protection legislation to regulate how mobile users can be tracked. Whereas the European Union does — via the ePrivacy Directive, which regulates tracking technologies and mandates that users are asked for their consent to such tracking.

Europe’s top court has also weighed in in recent years — making it clear that consent for non-essential tracking must be obtained prior to storing or accessing the tracking tech.

There is also the EU’s General Data Protection Regulation (GDPR) — and its requirement for privacy by design and default; for transparency — and for consent to be informed, specific/non-bundled and freely given.

All of which should count for something when it comes to protecting European mobile users from creepy, network-level tracking.

Asked about TrustPid’s approach to consent, Poulter claimed no processing of users’ personal data occurs within the TrustPid system prior to a user accepting a cookie pop-up on a participating publisher’s website. “Explicit consent is collected via participating partners before the point of data processing,” he told us. “This consent is then used to provide the service. No tokens are generated unless consent is obtained. Each participating partner requires their own consent.”

However, per his description of the system, none of the participating carriers themselves ever proactively asks for user consent at any point — a step which would at least surface the fact that they are trying to repurpose subscribers’ mobile network traffic as ad-tracking infrastructure. So the source of the tracking looks obfuscated by design.

The average mobile user getting a pop-up on their device from their carrier — asking if they can use their IP and mobile number so websites can target them with “personalized” ads — would surely insta-hit the ‘no way José!’ button.

By outsourcing the gathering of consents to third party ad ‘partners,’ TrustPid’s approach looks intended to dodge denials — but by doing that it risks running counter to key principles baked into EU law.

There are also just the pure creepy optics. It looks hella baaaaaaad. Because this is mobile network traffic data. And can a telco really delegate collecting consent for that to a random grab bag of other advertising ‘partners’?

“Companies that operate communication networks should neither track their customers nor should they help others to track them,” Wolfie Christl, a researcher at Cracked Labs in Austria — who raised early concerns about TrustPid’s approach — told TechCrunch.

“I consider the project an irresponsible abuse of their very specific trusted position as communication network operators. It is a dangerous attack on the rights of millions. It appears they want to legally justify it with the misleading and meaningless pseudo-consent banners we have to deal with on websites every day, which is irresponsible and outrageous.”

“The project undermines trust into communication technology and should be stopped immediately,” Christl added. “I hope that European data protection authorities quickly team up and stop the project.”

Dr. Lukasz Olejnik, a privacy researcher and consultant based in Europe — who was similarly quick to query whether the telcos’ experiment complies with the EU’s ‘privacy by design’ requirements — also highlights how unpopular this sort of tracking tends to be with users.

“While some U.S. carriers tried to field test such systems years ago, it never really caught on. The thing is, people rather disliked such systems and it’s no wonder why. Building it with privacy is hard. I am not aware of any privacy considerations or thinking put into this TrustPid endeavour,” he said.

“When people subscribe to telecom carrier services, what they expect is a telecom service. Such additions are unexpected,” he added.

Other carriers involved in the TrustPid project that we contacted for comment referred us back to Vodafone — whose spokesperson did finally confirm that carriers do not intend to gather any consents themselves.

“The participating website must obtain explicit consent from the user at the point before any data processing begins,” said Poulter.

“TrustPid makes use of Vodafone’s network connectivity to anonymously identify a user on a website — once their consent has been expressly given. Only once that unique digital token is issued can advertisers and publishers use them for targeted advertisements. The tokens do not include any personally identifiable information. The tokens have a reduced lifespan and are specific to individual advertisers and publishers. The consumer is free to opt out at any time via the privacy portal that provides a transparent view of what consent they have given (i.e., opt in).

“Every brand or publisher token holds a consent against it, which can be revoked by the user at any time through a privacy portal. Once revoked, that brand or publisher can no longer use it for advertising. Vodafone does not control that process.”

Vodafone’s spokesman added: “We believe it is relevant to offer advertisers and publishers … a level playing field for the digital advertising sector but, most importantly, to offer end users greater control, choice and transparency.”

If Vodafone believes the tracking system it wants to subject mobile users to is indeed fair and transparent — and compliant with EU data protection law — why are experts and regulators concerned?

Poulter did not offer a direct response to that question — merely confirming that the telco “engaged with the BfDI to get its view from a telco regulation perspective.”

“We will also engage with other regional or national regulators where they have any queries,” he also told us, adding: “Specifically, the BfDI gave guidance on how to ensure compliance, including transparency and ensuring users can ‘reject’ with a single click at the first layer of consent request in the interface.”

Of course Vodafone et al. won’t be in control of the look and feel of cookie compliance on participating publishers’ websites — so won’t be in a position to ensure a clear ‘reject’ option is offered at the first layer. And given we all know what a total compliance trash fire cookie consent pop-ups generally remain, as resource-strapped DPAs have largely looked the other way at such widespread privacy breaches, it looks safe to assume TrustPid’s partners will deliver more of the same.

There’s a further twist in the tale, too, as the BfDI told us TrustPid itself has been established as a U.K.-based company — meaning it won’t be regulated by EU-based regulators — at a time when the U.K. government is moving forward on a plan to diverge domestic legislation from the EU’s data protection framework, including by loosening the rules around consent for cookies … Fancy that!

The German federal data protection authority also confirmed it was “merely informed” by Vodafone about its trial of the TrustPid technology together with Deutsche Telekom, as it regulates the two carriers.

“For TrustPID, the responsible data protection authority is not us but the British data protection authority ICO. The U.K.-based company TrustPid itself has not contacted the BfDI at any time,” it told us.

“The mobile network provider creates a unique, pseudonymous network identifier for TrustPid. Therefore TrustPid technology could be seen as a value-added service according to the ePrivacy Directive. But the BfDI emphasizes that only an informed and voluntarily given consent is an acceptable foundation for the use of this technology,” the authority went on, expressing scepticism about the use of consent for this type of tracking.

“High standards must be set here and we are sceptical that the current consent fulfils that aim,” it added. “The BfDI has not yet made a final decision regarding the data processing by Vodafone and Deutsche Telekom.”

A long-running EU engagement with TikTok — initiated following a series of child safety and consumer protection complaints filed back in February 2021 — has ended, for now, with the video sharing platform offering a series of commitments to improve user reporting and disclosure requirements around ads/sponsored content; and also to boost transparency around its digital coins and virtual gifts.

“Thanks to our dialogue, consumers will be able to spot all kinds of advertisement that they are exposed to when using this platform,” said the EU’s commissioner for justice, Didier Reynders, in a statement yesterday.

“Despite today’s commitment, we will continue to monitor the situation in the future, paying particular attention to the effects on young users,” he added.

TikTok was contacted for comment.

In its press release announcing the development, the Commission summarized the “main commitments” TikTok has agreed to — which are that:

  • users can now report ads and offers that could potentially push or trick children into buying goods or services;
  • branded content now abides by a policy protecting users, which prohibits the promotion of inappropriate products and services, such as alcohol, “get rich quick” schemes and cigarettes;
  • users are prompted to switch on a toggle when they publish content captioned with specific brand-related keywords such as #ad or #sponsored;
  • if a user has more than 10,000 followers, their videos are reviewed by TikTok against its Branded Content Policy and Community Guidelines to ensure that the content is appropriate;
  • policies clarify how to purchase and use coins, and pop-up windows will provide the estimated price in local currencies. Consumers are allowed to withdraw within 14 days from the purchase, and their purchase history is also available;
  • policies also clarify how to get rewards from TikTok and how to send gifts, for which users will be able to easily calculate their price;
  • paid advertisement in videos will be identified with a new label, which will be tested for effectiveness by a third party;
  • users are able to report undisclosed branded content, and new rules for hashtags and labels will be implemented.

However European consumer organization BEUC — which originated the complaint — has warned that “significant concerns” remain over how TikTok operates its platform, raising questions over the decision, at the EU level, to accept TikTok’s commitments and monitor implementation rather than take tougher enforcement action.

“We welcome that TikTok has committed to improve the transparency of marketing on their platform but the impact of such commitments on consumers remains highly uncertain. Despite over a year of dialogue with TikTok, the investigation is now closed, leaving significant concerns that we raised unaddressed,” warned BEUC’s deputy director general, Ursula Pachl, in a statement.

“We are particularly worried that the profiling and targeting of children with personalised advertising will not be stopped by TikTok. This is in contradiction with the five principles on advertising towards children adopted by the data protection and consumer protection authorities last week.”

“We now urge the authorities to closely monitor TikTok’s activities and to take national enforcement actions if commitments do not deliver. This must not be the end of the story. BEUC and our members will keep a close eye on the developments,” she added.

The Commission’s own press release — which kicks off with a headline claim that TikTok has agreed to align its rules with the EU’s on consumer protection — can’t avoid sounding doubtful that the full series of concerns has, in fact, been addressed by this grab-bag of policy tweaks. Especially in the case of children — which is the group of most concern here, given the platform’s overriding popularity with younger Internet users and children’s relative vulnerability to ‘sharp’ commercial practices vs adults.

And the Commission PR acknowledges that Member State level consumer protection agencies may end up taking action at a national level to address remaining concerns.

If that happens the whole saga will have (very slowly) come full circle, since a series of national consumer protection bodies fed the original complaint series that triggered the Commission coordinating the year+ long dialogue with TikTok in the first place — raising questions about how effective the EU’s modernization of its consumer protection framework has been at coordinating meaningful action where concerns are wide-ranging and cut across national borders.

If EU lawmakers’ strategy is to soft-pedal on hard consumer complaints — encouraging platforms to serve up a minimum of operational changes without local bodies resorting to a patchwork of enforcement — then perhaps the increased co-ordination, and the expanded role for the Commission itself in the process, is working as intended.

But, well, that scenario would suggest it’s EU citizens who are losing out in this ‘modernization’, as enforcement seems to have been de-emphasized — despite the bloc’s parallel adoption of more dissuasive penalties for widespread consumer protection infringements, which empower national authorities to issue fines of at least 4% of a company’s annual turnover.

“The Consumer Protection Cooperation Network (CPC) will actively monitor the implementation of these commitments, in 2022 and beyond,” writes the EU’s executive of TikTok’s commitments. “CPC authorities will, in particular, monitor and assess compliance where concerns remain, such as whether there is sufficient clarity around children’s understanding of the commercial aspects of TikTok’s practices. For example, for what concerns personalised advertising, in light of the recently published ‘5 key principles of fair advertising to children’.”

“The CPC will also carefully check the outcome of the testing of labels, as well as their implementation, and the adequacy of the display of the estimated unit price per coin in local currency when sending a gift,” it adds. “In addition, actions at national level may be launched to ensure that EU standards are respected and to guarantee that all platforms abide by the same rules.”

So, while further action could come at a national level to address remaining concerns — or, indeed, via the monitoring process, should TikTok be found failing to live up to its commitments — for now it appears to have escaped tougher action.

The Commission PR does point out that the EU’s network of data protection authorities “remain competent” to assess compliance of TikTok’s new policies and practices with the bloc’s data protection rules. However that’s a line doing a lot of heavy lifting given a mechanism within the EU’s General Data Protection Regulation (GDPR) that’s intended to streamline investigations of cross-border issues by funnelling complaints through a lead DPA has been accused of contributing to major enforcement bottlenecks.

Ireland’s Data Protection Commission (DPC), which is TikTok’s lead EU DPA — and also happens to be one of the most complained about DPAs when it comes to cross-border GDPR enforcement — opened two investigations into the platform in September 2021, one of which explicitly concerns how it processes children’s data. Both those probes remain ongoing.

On the children’s data inquiry, the DPC told TechCrunch today that it expects to send a draft decision to other interested EU DPAs to review (and potentially object to) by the end of August — suggesting a final decision on the kids’ data inquiry is not imminent.

This is because the (GDPR Article 60) review stage can take several months to play through. Plus, if objections are lodged by other DPAs, it may add many more months before a final decision is arrived at (either by majority consensus; or, if that can’t be found, by the EDPB stepping in) — which means there may not be a final call on whether TikTok’s processing of children’s data complies with EU data protection law until well into 2023.

In another cross-border GDPR case, for example — related to Twitter — it took from May 2020, when the DPC submitted its draft decision for review, to December 2020 for consensus to be reached, via a majority voting decision (following objections).

Additionally, in the case of the DPC’s GDPR transparency probe of WhatsApp, its draft decision was sent to other DPAs in late December 2020 — but a final decision wasn’t handed down until September 2021, after irreconcilable disputes between DPAs required the EDPB to step in and issue a binding decision on the DPC to substantially revise upward the size of the penalty, adding around half a year extra to the process.

So it’s a safe bet that TikTok’s processing of children’s data for ads isn’t facing immediate action from “competent” data protection authorities in the EU.

Nor, seemingly, is this issue compelling any of the bloc’s consumer protection authorities to act — despite all their months of concerns about TikTok’s practices. (Which includes the CPC Network endorsing the aforementioned ‘fair advertising’ principles for kids — which stipulate that: “Certain marketing techniques, e.g., personalised marketing, could be inappropriate to use due to the specific vulnerabilities of children.”)

The problem on the consumer protection agency front is likely down to regulators needing to ‘stay in their lane’ — or, basically, the CPC Network is waiting on Ireland’s DPC and the GDPR’s cross-border joint-working processes to do the work and reach a decision.

But while EU regulators play pass the parcel on child protection issues, TikTok of course gets to keep processing kids’ data for ads.

The platform is also evolving its legal terms — recently announcing an incoming change, applying to all users in the region from July 13, under which it will switch from relying on consent to process user data for targeted ads to claiming a legal basis known as ‘legitimate interests’.

So, basically, TikTok won’t be asking for EU users’ consent to process their data to run ‘personalized ads’ from next month.

Since the platform announced the planned switch, EU data protection experts have been raising red flags — querying the viability of TikTok using the legitimate interests legal base for such a purpose; and suggesting the change may mean TikTok won’t provide users with any choice but to accept behavioral advertising if they want to use its platform.

It’s not clear whether TikTok’s lead EU data protection regulator, Ireland’s DPC, has been consulted on these incoming changes which look extremely material to the data protection rights of all EU citizens.

We asked the DPC about the planned change of legal base by TikTok but at the time of writing we were still waiting on a response to a series of questions.

We also asked the Commission about the decision, taken via the coordinated consumer protection process it led, to accept TikTok’s commitments, despite consumer groups continuing to warn of significant concerns. But, again, at press time we were still waiting for comment.

Last year, Pinterest began its pivot from being an online image board to being more of a creator platform with the launch of Idea Pins. The new feature allows Pinterest users to tell their stories using a combination of video, images, music, and other editing tools, resulting in something that’s a cross between TikTok’s short videos and a Stories product with multiple pages of content. Today, Pinterest is opening up this new format to its advertisers with the launch of its new “Idea Ads.”

This ad format is designed to give brands a way to connect with their audience on Pinterest’s platform, which can then in turn drive traffic to their websites or inspire future shopping purchases — similar to much of Pinterest’s organic content. The Idea Ads themselves are basically just Idea Pins either crafted directly by marketers or produced in collaboration between a business and a creator who are working together. The latter is referred to as “Idea ads with paid partnership,” in Pinterest’s lingo.

The company envisions the new format as a better way for brands to reach their audience beyond using just video alone, as on TikTok and other video-first platforms. While video, of course, works well for storytelling, it can be difficult to share other crucial details — like the supply list for a DIY project, the ingredients or instructions needed to make a recipe, or the names of products used in a makeup tutorial along with links as to where to shop.

Both TikTok and YouTube have been experimenting with solutions to some aspects of this problem which involve placing shoppable items below the video in question. More recently, YouTube even laid the groundwork for a second-screen experience aimed at those watching videos on TV where they could shop on their phones while watching videos on the big screen.

In Pinterest’s case, however, brands may want to do more than just generate clicks to their website. They also may want to offer inspirational content to raise brand awareness or share a series of step-by-step instructions to help viewers complete a specific project. Those are things that aren’t as easily accomplished via merch shelves underneath videos alone.

According to Pinterest’s tests, people who saw Idea Ads were 59% more likely to recall that brand. Meanwhile, brands that worked with creators saw 38% higher brand awareness and 37% higher Pin awareness, it said. Scotch and Gatorade were among Idea Ads’ early adopters. Scotch worked with craft-focused creator Kailo Chic on a back-to-school shopping campaign that saw 64% lower cost per impression than Scotch’s benchmark goals. Gatorade worked with fitness creators VeraLaRo and Domonique Panton on Idea Ads that generated more than 14 million views and 34 million impressions.

In addition to today’s public launch of the new ad format, Pinterest also introduced a new paid partnership tool. The tool allows creators to both disclose and promote their brand partnerships. With the tool, creators can tag their brand partners directly in their content. Pinterest says brands who had tested the tool include Gatorade, 3M, Coty, and M.A.C. Cosmetics.

The new products were announced ahead of Pinterest’s return to the Cannes Lions International Festival of Creativity — the global ad festival — for the first time in three years, it said. It also follows this week’s news of its multi-million dollar partnership with Tastemade to scale creator content and live streaming across Pinterest starting later this year, which will also involve Idea Pins.

The company says the new ad formats and tools are now generally available to advertisers in over 30 countries worldwide.

A major privacy feature Apple launched last year, called App Tracking Transparency (ATT) — which requires third party apps to request permission from iOS users to track their digital activity for ad targeting — is facing another antitrust probe in Europe: Germany’s Federal Cartel Office (FCO) has just announced it’s investigating the framework over concerns that Apple could be breaching competition rules by self-preferencing or creating unfair barriers for other companies.

Last year, France’s antitrust regulator declined to pre-emptively block Apple from implementing ATT — but said it would be watching how Apple operates the feature. Poland also opened a probe of the feature at the end of last year.

The UK’s Competition and Markets Authority (CMA) also set out concerns about Apple’s implementation of ATT in a deep dive mobile market study published last week. However the UK watchdog has so far deferred intervention over the feature — prioritizing other areas of Apple’s business to investigate (such as App Store rules) while it waits for the government to enact a major competition reform targeting tech giants’ market power, which was confirmed as incoming in November 2020 but is still pending legislation (now not expected until next year at the earliest).

Germany is ahead of the curve here as its ex ante digital competition reboot came into force at the start of 2021 — targeting tech giants that are judged to have so-called “paramount significance for competition across markets” with tighter abuse controls.

Since then the FCO has been busy determining which giants the regime applies to — confirming, in a first decision in January, that Google meets the bar.

A number of other assessments are ongoing. And it’s still considering whether or not Apple’s business is in scope of the updated regime after opening a market power procedure last summer. But — as regards the ATT probe being announced today — the regulator says it’s taking action against conduct that “can possibly” be classified as meeting the definition for the ex ante powers to apply.

So it looks like the FCO is leaning towards the view that Apple will be in scope of the beefed-up regime — and, consequently, is trying to optimize future enforcement by opening a probe of Apple’s tracking rules now on the basis of the updated law (specifically, it cites Section 19a(2) sentence 1 of the updated competition act). Such parallel procedures should save time vs sequential working (i.e., waiting for the market power procedure to complete before probing ATT).

“In this context, the possibilities for Apple itself to combine data across services and users’ options regarding the processing of their data by Apple can be relevant, just like the question whether these rules may lead to a reduction of users’ choice of apps financed through advertising,” the FCO notes in a press release which gives some further hints of its concerns.

Commenting in a statement, Andreas Mundt, president of the regulator, added:

“We welcome business models which use data carefully and give users choice as to how their data are used. A corporation like Apple which is in a position to unilaterally set rules for its ecosystem, in particular for its app store, should make pro-competitive rules. We have reason to doubt that this is the case when we see that Apple’s rules apply to third parties but not to Apple itself. This would allow Apple to preference its own offers or impede other companies. Our proceeding is largely based on the new competencies we received as part of the stricter abuse control rules regarding large digital companies which were introduced last year (Section 19a German Competition Act — GWB). On this basis, we are conducting or have already concluded proceedings against Google/Alphabet, Meta/Facebook and Amazon.”

Apple was contacted for comment — and it sent this statement, attributed to a spokesperson:

“Apple believes in thriving and competitive markets, and through the App Store, we’ve helped millions of developers turn their brightest ideas into apps that change the world. In Germany alone, the iOS app economy supports hundreds of thousands of jobs and has given developers of all sizes the same opportunity to share their passion and creativity with users, while creating a secure and trusted place for customers to download the apps they love.

“Privacy has always been at the center of our products and features. At Apple, we believe that a user’s data belongs to them and they should get to decide whether to share their data and with whom. We have long believed in the power of advertising to connect businesses with customers — and that you can have great advertising with great privacy. App Tracking Transparency (ATT) simply gives users the choice whether or not they want to allow apps to track them or share their information with data brokers. ATT does not prevent companies from advertising or restrict their use of the first-party data they obtain from users with their consent.

“These rules apply equally to all developers — including Apple — and we have received strong support from regulators and privacy advocates for this feature. Apple holds itself to a higher privacy standard than almost any other company by providing users with an affirmative choice as to whether or not they would like personalized ads at all.

“We will continue to engage constructively with the FCO to address any of their questions and discuss how our approach promotes competition and choice, while protecting users’ privacy and security.”

European regulators’ concerns over ATT appear to be centered not on the fact that Apple is requiring app developers to ask users for consent to track them (which several competition watchdogs have acknowledged as a privacy benefit for users), but rather on the concern that Apple is tilting the playing field by not applying the same user-facing process to its own ‘personalized’ ads, which do not trigger the ATT pop-up that’s been accused of generating friction for rivals’ ads.

For its part, Apple argues such a comparison is unfair, since it is using first-party (i.e. iOS user) data for its ads; it further claims to apply higher privacy standards than third parties in how it targets them.

However, European antitrust regulators may take a different view, given their overriding focus on competition — and they could force Apple to make changes to how it implements ATT in these markets as they continue to scrutinize the operational detail.

For example, in a section in its mobile market report on Apple’s ATT privacy framework, the UK’s competition regulator writes: “It is clear that there are privacy benefits associated with the introduction of ATT as it enhances users’ privacy and control over their personal data and significantly improves developers’ compliance with data protection law, which requires developers to have user or subscriber consent to access information from their device” — before switching gears to assess “whether and to what extent ATT undermines the current model of advertising to users of mobile devices”; and discuss how its implementation “may benefit Apple’s own advertising services and reinforce its position in app distribution”, with the CMA concluding that Apple’s choice architecture for ATT is “potentially problematic”.

The UK regulator also goes on to raise concerns that Apple’s current implementation of ATT “is likely to result in harm to competition, make it harder for app developers to find customers and to monetise their apps, and ultimately harm consumers [by raising app prices and/or reducing quality/variety]”, adding: “[W]e consider that there are a number of ways in which the potential competition harms of ATT could be mitigated while retaining the benefits in terms of user choice and privacy.”

Another recent regulatory development is hanging over tech giants operating in the region: Incoming changes to European Union competition law aimed at setting ‘rules of the road’ for how so-called Internet gatekeepers must do business — which are set to pre-emptively ban practices like self-preferencing — were agreed earlier this year, and are due to come into force in Spring 2023.

The UK’s competition watchdog has published its final report on a comprehensive, year-long mobile ecosystem market study — cementing its view that there are substantial concerns about the market power of Apple and Google which require regulatory intervention.

Back in December, its preliminary report on the market study also identified concerns and discussed potential remedies for tackling lock-in and opening up the pair’s “largely self-contained ecosystems”, such as by making it easier for consumers to switch and reducing barriers for app developers.

The Competition and Markets Authority (CMA)’s 356-page final report goes into greater depth and detail on all fronts, analyzing a smorgasbord of competition concerns attached to how Apple and Google operate their respective, dominant mobile ecosystems, iOS and Android. It digs into topics as varied as Apple’s App Tracking Transparency feature; a Google developer revenue-sharing agreement codenamed ‘Project Hug’; and the merits of developing web apps (a discussion that features a chat with the maker of popular puzzle game, Wordle), to pull out a few highlights. Throughout, the regulator points to the pair’s sustained profitability, and profits it assesses as “high in absolute terms”, as an indelible, top-line signal that market distortion is afoot.

In a press release accompanying the report, the CMA sums up its conclusions by asserting that Apple and Google “hold all the cards” in the mobile ecosystems market — and that interventions are needed “to give innovators and competitors a fair chance to compete”.

While there’s likely to be a fair degree of déjà vu for industry watchers — given the CMA’s preliminary report last year also flagged some of the same problems and discussed potential remedies — this time the UK regulator is taking action. Albeit, the processes this will entail are not quick, so it could be years before it’s in a position to actually intervene and order changes to how the tech giants operate in relation to concerns its report has identified. But, well, the train is now starting to leave the station at least.

Specifically, the CMA is now proposing to open an in-depth probe with two points of focus: One on Apple and Google’s market power in mobile browsers; and another on Apple’s restrictions on cloud gaming through its App Store. (NB: The regulator has a duty to consult before it opens what’s called a market investigation reference, or MIR, relying on its existing competition powers.)

On mobile browsing, the CMA is concerned about Apple’s ban on non-WebKit-based browsers on iOS — which it suspects severely limits rival browsers’ ability to differentiate themselves from Apple’s Safari, as well as suggesting the restriction limits Apple’s incentive to further develop its own browser.

The CMA is further worried about how Apple’s ban on non-WebKit-based browsers on iOS limits the capabilities of web apps on its platform — hampering their ability to compete with native apps (which Apple of course monetizes via its App Store fees).

Mobile browser defaults also appear to be in scope of the proposed MIR, with the CMA noting that mobile devices typically have either Google’s Chrome or Apple’s Safari pre-installed and set as default at purchase — “giving them a key advantage over other rival browsers”.

On cloud gaming, the CMA says it wants to look into Apple blocking these services on its App Store and how that might be harming consumers, such as if its action is hampering the sector from growing. It further notes that gaming apps are a key source of revenue for the iPhone maker, suggesting cloud gaming tech could also pose a threat to Apple’s strong position in app distribution.

Its consultation on the proposed MIR will run until July 22.

In parallel, the regulator is also announcing that it’s taking enforcement action against Google in relation to its app store payment practices — where it says it suspects the adtech giant of anti-competitive practices.

This competition law investigation will focus on Google’s rules governing apps’ access to listing on its Play Store — looking at conditions it sets for how users can make in-app payments for certain digital products. (NB: The CMA has an open investigation into Apple’s App Store, announced in March last year — so this looks like a mirror action to address Google’s practices but one that’s likely to lag the more advanced investigation into Apple’s mobile app store terms.)

According to its report, the CMA has decided to step up a gear now because mobile developers have been complaining to it in the months since its preliminary report also flagged a grab-bag of competition concerns.

But the regulator is also acting now using its existing powers because it’s essentially being forced to as a result of the UK government’s decision to decelerate a planned ex ante reboot of digital competition rules (which the CMA had previously envisaged as the best vehicle to address antitrust concerns linked to Big Tech market power, including in mobile) — hence its report acknowledging (with quasi-regret) “we now understand this [legislation to empower the Digital Markets Unit] will not be in the current parliamentary session (i.e. within the next year)”, adding: “Based on these developments, we now consider it to be the right time to consult on making a market investigation reference [MIR] into mobile browsers and cloud gaming.”

So the bottom line is that the UK’s competition regulator is having to make do with its current (ex post) competition powers to address substantial and sustained antitrust concerns attached to fast-moving digital giants — because the UK government has failed to prioritize the necessary ex ante reforms.

The CMA’s report acknowledges that European Union regulation could, therefore, end up having a first mover impact on strategic digital market power — since the bloc has already agreed its own ex ante competition reform (the Digital Markets Act; DMA), which is likely to come into force early next year. So, er, so much for Brexit taking back regulatory control then!

“[T]he DMA will be one starting point for Apple and Google when deciding how to address these international competition concerns, many of which are similar to ours,” the CMA writes in a chapter of the report discussing international developments. “As a result, Apple and Google may make changes to the mobile ecosystem that will address some of the current restrictions on effective competition on a global basis, which could resolve the competition concerns that have been raised in a number of jurisdictions, including the UK.”

One slight potential upside of the UK’s legislative delay on digital competition reform is that the CMA has at least used this interim period to undertake detailed scrutiny of the mobile market — the consequences of which are likely to be long and deep, as the regulator suggests its conclusions will feed future interventions by the DMU, aka the dedicated unit established inside the regulator last year to oversee a “pro-competition” regime in digital markets that’s intended to target the most powerful platforms (but still lacks the necessary legislation).

“We expect the findings of this market study to be an input into any DMU assessment of whether Apple and Google should be designated with SMS in particular activities,” the CMA writes, making a reference to Strategic Market Status, aka the status in the planned reform that would mean they are in-scope of the future ex ante code of conduct (and also able to be subject to so-called ‘pro-competition interventions’, which are set to be tailored per entity, not one-size-fits-all). “The study will also inform the appropriate range and design of potential interventions that the DMU could put in place, were it to find either Apple or Google to have SMS.”

“Our expectation based on the findings in this study and the evidence to date, is that Apple and Google would meet the criteria (as currently outlined in the government’s consultation response) to be found to have SMS in respect of the following activities within their ecosystems; mobile operating systems (and for Apple, together with the mobile device on which it is installed, to the extent these are inextricably linked), native app distribution, and mobile browsers and browser engines. As a result, we expect that the interventions which we have considered in this study would generally be in scope of the new regime,” it adds.

The UK regulator will surely be hoping that time spent waiting for the government to empower the DMU can — eventually — turn into future enforcement gains, i.e. once the DMU is on a proper legal footing, and as a result of it undertaking all this comprehensive market analysis in the meanwhile. (The CMA has previously done a deep dive into the digital advertising market — where it also concluded there are major structural problems with Google but, similarly, opted to wait for the government to legislate.)

But there’s no doubt the government’s decision to kick the reform down the road means tech giants like Apple and Google have bought themselves a lot more time to keep extracting UK rents.

Commenting on the mobile market study in a statement, the CMA’s CEO, Andrea Coscelli, said:

“When it comes to how people use mobile phones, Apple and Google hold all the cards. As good as many of their services and products are, their strong grip on mobile ecosystems allows them to shut out competitors, holding back the British tech sector and limiting choice.

“We all rely on browsers to use the internet on our phones, and the engines that make them work have a huge bearing on what we can see and do. Right now, choice in this space is severely limited and that has real impacts – preventing innovation and reducing competition from web apps. We need to give innovative tech firms, many of which are ambitious start-ups, a fair chance to compete.

“We have always been clear that we will maximise the use of our current tools while we await legislation for the new digital regime. Today’s announcements — alongside the eight cases currently open against major players in the tech industry, ranging from tackling fake reviews to addressing problems in online advertising — are proof of that in action.”

Apple and Google were contacted for a response to the CMA’s findings.

Both tech giants sought to play down the idea that their stewardship of their respective mobile ecosystems has any negative impacts for consumers or other businesses.

Here’s Apple’s statement: 

“We believe in thriving and competitive markets where innovation can flourish. Through the Apple ecosystem we have created a safe and trusted experience users love and a great business opportunity for developers. In the UK alone, the iOS app economy supports hundreds of thousands of jobs and makes it possible for developers big and small to reach customers around the world.

“We respectfully disagree with a number of conclusions reached in the report, which discount our investments in innovation, privacy and user performance — all of which contribute to why users love iPhone and iPad and create a level playing field for small developers to compete on a trusted platform. We will continue to engage constructively with the Competition and Markets Authority to explain how our approach promotes competition and choice, while ensuring consumers’ privacy and security are always protected.”

A Google spokesperson also sent us this statement:

“Android phones offer people and businesses more choice than any other mobile platform. Google Play has been the launchpad for millions of apps, helping developers create global businesses that support a quarter of a million jobs in the UK alone. We regularly review how we can best support developers and have reacted quickly to CMA feedback in the past. We will review the report and continue to engage with the CMA.”

For a hint of what (more) may be to come, finally — if/when the DMU finally gets empowered and a new UK competition regime is up and running — Chapter 8 of the CMA’s report discusses a broad range of potential remedies for addressing competition concerns attached to the Apple-Google mobile duopoly, from making switching ecosystems easier for consumers; to lowering barriers for new OSes; to making interventions to aid native app distribution, or at the level of app store commission, or to support competition between app developers.

The report also touches on a number of potential separation remedies — namely data separation; operational separation; and structural separation — but the CMA sounds wary of going that far, without entirely ruling it out. “Given the significant costs, business disruptions, and risks of unintended consequences associated with these forms of intervention, we consider there are alternatives available with the potential to deliver many of the benefits with significantly lower cost and risks,” it writes on that. 

“In particular, we envisage that at this stage the interventions proposed above to level the playing field between Apple’s and Google’s own apps and third parties, would have the potential to deliver many of the benefits with comparably lower costs,” it goes on, before adding: “However, should Apple and Google act against consumers interests by making it unreasonably difficult for competing apps to successfully enter and expand, then separation could be reconsidered as an alternative which directly addresses their incentives to favour their own businesses.”

Returning to the immediately proposed interventions, if the MIR goes ahead as the CMA is proposing, it will have 18 months from the date the reference is made to conclude the investigation of Apple and Google’s market power in mobile browsers and Apple’s approach to cloud gaming — with the possibility of an extension of a further 6 months in exceptional circumstances. So it could be spending two full years digging into this.

The aim of a market investigation is to consider whether there are features of a market that have an adverse effect on competition (AEC).

If the CMA finds there is an AEC, it has a range of (existing) powers to impose its own remedies, such as being able to enforce behavioral requirements or even order the sale of parts of a business, as well as being able to make recommendations to other bodies (such as sectoral regulators or the government) for other appropriate interventions to support improving competition.

But, again, such interventions aren’t likely to deliver overnight results as they can also take time to implement, plus there’s the high possibility that enforcement orders would be appealed. So, again, any UK fix for the Apple-Google duopoly won’t be quick. 

Jailed Kremlin critic, Alexey Navalny, has hit out at adtech giants Meta and Google for shutting off advertising inside Russia following the country’s invasion of Ukraine — a move he argues has been a huge boon to Putin’s regime by making it harder for the opposition to get out anti-war messaging.

The remarks came after Navalny was asked to address a conference on democracy. Not in person, of course, as he remains incarcerated in Russia; rather, he posted the comments on his website.

“It would be downright banal to say that the new information world can be both a boon for democracy and a huge bane. Nevertheless, it is so,” he writes. “Our organization has built all its activities on information technology and has achieved serious success with it, even when it was practically outlawed. And information technology is being actively used by the Kremlin to arrest participants in protest rallies. It is proudly claimed that all of them will be recognized even with their faces covered.

“The Internet gives us the ability to circumvent censorship. Yet, at the same time, Google and Meta, by shutting down their advertising in Russia, have deprived the opposition of the opportunity to conduct anti-war campaigns, giving a grandiose gift to Putin.”

Navalny has previously called for Meta and Google to allow their adtech to be weaponized against Putin’s propaganda machine — arguing that highly scalable ad targeting tools could be used to circumvent restrictions on access to free information imposed by the regime as a way to show Russian citizens the bloody reality of the “special military operation” in Ukraine.

Now, in thinly veiled criticism of the tech giants — which would presumably be delivered in a sarcastic tone if his address were being given in person — Navalny writes: “Should the Internet giants continue to pretend that it’s ‘just business’ for them and act like ‘neutral platforms’? Should they continue to claim that social network users in the United States and Eritrea, in Denmark and Russia, should operate under the same rules? How should the internet treat government directives, given that Norway and Uganda seem to have slightly different ideas about the role of the internet and democracy?

“It’s all very complicated and very controversial, and it all needs to be discussed while keeping in mind that the discussion should also lead to solutions.”

“We love technology. We love social networks. We want to live in a free informational society. So let’s figure out how to keep the bad guys from using the information society to drive their nations and all of us into the dark ages,” he adds.

Meta and Google were contacted for a response to the criticism but at the time of writing neither had sent comment.

The tech industry’s response to the war in Ukraine remains patchy, with Western companies increasingly closing down services inside Russia — but not all their services.

For example, despite shuttering advertising inside Russia, Meta and Google have not shut down access to their social platforms, Facebook and YouTube — likely as they would argue these services help Russians access independent information vs the state-controlled propaganda that fills traditional broadcast media channels in the country.

In Facebook’s case, it’s an argument that was bolstered when Russia’s Internet regulator targeted the service soon after the invasion of Ukraine — initially restricting access; and then, in early March, announcing that Facebook would be blocked after the company had restricted access to a number of state-linked media outlets.

Interestingly, though, Google-owned YouTube appears to have escaped a direct state block — although it has received plenty of warnings from Russia’s Internet regulator in recent months, including for distributing “anti-Russian ads”.

This discrepancy suggests the Kremlin continues (for now) to view YouTube as an important conduit for its own propaganda — likely owing to the platform’s huge popularity in Russia, where use of YouTube outstrips locally developed alternatives (like GazProm Media-owned Rutube), which would be far easier for Putin’s regime to censor.

This is not the case for Facebook — where the leading local alternative, VK.com, has been massively popular for years — making it easier for the Kremlin to block access to the Western equivalent since Russians have less incentive to try to circumvent a block by using a VPN.

However if the Kremlin is intent on shaping citizens’ access to digital information over the long haul it may not be content to let YouTube’s popularity stand — and could opt to use technical means to degrade access while actively promoting local alternatives, as a strategy to drive usage of local rivals until they’re big enough to supplant the influence of the foreign giant. (And, indeed, reports have suggested the Kremlin is sinking money into Rutube.)

Given YouTube’s ongoing influence in Russia — coupled with rising threats from Russia’s state regulator that YouTube remove ‘banned content’ or face fines and/or a slowdown of the service — Navalny may have, at least, an overarching point that Google risks playing right into Putin’s hands.

The jailed opposition politician has also been even more critical of local search giant, Yandex — over its equivalent service to Google News, which operates in a regulatory regime that requires it to aggregate only state-approved media sources, allowing the Kremlin to shape the narrative it presents to the millions of Russians who visit a search portal homepage where this News feed is displayed.

Back in April, Yandex announced that it had signed a deal to sell News and another media property, called Zen, to VK — but it remains to be seen how, or indeed whether, this ownership change will make any difference to the state-controlled news narrative Russians are routinely exposed to when they visit popular local services.