
Google has won an appeal against a class action-style privacy litigation at the UK Supreme Court — avoiding what could have been up to £3BN in damages had it lost the case.

The long-running litigation was brought by veteran consumer rights campaigner Richard Lloyd, who, since 2017, has been pursuing a collective lawsuit alleging Google used a workaround to override iPhone users’ privacy settings in Apple’s Safari browser between 2011 and 2012 — and seeking compensation for the breach on behalf of the estimated 4 million+ UK iPhone users affected.

Lloyd’s litigation had sought damages for privacy harms. More broadly, the suit sought to establish that a representative action could be brought in the UK seeking compensation for data protection violations — despite the lack of a generic class action regime in UK law.

Back in 2018 the High Court blocked the suit from proceeding — but the following year the Court of Appeal overturned the judgment, allowing the lawsuit to be heard.

However, today’s unanimous Supreme Court judgment essentially reverts to the High Court’s view, blocking the representative action.

The Supreme Court justices took the view that damage/loss must be suffered to claim compensation and that the need to prove damage/loss on an individual basis cannot be skipped — meaning compensation cannot simply be applied uniformly for “loss of control” of personal data for each member of the claimed representative class, as the Lloyd litigators had sought. 

“Without proof of these matters, a claim for damages cannot succeed,” the Supreme Court writes, summarizing its judgement.

The ruling is a major blow to UK campaigners’ hopes of being able to bring class action-style suits against the tracking industry.

Had Google lost, the judgement would have opened the gates to more representative actions being brought for privacy violations. But with the adtech giant winning the appeal, the ruling is likely to put a major chill on UK class action-style suits targeting data-mining tech giants — which had, in recent years, been attracting commercial litigation funders.

Responding to the judgement today, one law firm, BLM, wrote that the outcome of the case “will be cause for celebration for Google and any organisation that handles significant amounts of data or bases its business model on the use of personal data (as well as their shareholders and/or insurers)”.

Another law firm, Linklaters LLP, described the judgement as “a big blow to claimant law firms and funders who had hoped to create a new opt out regime for damages in the data breach sphere”.

“We would expect a lot of similar claims issued in its wake now to fall away,” added Harriet Ellis, a dispute resolution partner at Linklaters, in a statement. “Claimant firms will be studying the decision carefully to see if there [are] any viable opt-out class actions that can still be brought. But it looks really tough.”

We’ve reached out to Mishcon de Reya, the law firm representing Lloyd, for comment.

In its own response to the Supreme Court judgement, Google avoided any discussion of the case detail — writing only:

“This claim was related to events that took place a decade ago and that we addressed at the time. People want to know that they are safe and secure online, which is why for years we’ve focused on building products and infrastructure that respect and protect people’s privacy.”

But a spokesperson for the tech giant also pointed to a statement put out by the techUK trade association — which had intervened in the case in support of Google; and which writes today that “had the appeal been rejected, this would have opened the door for speculative and vexatious claims to be made against data controllers, with far-reaching consequences for both public and private organisations”.

The UK trade association goes on to claim that it “does not oppose representative legal action, however, we believe it is right that any action must first seek to establish whether damage has been caused to the individual as a result of a data breach before seeking compensation”.

However, as the Supreme Court justices note — in discussion of the costs of ‘opt in’ (rather than ‘opt out’) litigation regimes — the bar to accessing justice can simply be pushed out of reach in cases where individual claims are only worth a few hundred pounds apiece (in the Lloyd litigation the suggestion was a sum of £750 per person) because the associated case administration costs of processing individual claimants “may easily exceed the potential value of the claim”.

So — to be clear — techUK is opposing representative legal actions being brought over almost any data violation.

The UK’s data protection watchdog, meanwhile, has shown a complete lack of willingness to enforce the law against the data-mining adtech industry — despite the ICO warning, since 2019, of rampantly unlawful tracking.

The UK government is also now consulting on weakening the domestic data protection regime.

So the question of how exactly the average UK citizen can exercise the privacy rights that UK law, on paper, wraps around their information looks, well, pretty murky right now…

Rights groups have responded to the Supreme Court judgement by calling for the government to legislate for collective redress.

In a statement the Open Rights Group‘s executive director, Jim Killock, said: “There must be a way for people to seek redress against massive data breaches, without having to risk their homes, and without relying on the Information Commissioner alone.

“The ICO cannot act in every case, and is sometimes unwilling to do so. We have waited over two years for action against the Adtech industry, which the ICO says is operating unlawfully. There is no sign of action.

“Yet it would be completely unreasonable for someone to risk their home over court fees in cases like this. Without a collective mechanism, that is where we are left: in many cases data protection is very hard to enforce against tech giants.

“The Government should keep its word, and consider implementing collective action under GDPR, which i[t] specifically rejected in February on the grounds that Lloyd vs Google showed that existing rules could provide a path for redress.”

Is this the beginning of the end for the hated tracking cookie consent pop-up? A flagship framework used by Google and scores of other advertisers for gathering claimed consent from web users for creepy ad targeting looks set to be found in breach of Europe’s General Data Protection Regulation (GDPR).

A year ago, in a preliminary report by the investigatory division of the Belgian data protection authority, the IAB Europe’s self-styled Transparency and Consent Framework (TCF) was found to fail to comply with GDPR principles of transparency, fairness and accountability, as well as the lawfulness of processing.

The complaint then moved to the litigation chamber of the DPA — and a whole year passed without a decision being issued, in keeping with the glacial pace of privacy enforcement against adtech in the region.

But the authority is now in the process of finalizing a draft ruling, according to a press statement put out by the IAB Europe today. And the verdict it’s expecting is that the TCF breaches the GDPR.

It will also find that the IAB Europe is itself in breach. Oopsy.

The online advertising industry body looks to be seeking to get ahead of a nuclear finding of non-compliance, writing that the DPA “will apparently identify infringements of the GDPR by IAB Europe”, and trying to further spin the finding as ‘fixable’ within six months (it doesn’t say how, however) — while simultaneously implying the breach finding may not itself be final because other EU DPAs still need to weigh in on the decision as part of the GDPR’s standard cooperation procedure (which applies to cross-border complaints).

The pre-emptive statement (and its Friday afternoon timing) looks very much like the IAB Europe trying to both fuzz and bury bad news and thereby calm the nerves of the tracking industry ahead of looming headlines that a flagship tool is unlawful — something EU privacy campaigners have of course been saying for literally years.

In terms of timing, a final verdict on the investigation is still likely months off — and may not emerge ’til deep into 2022. Appeals are also almost inevitable. But the tracking industry’s problems are starting to look, well, appropriately sticky. 

In the short term, the IAB says it expects a draft ruling to be shared by Belgium with other EU DPAs in the next two to three weeks — at which point they get 30 days to review it and potentially file objections.

If DPAs don’t agree with the lead authority’s finding, and can’t agree among themselves, the European Data Protection Board may need to step in and take a binding decision — as happened in another cross-border case against WhatsApp (which led to a $267M fine, a larger penalty than the lead DPA in that case had originally proposed).

So this GDPR cooperation mechanism can spin procedures out for many more months yet.

Complainants against the IAB Europe and its TCF, meanwhile, told us they have neither seen nor been given details of the draft ruling by the DPA.

So it looks pretty whiffy that the ad industry body has had sight of an incoming decision ahead of the other parties to the complaint.

But one of the complainants, the Irish Council for Civil Liberties’ Johnny Ryan, quickly posted a press statement of his own, in which he writes: “We have won. The online advertising industry and its trade body, ‘IAB Europe’, have been found to have deprived hundreds of millions of Europeans of their fundamental rights.

“IAB Europe designed the misleading ‘consent’ pop-ups that feature on almost all (80%+) European websites and apps. That system is known as IAB Europe’s ‘Transparency & Consent Framework’ (TCF). These popups purport to give people control over how their data are used by the online advertising industry. But in fact, it does not matter what people click.”

The looming finding of unlawfulness comes at an interesting time for the tracking ads industry with moves afoot in the European Parliament to push for an outright ban on behavioral advertising to be incorporated into incoming pan-EU regulations for digital services — in favor of privacy-safe alternatives like contextual advertising.

A finding that the flagship tool used by the tracking industry to claim ‘consent’ to behavioral ads isn’t actually operating lawfully under EU law will surely amplify calls to clean house by outlawing the practice entirely.

 

According to the IAB Europe, the draft ruling by the Belgian DPA will find that it is a data controller for TCF “TC Strings”, aka “the digital signals created on websites to capture data subjects’ choices about the processing of their personal data for digital advertising, content and measurement”, as it puts it.

(Or — in Ryan’s words — “the identification code created about a person, based on which apps they use and which websites they visit, and what they click in consent popups”.)
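For a concrete sense of what a TC String actually is, here is a minimal sketch in TypeScript (for Node.js), assuming the publicly documented TCF v2 layout in which a 6-bit version field sits at the front of a base64url-encoded bit field. It is not the IAB’s reference decoder, and the sample string is hypothetical:

```typescript
// Illustrative sketch only (not the IAB reference decoder): a TC String is a
// base64url-encoded bit field. This pulls out just the version number from the
// first 6 bits of the core segment; real decoders go on to unpack the
// per-purpose and per-vendor consent bits packed further into the string.
function decodeTcStringVersion(tcString: string): number {
  const core = tcString.split(".")[0];                     // core segment precedes any '.'
  const b64 = core.replace(/-/g, "+").replace(/_/g, "/");  // base64url -> standard base64
  const bytes = Buffer.from(b64, "base64");                // Node.js Buffer
  return bytes[0] >> 2;                                     // top 6 bits = version field
}

// Hypothetical example string (real TCF v2 strings also begin with "C", i.e. version 2):
console.log(decodeTcStringVersion("CPc8aQAPc8aQAAAA")); // -> 2
```

The point of the sketch is simply that the “choice” a user makes in a consent pop-up ends up as a compact string of bits passed around the ad supply chain.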

It will also find the IAB Europe is a “joint controller” for TC Strings that are used in OpenRTB (Real Time Bidding) — meaning the industry body will have a string of risky new responsibilities attached to the data processing around programmatic behavioral advertising (with legal liability aplenty and the risk of big fines if it fails to live up to requirements in the GDPR such as privacy by design and default; consent that’s specific, informed and freely given; and appropriate security wrapping people’s data).

Here’s Ryan again, laying out the parallel case against RTB in brief:

“For almost four years, websites and apps have plagued Europeans with this ‘consent’ spam. But our evidence reveals that IAB Europe knew that conventional tracking-based advertising was “incompatible with consent under GDPR” before it launched the consent system.

“This is because the primary tracking-based ad system, called ‘Real-Time Bidding’ (RTB), broadcasts internet users’ behaviour and real-world locations to thousands of companies, billions of times a day. RTB is the biggest data breach ever recorded. There is no way to protect data in this free-for-all. (We are litigating against RTB in Hamburg, too.)

“In proceedings initiated by a group of complainants coordinated by the Irish Council for Civil Liberties, the Belgian Data Protection Authority is close to adopting a draft decision that will find IAB Europe’s ‘consent’ pop-up system infringes the GDPR, vindicating our arguments over several years.”

The IAB Europe’s spin, as it tries to eschew responsibility for protecting people’s data, is to spread the blame elsewhere — claiming it has not considered itself a data controller “based on guidance from other DPAs up to now”, among other excuses.

“Therefore, it has naturally not fulfilled certain obligations that accrue to data controllers under the Regulation,” the IAB Europe goes on — studiously avoiding making any kind of apology.

(Here’s Ryan’s take again: “IAB Europe is jointly responsible and liable with thousands of online advertising firms when personal data are broadcast into the RTB data free-for-all. IAB Europe had tried to deny this.”)

Instead of apologizing, the IAB Europe directs its energy toward suggesting there will be an easy way to fix the tracking industry’s lawfulness problem, writing: “The draft ruling will require IAB Europe to work with the APD to ensure that these obligations are met going forward.”

Making more market calming noises, it also describes itself as “optimistic” that the TCF can be fixed.

But, well, it would say that, wouldn’t it?

The online ad industry body has previously denied there was any case to bring against the TCF or RTB’s use of people’s data.

So, well, its record here shouldn’t inspire confidence.

“Google and the entire tracking industry relies on IAB Europe’s consent system, which will now be found to be illegal”, added Ryan in a statement. “IAB Europe created a fake consent system that spammed everyone, every day, and served no purpose other than to give a thin legal cover to the massive data breach at the heart of online advertising. We hope the decision of the Belgian Data Protection Authority will finally force the online advertising industry to reform.”

Another complainant in the case, Jef Ausloos, a postdoc researcher in data privacy at the University of Amsterdam, suggests the IAB Europe’s statement is an attempt to sow doubt among other EU DPAs — and called its claim that identification codes used for targeted advertising aren’t personal data “preposterous”.

He also described the Belgian finding as “only the very start of the process as I see it”, adding: “We’ve come a long way already but, regardless, this will still take a while”.

At the time of writing, the Belgian DPA had not responded to our request for confirmation of an impending draft ruling.

A spokeswoman for the IAB Europe claimed it has “only been informed about the headline findings of the draft ruling”. She did not specify how it had obtained the information ahead of the complainants.

Google has responded to allegations contained in a recently unsealed US antitrust lawsuit that it worked covertly to stall European Union privacy legislation that could have blasted a huge hole in its behavioral advertising business.

Per the US states’ suit, a couple of years after a European Commission proposal to update the EU’s ePrivacy Directive — to replace it with a more widely applicable Regulation — the tech giant was privately celebrating what it described as a “successful” tilt at “slowing down and delaying” the privacy legislation.

The update to the EU’s privacy rules around people’s electronics communications (and plenty more besides) remains stalled even now, with negotiations technically ‘continuing’ — just without any agreement in sight. So Google’s ‘success’ looks overwhelming.

That said, the adtech giant can’t take all the credit: The US states’ case against Google quotes an internal memo from July 2019 — in which it claims to have been “working behind the scenes hand in hand” with the other four of the ‘big five’ tech giants (GAFAM) to forestall consumer privacy efforts.

Here’s the relevant allegation from the antitrust case against Google:

“(b) Google secretly met with competitors to discuss competition and forestall consumer privacy efforts. The manner in which Google has actively worked with Big Tech competitors to undermine users’ privacy further illustrates Google’s pretextual privacy concerns. For example, in a closed-door meeting on August 6, 2019 between the five Big Tech companies—including Facebook, Apple, and Microsoft—Google discussed forestalling consumer privacy efforts. In a July 31, 2019 document prepared in advance of the meeting, Google memorialized: “we have been successful in slowing down and delaying the [ePrivacy Regulation] process and have been working behind the scenes hand in hand with the other companies.””

As well as putting questions to Google, TechCrunch contacted Amazon, Apple, Facebook and Microsoft about the August 6 meeting referenced in Google’s memo.

A spokeswoman for Microsoft declined to comment — saying only: “We have nothing to share.”

Amazon and Facebook did not respond to repeated requests for comment.

However last December Politico reported on an internal Amazon document, dating from 2017, which showed the ecommerce giant making an eerily similar boast about eroding support for the ePrivacy Regulation.

“Our campaign has ensured that the ePrivacy proposal will not get broad support in the European Parliament,” Politico reported Amazon writing in the document. “Our aim is to weaken the Parliament’s negotiation position with the Council, which is more sympathetic to industry concerns,” the text went on.

According to its report, Amazon’s lobbying against ePrivacy focused on pushing the Parliament for “less restrictive wording on affirmative consent and pushing for the introduction of legitimate interest and pseudonymization in the text” — which, as the news outlet observes, are legal grounds that would give companies greater scope to collect and use people’s data.

Amazon’s motivation for wanting to degrade the level of privacy protections wrapping Europeans’ data is clear when you consider the $42M fine it was slapped with last year by a single EU data protection watchdog (France’s CNIL) under current ePrivacy laws. Its infringement? Dropping tracking cookies without consent.

Amazon’s digital advertising business isn’t as massive as Google’s (although it is growing). But the ecommerce behemoth has plenty of incentive to track and profile Internet users — not least for targeting them with stuff for sale on its ‘everything store’.

A beefed up ePrivacy could put limits on such tracking. And evidently Amazon would prefer that it didn’t have to ask your permission for its algorithms to figure out how to get you to buy more stuff on Amazon.fr or .de or .es and so on.

But what about Apple? It’s certainly unusual in the list as a (rare) tech giant that’s built a reputation as a champion of user privacy.

Indeed, back in 2018, Apple’s CEO personally stood in Brussels lauding EU privacy laws and calling for the region’s lawmakers to go further in reining in the ‘data industrial complex’, as Tim Cook dubbed the dominant strain of adtech at the time. So seeing Apple’s name in an anti-privacy lobbying list is certainly surprising.

Asked about Google’s memo, Apple did at least respond: It told us that no Apple representative was present at the August 6, 2019 meeting.

However it did not provide a broader public statement distancing itself from Google’s claim of joint work to forestall consumer privacy efforts. So Apple’s limited rebuttal leaves plenty of questions about the aligned interests of tech giants when it comes to processing people’s data.

Zooming out, the ruinous damage to people’s privacy* which flows from the dominance of a handful of overly powerful Internet giants has been a slowly emerging thread in antitrust cases against big tech. (See for example: The German FCO’s pioneering litigation against Facebook’s superprofiling which Europe’s top court is set to weigh in on.)

Unfortunately, competition regulators have generally been slow to recognize privacy abuse as a key lever for Internet giants to unfairly lock in market dominance. Although the penny does finally seem to be dropping.

Just last week, a report by Australia’s ACCC highlighted how alternative search business models, which don’t rely on tracking people, are being held back by Google’s competitive lock on the market — and, crucially, by its grip on people’s data.

Effective remedies for breaking big tech’s hold over consumers and competition, alike, will therefore require public authorities to grasp the key role privacy plays in protecting people and markets.

*not to mention the other human rights that privacy helps protect

Mind the ePrivacy gap

The EU’s proposed update to ePrivacy is aimed at extending the scope of existing rules attached to the privacy of electronic communications so that they cover, among other things, comms and metadata that’s travelling over Internet platforms; not just message data that telcos carry on their mobile networks.

That change would put plenty of big tech platforms and products in the frame. Google’s Gmail, for example.

And it’s interesting to note that, later in the same year the ePrivacy proposal was presented, Google announced it would stop scanning Gmail message data for ads.

But of course Google hasn’t stopped tracking user activity across the lion’s share of its products or the majority of the mainstream web, via data-harvesting tools like Google Analytics, Maps, YouTube embeds and so on.

Rules that cover how Internet users can be tracked and profiled for behavioral ads are also in scope in the Commission’s ePrivacy Regulation proposal. And that could present a far more existential threat to an adtech giant like Google — which makes almost all its money by tracking and profiling Internet users via their digital activity (and other data sources), to calculate how best to sell their attention to advertisers.

Another consumer friendly goal for the ePrivacy proposal is to simplify the EU’s much hated cookie consent rules — potentially by reviving a ‘Do Not Track’ style mechanism where consent could be pre-defined ahead of time and signalled automatically via the browser, doing away with loads of annoying pop-ups.
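As a rough illustration of how a browser-level signal could stand in for per-site pop-ups, here is a minimal sketch of a plain Node.js HTTP server in TypeScript that honours the old “DNT: 1” header or the newer Global Privacy Control “Sec-GPC: 1” header; the responses are placeholders and nothing here is prescribed by the ePrivacy proposal itself:

```typescript
import { createServer } from "node:http";

// Minimal sketch: treat a browser-level privacy signal as a pre-declared choice,
// so this (hypothetical) site never needs to show a cookie consent pop-up.
createServer((req, res) => {
  const optedOut = req.headers["dnt"] === "1" || req.headers["sec-gpc"] === "1";

  if (optedOut) {
    // Respect the signal: set no tracking cookies, serve only contextual ads.
    res.end("contextual ads only");
  } else {
    // No signal received: fall back to whatever consent flow the site normally runs.
    res.end("default consent flow");
  }
}).listen(8080);
```

The design point is that the choice travels with every request automatically, so the user states it once in the browser rather than once per website.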

So blocking progress on that front has basically consigned Europeans to years of tedious clicking which — all too often — still leaves people with no real choice over how their data is used, given how widely the EU’s consent rules are flouted by adtech.

There’s other stuff in the ePrivacy proposal too. EU lawmakers want the update to cover machine-to-machine comms — to regulate privacy around the still nascent but rapidly expanding connected devices space (aka, IoT or the Internet of Things), in order to keep pace with the rise of smart home technologies — which offer new and intimate avenues for undermining privacy.

The scope of the regulation does, therefore, touch other industries beyond pure adtech. And it’s true that big tech hasn’t been alone in opposing ePrivacy. (Big telco has lobbied fiercely against it too, for example, basically in the hopes of getting the same free license over data as big adtech.)

But a 2018 report by the NGO Corporate Europe Observatory, which tracks regional lobbying, fingered digital publishers and the digital advertising lobby as the biggest guns ranged against ePrivacy — pointing to frequent collaboration between them (“As print media circulations fall, online advertising has become increasingly important to publishers”).

This collaboration saw publishers stepping up “fear-mongering” hyperbole against ePrivacy — claiming it would mean the ‘end of the free press’.

There was no nuance in this narrative. No recognition that other forms of (non-tracking) advertising are available. And no air time for the salient point that if tracking ads were not the dysfunctional ‘norm’, publishers could recover the value of their own audiences — rather than having them arbitraged by platform giants like Google (now facing antitrust litigation on both sides of the Atlantic for how it operates its adtech) and the adtech middlemen enabled by this opaque model. Publishers have essentially been used as a table at which others feast, and left with the scraps.

But the problem is that big (ad)tech’s influence extends to selling the notion of market dominance via a business model that views people as attention nodes to be data-mined and manipulated for profit.

This means it’s also ‘advertising’ tracking as a business model for other industries to follow — providing newly digitizing (or, in the case of publishers, revenue-challenged) industries with an incentive to lobby alongside it for lower consumer protections in the hopes of cutting a slice of a Google-sized pie.

Publishers’ ‘share’ of the digital ad pie is of course nowhere near Google-sized. Their ad revenues have in fact been declining for years, which has led to the rash of paywalls now gating previously free content as scores switch to subscription models. (So much for big adtech’s other cynically self-serving claim that the data-mining of Internet users supports a ‘free and open’ web; not if it’s quality information and professional journalism you want vs any old bs clickbait…)

Beyond publishers, the data harvesting potential of ‘smart’ things — be it a connected car or an in-home smart meter or doorbell — means many traditionally mechanical products (like cars) are becoming Internet-connected and increasingly data-driven. And with that switch comes the question of how these industries and companies will treat user data.

Big adtech is ready with an answer: Luring others to conspire against privacy by adopting surveillance so they too can build lucrative ad targeting businesses of their own. (Or, well, plug into big adtech’s data-feeder systems to fuel its self-serving profit machine).

And since adtech giants like Google have faced almost no regulatory action in the EU over the privacy apocalypse of their mass surveillance, other industries that are just starting to fire up their own data businesses could almost be forgiven for thinking it’s okay to copy-paste an anti-privacy model. So where GAFAM lobbies, car makers, publishers and telcos are willing to follow — fondly believing they are following the money.

Fondly — because the aforementioned antitrust suits suggest big adtech’s collusion against privacy extends to anti-competitive measures that have actively prevented others from getting a fair spin of the dice/slice of the pie. (See, for example, the ‘Jedi Blue’ allegations of a secret deal between Google and Facebook to rig the ad market against publishers and in their own favor.)

So, well, others lobbying against privacy alongside big tech risk looking like GAFAM’s useful idiots.

But wait, isn’t the EU supposed to have comprehensive privacy legislation? How is any of this even possible in the first place?

While it’s true the EU was (wildly) successful in passing the General Data Protection Regulation (GDPR) — which updated long standing data protection rules when it came into application in May 2018, beefing up requirements around consent as a legal basis for processing people’s data (for example) and adding some much needed teeth — the regulation has nonetheless been hamstrung by the presence of an outdated ePrivacy Directive sitting alongside it.

The Internet advertising industry has been able to leverage this legislative mismatch to claim loopholes for its continued heist of people’s data for ads. (And, unsurprisingly, disputes over the legal basis under which people’s electronic comms and metadata can be used have been a key ePrivacy sticking point.)

But the forces ranged against the update have seen an even bigger prize than just stopping ePrivacy: They’re trying to force the bloc into reverse and rip out protections the GDPR so recently cemented.

In a particularly ironic recent development, cookie consent friction has been seized upon and spun by ministers in the UK as suggested justification for downgrading the UK’s level of data protection — as Boris Johnson’s Conservatives look at diverging from the GDPR, post-Brexit.

Yet it’s not ‘simplified’ rules that are needed to fix cookie consent; it’s enforcement against systematic rule-breakers that have been allowed to make a mockery of the law in order that they can keep profiting by ignoring everyone’s right to privacy.

The long and short of this is that the damage to consumers and civic society across Europe as a result of regulatory inaction on adtech — and because of Google’s ‘successful’ lobbying against ePrivacy — looks staggeringly high.

While GDPR enforcement on adtech has been largely stalled these past three+ years, thanks (in no small part) to big tech’s forum shopping, if there had been an updated ePrivacy Regulation sitting alongside GDPR — adding enhanced transparency and consent requirements — it could have blasted tracking-based business models right out of EU waters years ago.

“The lobbying in context of ePrivacy was definitely of majestic size,” says Dr Lukasz Olejnik, an independent privacy researcher and consultant based in the EU. “This was readily felt by policymakers. Concerning the scale, it was the Olympic Games in lobbying.”

Asked what he believes has been the impact within the EU of the delaying of ePrivacy, Olejnik says the lobbying has likely slowed down data protection enforcement across the bloc and made outcomes more patchy, as well as holding up progress on further adapting the bloc’s rulebook to account for newer tracking techniques.

Or, to put it another way, the roadblock on ePrivacy has bought the adtech industry more time to get even further ahead of regulators — while simultaneously profiting off of its consentless exploitation of consumers’ data. (See for example, in the case of adtech giant Facebook, its shiny new placeholder about building “the Metaverse”, aka a new type of immersive, data-capturing infrastructure with which Facebook intends to stoke the fires of its ad engines far into the future under its new brand name ‘Meta’.)

“ePrivacy is currently not well aligned with GDPR. While in context of ePrivacy, all references to the previous Data Protection Directive are understood to be upgraded to GDPR, the current framework for protecting privacy in electronic communication is obsolete,” says Olejnik. “What is worse, some Member States still divide the data protection regulations, leaving enforcement of ePrivacy Directive to other, non-DPA [data protection agency] regulators. This means that one regulator is dealing with GDPR, and another with ePrivacy.

“It’s a problem mainly because the old ePrivacy Directive is not well adapted to GDPR. Meanwhile, the old ePrivacy fails to account for new tracking and ad targeting methods, as well as phenomena of microtargeting political content based on the processing of personal data.”

There are currently moves by a number of MEPs to push for adtech-related amendments to another legislative proposal — the Digital Services Act (DSA) — to try to tackle rampant adtech data abuse by outlawing behavioral advertising entirely (in favor of contextual ads that don’t require mass surveillance).

But Olejnik doesn’t see that as the ideal route to defeat surveillance-based advertising.

“It’s not a good idea to take the fight and ideas such as curbing of microtargeting to other unrelated regulations, such as the DSA, just because some actors were late to the process,” he argues. “It is much better to finalise ePrivacy as soon as possible, and then immediately start another process of updating.

“That’s how it should work from the point of view of EU law and regulation baking.”

The Council did finally adopt a negotiating position on ePrivacy earlier this year (February) during the Portuguese presidency — proposing a version of the text that, critics say, waters down protections for data and provides fresh loopholes for adtech to exploit.

That in turn could mean ePrivacy ends up creating legislative cover for surveillance-based business models — reversing the stronger protections earlier EU lawmakers had intended and further undermining the GDPR’s (already weak) application against adtech… In short, a disaster for fundamental rights.

Whereas, if the ePrivacy update had been passed at around the same time as the GDPR, Olejnik reckons it would have resulted in a more practically successful upgrade of EU data protection rules.

“There would be chances to synchronise the upgrade,” he suggests. “It would also avert the subsequent backlash due to ‘GDPR paranoia’, which instantly made everybody — including policymakers and the industry — ultra-careful and less happy about any changes in this domain. So the changes would be of a practical nature.

“It would also be simpler and more coherent to do compliance preparations to the two at the same time.”

The EU now has a whole suite of new and even more ambitious digital legislative proposals on its plate — some of which the Commission proposed at the end of last year — including the (aforementioned) DSA; and the (tech giant-targeting) Digital Markets Act (DMA); where it wants to legislate for ex ante powers to tackle so-called “gatekeeper” platforms in order to reboot competition in tipped digital markets.

And of course all this further stretches (limited) legislative resources, while EU lawmakers are still stuck trying to pass an already out-of-date ePrivacy update.

So when adtech giant Facebook’s chief spin doctor and former UK deputy PM, Nick Clegg, makes a big show of claiming regulators are simply too slow and bumbling to keep up with fast-paced technology innovators, it pays to remember how much resource tech giants spend on intentionally delaying external oversight and retarding regulation — including shelling out on a phalanx of in-house lawyers to maintain a pipeline of cynical appeals to delay any actual enforcement.

In Facebook’s case, this includes the claim that the Irish Data Protection Commission (DPC) moved ‘too quickly’ when it arrived at a preliminary decision on a complaint — despite the complaint itself being over seven years old at that point…

One EU diplomat, who we are not identifying because they were not authorized to speak on the record about the ePrivacy file, spoke plainly about its problems. “This has been going on for fucking ages,” the source told us. “Big tech is lobbying like crazy.

“Everybody says they’re lobbying like crazy. Of course they are trying to hold back these issues — like consent for cookies.”

“If you look at the Commission it has a team of around 40 people who deal with the DMA, just to work on that, but Google probably has, like, 300 lawyers already in Brussels — just to paint a picture there,” the EU source added.

Other Brussels chatter this person reported hearing included a recent incident in which Google had apparently briefed journalists that it doesn’t have a dominant market position in search engines — and said its lawyers were “able to prove it”.

Wild, if true. (NB: A Google search for its market share in Europe points to Statcounter data which pegs its share at 92.98% between September 2020 and 2021.)

Back in reality, the trilogue phase of the ePrivacy discussions has technically been ongoing for months, involving the Commission, the Parliament and the Council — with another legislative tug-of-war over the final shape of any text that would then need to be put to a vote.

Slovenia currently holds the rotating Council presidency, meaning it is steering the file and representing the other Member States in the talks.

We contacted the Slovenian representation to the EU to ask whether it has been lobbied by Google (or any other tech giants) on ePrivacy and to ask for minutes of any lobbyist meetings.

A spokesperson denied any meetings with Google — and flat denied the file had been held up by the Council.

“The file is not being blocked within the EU Council, on the contrary. The work is [being] very intense,” it said. “After the first political trilogue under the Portuguese presidency at end of May 2021, the Slovene Presidency held numerous and regular meetings with the European Parliament at the technical level to discuss open issues and prepare the file for the second political trilogue that is planned for November 18, 2021.”

“As for your question related to meetings behind closed doors with Google on ePrivacy in 2019, no such meetings took place,” the spokesperson said, adding: “The Slovene Presidency has put digital files on top of our priorities and this goes for ePrivacy as well. We are actively working and negotiating on the ePrivacy legislation in line with the mandate that was given to us by the EU Council.”

The country’s permanent representation to the EU does publish a “transparency register” of meetings with lobbyists.

However the data on its website only goes back to the start of this year, and the entries are severely limited — with, in the vast majority of cases, no details provided on the topics discussed.

The register for this year, for example, shows a meeting back in March with a Facebook representative, Aura Salla, the adtech giant’s managing public policy director and head of EU affairs — but there are no details about the meeting’s contents.

The Slovene list also records a number of meetings with third party business associations that are affiliated with tech giants and known to parrot their talking points. But, again, no detail is given about what the lobbyists were pressing for.

Portugal’s permanent representation to the EU also publishes a transparency register of lobby meetings with its ambassadors.

Similarly, though, its list is partial — only dating back to 2020 (with no details provided on any of the listed meetings).

Portugal’s register also records a meeting between its ambassador and Facebook’s Salla.

‘Look at the size of our lobby network!’

Google’s public messaging about people’s information typically features at least one claim that it “cares deeply for user privacy and security”.

Google does certainly cherish your data. Of course it does. Your information is the fuel for an ad targeting empire that raked in $182.5BN last year.

Google Cloud generated a small slice of that ($13BN). Its ‘other bets’ division added a morsel too ($657M). But almost all of Google’s vast profits come from targeting advertising at eyeballs based on what it knows about the mind behind the peepers.

TechCrunch asked Google about the discrepancy between fine-sounding claims that fall from its execs’ lips — such as CEO Sundar Pichai telling US lawmakers last year that Google “deeply cares about the privacy and security of our users” (as he was being accused of destroying anonymity on the Internet); or the text of Google’s own privacy policy, where it writes that it: “work[s] hard to protect your information and put you in control” even as it applies labyrinthine settings that make it almost impossible to opt out of tracking and remain opted out — vs allegations in the States’ lawsuit of backroom dealings to derail European privacy legislation. 

In response, Google sought to divert attention by claiming other businesses were also lobbying against ePrivacy and, therefore, that it was “not alone” in opposing the update — using a similar tack to the strategy Facebook applies when Europeans talk about banning microtargeted ads altogether, or even just enforcing current EU laws against Facebook (per Clegg this would ‘kill SMEs’ and be “disastrous” for Europe’s economy).

Interestingly, the statement that Google’s spokesman sent us on its anti-ePrivacy lobbying contains a centerpiece reference to an open letter, released in May 2018 — which the tech giant fastidiously observes was signed by “56 business associations from multiple sectors — not just tech”, seemingly presenting a united front to urge EU Member States to apply the legislative brakes.

Here’s Google’s claim:

“The tech industry was not alone in raising concerns with the ePrivacy regulation as it was then drafted, as a broad range of European organizations — from news to automotive to banking to small business — raised their voices in multiple public and private statements on the same issues. In May 2018, 56 business associations from multiple sectors — not just tech — published an open letter asking for ‘more time’ to assess the draft’s impact ‘on all sectors of the economy’ given the simultaneous arrival into force of the GDPR.”

“We supported the industry in asking for time on ePrivacy so that we could all assess the impact of the GDPR and get that right, first. We also raised our concerns directly with policymakers in our meetings with them over multiple years,” Google’s statement goes on, apparently admitting to extensive lobbying.

The statement ends with a segue into a claim of (Google’s own) GDPR compliance — combined with the subtle suggestion that any privacy abuse would therefore be the fault of third parties that plug into its ad ecosystem (ohhai publishers!), as Google writes: “We’ve invested heavily in building our products to be private by design, secure by default, and compliant with Europe’s GDPR. As well as working on our own compliance, we’ve launched tools to support our partners’ efforts.”

Google’s claimed ‘compliance’ with GDPR ignores the fact that its adtech business is the subject of multiple complaints (not to mention wider antitrust investigations) in the region. Some of these complaints have spent years sitting on the desk of Ireland’s DPC — which continues to face accusations of impeding effective enforcement of the regulation. (The country’s low corporate tax economy has attracted scores of tech giants — so tech industry interests may be viewed as generally aligned with Irish interests.)

Google’s compliance claim also glosses over a $57M GDPR fine in France, at the start of 2019 (when its EU business hadn’t yet restructured to put users under the jurisdiction of Ireland on data protection matters) — a fine Google was handed for failing transparency requirements, meaning the consents it claimed hadn’t actually been legally obtained.

It also omits a $120M fine Google got at the end of last year under current ePrivacy rules — for dropping tracking cookies without consent (also from France’s CNIL).

It is blindingly obvious self-interest for Google to want to gut ePrivacy.

Nonetheless, it’s interesting the tech giant reaches for a fig-leaf defence for its lobbying that tries to dilute attention by claiming multiple other businesses feel the same way too. (Not to mention how it has specifically maneuvered publishers into an invidious position where they are supposed to shield its adtech empire from legal risk around privacy abuse, because Google requires these ‘partners’ to obtain impossibly broad ‘consents’ from users for the ad targeting that Google makes such a handsome return on…)

Sure, some businesses other than Google and the tech giants also don’t like ePrivacy.

But more missing context here is how big tech’s lobbying in Europe — pegged at eye-watering levels in recent years — has led to the creation of a sprawling, obfuscated lobby network where affiliated third parties are happy to parrot its talking points in public while, behind closed doors, accepting its checks.

A report this summer by two civil society groups, the (aforementioned) Corporate Europe Observatory and Germany-based LobbyControl, found Google topped the lobbying list of Big Tech big spenders in the EU — with Mountain View shelling out €5.8M (~$6.7M) annually on trying to influence the shape and detail of EU tech policy.

But that’s likely just the tip of Google’s policy influence spending in Europe.

It looks to be the same story for all of GAFAM: Facebook (€5.5M annually); Microsoft (€5.3M); Apple (€3.5M); Amazon (€2.8M) — although Google and Facebook, the Internet’s adtech duopoly, top the list of big spenders in the digital industry.

The report highlights how big tech’s regional lobbying relies on an obfuscated network of third parties to “push through its messages” — including think tanks, SME and startup associations and law and economic consultancies — with platform giants exerting their influence by providing funding via sponsorships or membership fees.

Unsurprisingly, this policy influence network appears to have an outsized megaphone as a result of the wealthy company it keeps: Per the report, the lobbying budget of business associations lobbying in Europe on behalf of big tech “far surpasses that of the bottom 75 per cent of the companies in the digital industry”.

Or, put another way, the smallest and least well-funded players — individuals, civil society and businesses/startups with privacy-preserving approaches that don’t align with the surveillance model of big adtech — are having their views drowned out by adtech astroturfing.

“The rising lobby firepower of big tech and the digital industry as a whole mirrors the sectors’ huge and growing role in society,” the report notes. “It is remarkable and should be a cause of concern that the platforms can use this firepower to ensure their voices are heard — over countervailing and critical voices — in the debate over how to construct new rules for digital platforms.”

It’s notable that big tech also directly funds a number of startup associations — which may (otherwise) be seen by policymakers as dissociated from platform giants.

Google’s entry on the EU’s Transparency Register notes sponsorships of Allied for Startups, for example. And the advocacy organization’s website does at least disclose what it describes as “sponsorship” by a “Corporate Board” — which as well as Google includes Amazon, Apple, Facebook and Microsoft, among other platform and digital giants.

(“We are proud to be sponsored by our Corporate Board. It supports our activities, as selected by our members, but has no voting rights,” is the official Allied for Startups line on taking funding from tech giants while claiming to represent the interests of startups free from the influence of its platform giant funders.)

Links between big tech and a sprawling array of worthy sounding associations and business alliances are often far less plainly disclosed. The report emphasizes that affiliations are frequently fuzzed or not disclosed at all — thereby concealing “potential biases and conflicts of interest”.

“We still don’t have a complete picture of this network,” the report further warns. 

TechCrunch shared the list of signatories in the open letter cited by Google as cover for its anti-ePrivacy lobbying with the two transparency organizations to ask for their verdict on how ‘clean’ it is — i.e. from a Google and/or big tech influence point of view.

Both confirmed that many of the listed groups have some form of affiliation with Google and/or other tech giants.

Margarida Silva, a co-author of the aforementioned big tech EU lobbying report, said a “quick check” of the list of signatories turned up 28 organizations that count Google as a member or sponsor.

“With direct links to 28 out of 57 signatories, Google’s footprint is very clear here,” she told TechCrunch, adding: “The majority of the list is made up of lobby associations for big tech (i.e. EDIMA, CCIA, ITI, DigitalEurope, IAB), plus national level business associations that often have big tech as members.”

Silva also highlighted that some of the signatories are from other sectors (rather than tech) — including car manufacturers and publishers. But on that she pointed to the organization’s earlier findings, when it examined ePrivacy lobbying directly — and identified what she said was “a strong push by big tech [that was] matched with intense lobbying by telecoms and publishers and even other sectors who also want to benefit from surveillance advertising”.

So, again, big adtech’s wider influence is hard at work exerting an anti-privacy pull that’s redefining the center of gravity for other industries.

When we raised Google’s comments about working “hand in hand” with other tech giants in its lobbying against privacy, Silva also suggested that “coordination seems very likely”.

“GAFAM are members of the big tech lobby groups (e.g. EDIMA) who were themselves active pushing back against ePrivacy”, she noted, adding: “Usually these forums are useful for their members to discuss, agree shared approaches or delegate some lobbying activities.”

LobbyControl’s Max Bank also took a look at the list of signatories — and his pass turned up “at least 23” with an affiliation with Google — i.e. meaning the tech giant is “a member and most probably provides a member fee”.

Per the pair’s analysis, examples of Google-affiliated associations whose names appear on the open letter include industry-wide groups like Digital Europe, BusinessEurope, EDIMA and the Computer and Communications Industry Association — but also regional industry bodies like the Confederation of Danish Industry, Technology Industries of Finland, Digital Poland and Tech in France, to name a few.

Across Europe, big adtech’s reach has grown long indeed.

“Not all of these memberships are reflected in the EU transparency register,” Bank added. “It reflects again the in-transparent and powerful lobby network Google has in the EU.”

Talking of powerful lobby networks, it’s instructive to return to Facebook’s recent flashy rebrand and ‘pivot’ to “building the Metaverse”, as it describes its plan to extend its ad tracking model’s grip on people’s attention for decades to come.

This looks very interesting from a regional lobbying point of view because Facebook’s announcement included an explicit bung for Europe — with the adtech giant saying it would be hiring 10,000 highly skilled tech workers to develop the metaverse “within the European Union”. (And where exactly in the EU those jobs end up could be an instructive way to map Facebook’s regional influence network.)

The timing here is key — with EU lawmakers busy negotiating the detail of the next suite of EU digital regulations — including rules that are exclusively set to apply to big tech (aka, the DMA)…

So the question for the bloc is whether Member States’ narrow, local interests will continue to allow EU citizens’ fundamental rights to be traded away on the vague promise of jobs for a few techbros…

(Going on Facebook average wage data in the EU in recent years, it’s maybe offering to spend a little over €1.5BN on this spot of local hiring — or just a few hundreds of millions per year until 2026.)

A gaping hole in EU transparency

The statement Google sent us in response to public disclosure of its anti-ePrivacy lobbying ends with what sounds almost like a (micro)apology on being caught claiming to champion consumer privacy in public while simultaneously pressing lawmakers to stall progress on the self-same subject behind closed doors.

To wit:

“As lawmakers debate new rules for the internet, citizens expect companies to engage with legislative debate openly and in ways that fully account for the concerns of all of society. We know we have a responsibility to take this understanding into our work on internet policy, and consistently strive to do so.”

But, essentially, even Google’s own assessment of how it operates is an expression of never actually achieving full accountability. Which also looks instructive of how big tech works.

Meanwhile, the ePrivacy Regulation remains undone — more than four years since the Commission’s original reform proposal (all the way back in January 2017).

The blockage has centered on the Council — the EU institution made up of ministers from the 27 Member States’ governments — which has spent years failing to agree a negotiating position, meaning the Regulation couldn’t move through to discussions with the Parliament to arrive, very likely amended, at some kind of consensus and ultimate adoption.

An EU source at the Council declined to comment on the States’ lawsuit’s allegations of Google celebrating success in stalling ePrivacy, saying only: “Negotiations on this regulation are ongoing, and I cannot provide you with more information or comments.”

A European Commission spokesman also declined to comment.

But the EU’s executive, which was responsible for drafting the original proposal, told us it stands by it — and also sounded a warning against deviation from core objectives, writing: “As far as the Commission is concerned, we stand by our proposal and remain committed to supporting the European Parliament and the Council in the trilogues to find a compromise, in keeping with the objectives of the Commission’s proposal.”

There were disagreements in the Parliament over ePrivacy. But MEPs did arrive more quickly at a negotiating position. So the real culprit for stalling ePrivacy is the Member States.

The European Parliament’s rapporteur on the ePrivacy trilogues, Birgit Sippel, declined to comment on Google’s lobbying against the file. But we also contacted a couple of shadow rapporteurs — who were more open in their views.

MEP Sophie In ‘t Veld said that while lobbying transparency around EU institutions has improved in recent years there is still a major blindspot when it comes to corporate influence ops targeting Member States’ governments directly.

“The lobbying to national governments is invisible,” she told TechCrunch — dubbing it “a big gaping hole in transparency.”

“In a way it’s not surprising,” she said of ePrivacy. “There are big interests at stake, everybody’s always trying to influence the lawmakers — it’s interesting to see how worried [big tech] are up to the point that they feel that they have to join forces but it also confirms what we are always saying that the Member States are much more susceptible to this kind of corporate and industrial interest.”

“[ePrivacy] has been stuck in Council for a very long time and it’s always the same problem,” In ‘t Veld added. “The European Parliament is of course made up of different political groups, political families, political convictions — but ultimately we always find a common line. In the Council you never know, it’s opaque… it’s not even related to political color but they’re just a lot more susceptible to corporate and industrial lobbying for reasons I fail to grasp.

“They always, systematically, take a line which is closest to big industry, to the big international corporations, big tech, American big tech — which I find even more surprising. But that’s what they do systematically. So the fact they are being successfully lobbied is not surprising.”

She pointed out that the situation was the same when the EU was negotiating the GDPR — and other pieces of pan-EU legislation, like the law enforcement directive.

So the lobbying itself is nothing new (even if the scale keeps stepping up).

However, given the blistering pace and iterations of technology change — and the market power that has accrued to a handful of the biggest data-mining giants — legislative logjams affecting the passage of digital regulations start to look like a fundamental crisis for the rule of law, as well as for Europeans’ fundamental rights.

“GDPR in its final version was not what big tech had in mind. So the European Parliament is doing its job — but it’s very annoying that we always have to push back against the Council,” said In ‘t Veld, adding that perpetual stalling is unfortunately a common Council tactic.

“Their tactic is to not actually negotiate or debate — they just stall, they just say oh we can’t agree,” she said, adding: “There are so many files which have been blocked in Council for years — in some cases ten, 15 years — they just block, it’s their tactic, rather than trying to find solutions.”

Again, though, in the digital sphere this delaying tactic looks particularly concerning.

And given the huge vested interests ranged against other nascent EU digital regulations, like the DSA and DMA, how can the bloc confidently claim it can regulate Internet “gatekeepers” — when it can’t even stop big adtech’s lobbyists from stalling its own lawmakers?

MEP Patrick Breyer, another rapporteur on the ePrivacy file, was equally withering in his assessment of the ePrivacy situation — saying that while trilogue negotiations have started, the Council has “succumbed to lobbying to a degree that rather than accept this it would be far better to abandon the reform altogether”.

“Industry (including the ad business and publishers) and national governments have colluded to block ePrivacy rules [with] which the European Parliament wants to ban surveillance tracking walls and eliminate the cookie banner nuisance by making browser signals mandatory, among other things,” he told TechCrunch.

“This year Member States have adopted a position which doesn’t deserve to have privacy in its name. I have more on this dossier on my homepage.”

“Shame on national governments for succumbing to this lobbying,” Breyer added. “The online activities of an individual allow for deep insights into their (past and future) behaviour and make it possible to manipulate them. Users have a right not to be subject to pervasive tracking when using digital services.”

The reason why lobbying is “so easy to do is precisely because Council members refuse to publish lobby meetings (unlike, to some degree, Parliament and Commission)”, he added.

With lines into such an extensive third party influence network in Europe, and so many opaque avenues of approach for tickling friendly national governments until they adopt helpful policy positions — or else spend years refusing to adopt a position at all — it’s not hard to see how Google bought its business years of extra profits by delaying ePrivacy.

Postcards from Pichai’s European tour

Without full transparency into both the number and content of lobby meetings between EU Member States and tech giants we are left to speculate on how exactly adtech giants like Google went about derailing legislative progress.

Perhaps by targeting certain ‘friendly’ governments — dangling the prospect of a little local investment, either in tech infrastructure or jobs (or both), in exchange for not ‘rushing’ (aka stalling) the negotiations.

On this front it’s instructive to look through press photography of the Google CEO from 2018 and 2019 — when Pichai took the time to personally tour a number of European cities, and can be seen in conversation with heads of state, including France’s president Emmanuel Macron, whom he met in Paris in January 2018. (In the below shot at least, Macron does not look very friendly though.)

There is also an intimate tête-à-tête with Poland’s prime minister, Mateusz Morawiecki, in what was surely a very chilly Warsaw in January 2019.

In another in-person visit, Pichai held a joint press conference with a beaming Finnish prime minister, Antti Rinne, in September 2019 in Helsinki.

In Helsinki the Google CEO announced a plan to invest €3BN to expand its data centers across Europe over the next two years, supporting a total of 13,000 full-time jobs in the EU per year. The Google announcement also trailed the construction of more than €1BN in new energy infrastructure in the EU, including a new offshore wind project in Belgium; five solar energy projects in Denmark; and two wind energy projects each in Sweden and Finland, according to press coverage at the time.

Which does rather smell like ‘pork barrel politics’, big tech style (i.e. with glossier press photography).

 

[Gallery: press photos of Pichai’s meetings with European leaders]

Pichai also made it to Berlin in January 2019 to cut the ribbon on a new policy office for Google Germany.

Press pics show him standing smiling alongside Philipp Justus, Google’s VP for Central Europe, and Annette Kroeber-Riel, its senior director of public policy and government relations — ahead of an official opening that night which Berlin’s mayor, Michael Müller, was slated to attend.

We’re sure the canapés and cava flowed freely.

LobbyControl’s Bank says public criticism has forced Google to be more open about its “in-transparent” lobbying — highlighting a campaign the organization ran last year in Germany calling for the tech giant to publish its local lobby network. (He said Google did make some disclosures — but only after that public pressure.)

The lobby network on Google’s EU transparency register was also only included after public criticism, per Bank.

In a blog post about its campaign last year, LobbyControl noted ongoing disclosure limitations that prevent the full picture of Google’s regional influence from being seen. “In its answer, Google lists the organizations of which the group is a member in Germany. This ranges from the Atlantic Bridge to the Federal Association of German Startups and the Digital Association Bitkom to the Economic Forum of the SPD and the Economic Council of the CDU. For Europe, Google refers to its entry in the European transparency register,” it wrote.

“However, we also asked Google about the organizations that are financially supported by the company. Unfortunately, we have not received an answer to these questions. Google continues to deny comprehensive transparency of its lobby network in Germany and the EU.”

It also noted that in the US Google publishes what it describes as “a more comprehensive list” vs its influence ops in Europe.

“There the company lists 94 trade associations and member organizations as well as 256 ‘third party organizations’,” it wrote. “In the US, there are on average 2.5 other organizations for every member organization that Google supports financially without membership.

“For Germany and Europe, too, we can assume that, in addition to the disclosed memberships, there are also numerous organizations that receive money from Google. Google keeps this information under lock and key.”

TechCrunch contacted a number of Member States’ permanent representatives to the EU to ask for a response to Google’s memo about its “successful” lobbying to freeze ePrivacy.

The permanent representations to the EU of France, Germany, Italy, Spain, Finland, Poland, Sweden and Denmark did not respond to questions about their position on the file — nor did they confirm whether Google had lobbied them (directly or indirectly) to stall the legislation.

In fact we got no response to these requests at all.

Some representations do publish (partial) lists of lobbying meetings, as we have noted. But this is barely even a sketch, as big tech can route around those limited disclosures by approaching national governments locally — either directly or through their vast influence network of third parties.

Sometimes — presumably when a giant like Google feels a particular piece of legislation poses enough of a risk — it might even send in its own CEO to personally petition a government or head of state. Divide and conquer as they say.

EU digital rules in the deep freeze?

The adtech influence network across Europe is quite the thing to behold, even just looking at the (partially) visible tip.

An iceberg seems an appropriate visual metaphor for what Google and other tech giants have been busily developing behind closed doors across the EU to put a deep freeze on legislation that could disrupt their surveillance capitalism in a region with ~450M pairs of eyeballs — and the world’s most well-established set of digital privacy rules (at least on paper).

Politico‘s report last year on Amazon’s lobbying memo described the document as offering “a snapshot of the company’s modus operandi in the EU” — highlighting references in the text to how it channels its positions through “a range of different lobby groups including tech groups CCIA and DigitalEurope, as well as marketing group FEDMA”. All of which now sounds very familiar.

“Amazon lobbyists said in the document that they would focus on lobbying Council telecoms attachés,” the report went on, suggesting another link between the interests of US tech giants and European telco giants (after all, US telcos don’t have the same regulatory limits on what they can do with users’ data). “They said that they had met with ministries in Madrid, Rome and Paris and noted that Spain was ‘now openly outspoken against the proposal and aligned with our views’ and that Italy was likely to follow suit.”

So the playbook for big tech to play EU policymakers off against each other looks firmly established. Even embedded.

That does not bode well for the passage of a suite of ambitious new EU digital regulations — from the aforementioned DSA and DMA to an Artificial Intelligence Regulation which will provide controls for high risk applications of AI; or broad plans to expand the rules around data resharing (with claimed privacy protections); or even planned legislation for online political ads, given the risks that data-driven adtech can pose to democratic processes.

Still, revelations that GAFAM has been privately celebrating its success at derailing updates to the bloc’s rules may not sit easily with all EU Member State governments.

France and the Netherlands have — at least in public pronouncements — broken ranks somewhat, pressing for Brussels to have greater powers to rein in big tech for example.

There is also pressure building within a number of Member States over the need for a competitive reboot of digital markets — to ensure a better outcome for consumers and competition. So even if EU legislation fails, big tech may face a patchwork of rules clipping its wings at a local level.

On this front, Germany is ahead: It has already updated its digital rulebook to bring in ex ante powers and the FCO has a raft of procedures open to assess tech giants’ market power — including into Google and Facebook. If it confirms they have what the law dubs “paramount significance for competition across markets” the next steps would be bespoke antitrust interventions.

France has also been banging the drum for national and European ‘digital sovereignty’ in recent years — and its competition watchdog stung Google with a $268M fine over adtech abuse this summer (albeit extracting interoperability commitments which Google will probably be more than happy to comply with if they further entrench its surveillance model).

France’s national stance aligns with rhetoric from the (French) EU commissioner, Thierry Breton, who is responsible for the bloc’s internal market policy — and is also fond of talking up la souveraineté — and/or warning of the balance of power between different global blocs “hardening”.

And, this summer, French president Macron had some bold-sounding words about breaking up US tech giants — making the remarks in front of a tech audience, no less.

So one looming development for the ePrivacy file — which could augur at least a shift of tone — is that France takes over the rotating Council presidency from Slovenia next year.

France has named digital regulations as among its priorities during the six-month stint when it will be steering activity. So there may be a small window of opportunity to unchoke big tech’s choke-hold on ePrivacy. Although France’s list of claimed priorities is long, and there’s scepticism over how much it will actually be able to get done.

In ‘t Veld, for one, isn’t holding her breath — saying her expectations of the rotating presidency mechanism are “limited”.

She suggests a better solution for fixing blockages in the EU’s legislative firepower might be to strategically withhold portions of the budget — as a tool to concentrate Member States’ minds on, well, the common good of all Europeans. (Though she says that such talk can make some of her fellow MEPs “nervous”.)

“I’m very tired of the way that the Council operates. They have to start taking responsibility,” she adds. “Haven’t we learnt anything from the last couple of years? Look in the last weeks — just today — about Facebook. Think about Cambridge Analytica. Haven’t they learnt anything?”

While adtech giants splash millions on bending the ears of EU policymakers, civil society organizations don’t have even a fraction of those resources to marshal to defend Europeans’ fundamental rights from such self-interested attacks.

It is, in short, not a fair fight. But it’s a fight for the future of so much — perhaps for the very substance of society itself, as connectivity inexorably expands the net of data-driven surveillance — that losing it doesn’t even bear thinking about.

Privacy is, on the one hand, inherently personal — which can make articulating its fundamental importance a challenge, since it is so very multifaceted and multidimensional. But, hey, you sure as hell miss it when it’s gone. Collectively, it is also a shield that helps keep people and communities aligned towards a civilized, common goal by eliding difference in a way that fosters good will, social cohesion and consensus.

Again, if we’re all broken out so we can be badged and branded, manipulated and jerked around — and, sure, set against each other if it happens to sell more stuff — aka, atomized by adtech — it can quickly become a very different, polarizing story. One that doesn’t have a happy-looking ending for democratic civilization.

And with giants like Facebook already making a pitch to co-opt interoperability to its own ends by embedding the surveillance model at the core of an ‘ad-ternative‘ immersive digital reality (‘the metaverse’), time is fast running out to save the European model — of fundamental rights and freedoms — from the deep-pocketed corporate lobbyists now ranged against it.

Remember: If the shiniest version of the future big adtech has to sell comes straight out of a sci-fi dystopia — and stars Nick Clegg trying to buy off the European model for ‘10,000 jobs’ — it really is time to wake up and show big tech’s lobbyists where to find the door.

Australia could be next to mandate a choice screen in a bid to break Google’s dominance of the search market.

The Australian Competition and Consumer Commission (ACCC) is recommending it is given the power to “mandate, develop and implement a mandatory choice screen to improve competition and consumer choice in the supply of search engine services in Australia”.

A new report by the country’s competition watchdog concludes that interventions are needed to boost competition in the local search engine market and address harms flowing from Google’s circa 94% marketshare of search in Australia, such as barriers to entry for other competitors and the risk of lower quality services with what it dubs “undesirable features” — like more sponsored content vs organic search results…

Google’s grip on the local search market is also hampering new business models from emerging — such as subscription options that don’t rely on ads (and data-mining users) to monetize Internet search.

The report gives the example of Neeva, a new, subscription-based search engine which advertises itself as “the only ad-free private search engine” and (unlike Google and most other search engines) has no ads or affiliate links in search results — thereby offering “a different value proposition that consumers may desire”, in the regulator’s assessment.

“Google’s foreclosure of key search access points through the arrangements discussed in this Report limits the ability of these businesses to grow, and consumers’ exposure to new and potentially attractive business models,” the ACCC concludes.

To tackle such problems, the regulator wants to be able to implement a search choice screen on Android devices which will provide users with a selection of options vs Google’s “predetermined default” — with the regulator saying that such a mechanism “can improve the ability of rival search engines to reach consumers”.

However the ACCC also wants to be responsible for developing the criteria around the application of a choice screen to specified service providers — which its report notes should be “linked to the provider’s market power and/or strategic position”.

That looks like a key qualification given Google has been offering a search choice screen in the European Union to users of Android smartphones for around three years — and there has been no notable shift in its regional dominance of search.

That can be blamed on the EU leaving it up to Google to determine how to ‘remedy’ the $5BN ‘cease & desist’ antitrust enforcement it issued against Google over Android back in 2018.

Google responded by adding a ‘choice’ screen of its own devising in the EU — which search rivals quickly decried.

They were especially incensed about a sealed bid auction model Google opted for — selling slots on the choice screen to rivals, without needing to pay to continue displaying its own search engine on the very same screen of course.

This resulted in Android choice screens that were filled with an uninspiring parade of Google-style ad-targeting wannabes or else Google itself as the offered ‘choice’ — almost entirely excluding pro-privacy or not-for-profit alternatives who were unable to outbid data-mining rivals.
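To make the mechanics concrete, here is a minimal sketch of a sealed-bid slot auction of the general kind described above. The slot count, pricing and bidder names are hypothetical illustrations rather than Google’s actual EU implementation, but they show why a pay-to-play screen tends to favour whoever monetizes users’ data most aggressively:

```python
# Minimal sketch of a sealed-bid choice-screen auction of the general kind
# described above. Slot count, pricing and bidder names are hypothetical
# illustrations, not Google's actual EU mechanics.

def run_slot_auction(bids, slots=3):
    """Pick the highest sealed bids for the paid slots.

    bids  -- dict of search engine name -> bid per user selection (EUR)
    slots -- number of paid slots on the choice screen
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:slots], ranked[slots:]

# Hypothetical bids: ad-funded engines can afford to bid more per selection
# than pro-privacy or non-profit rivals, so they tend to win the slots.
bids = {
    "AdFundedEngineA": 0.45,
    "AdFundedEngineB": 0.40,
    "AdFundedEngineC": 0.38,
    "PrivacyEngine":   0.10,
    "NonProfitEngine": 0.05,
}

winners, losers = run_slot_auction(bids)
print("On screen (paid):", winners)   # the highest bidders
print("Excluded:", losers)            # typically the pro-privacy options
print("Also on screen, for free: Google")
```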

The entirely predictable result was that Google has maintained its iron grip on Europe’s search market. And laughed all the way to the bank.

Eventually the European Commission did step in — and, this summer, Google announced it would drop the auction model and instead let search rivals appear for free.

However alternative search engines, such as the non-tracking DuckDuckGo, remain highly critical of the remedy, arguing that for a choice screen to truly work regulators must tackle the whole suite of damaging defaults Google applies to lock users to its products.

The EU implementation, for example, only applies to new Android devices (and not also to Chrome), and only on set-up or factory reboot of a device. DuckDuckGo’s counter suggestion is that regulators must make search competition a single click away at all times.

Australia’s watchdog appears aware of such pitfalls — writing: “The development of a choice screen and its implementation should be subject to detailed consultation with industry participants and user testing, and there should be careful consideration of its interaction with other measures proposed in this Report.”

Notably, it’s leaning towards the choice screen applying to both new and existing Android mobile devices — which is immediately a considerably broader application than Google chose to apply in the EU (giving itself a free pass to keep applying its damaging defaults on the hundreds of millions of already in-play Androids).

The ACCC also stipulates that an Australian Android choice screen should be free for search engines to participate in. So the lesson of Google’s self-serving EU auction model has clearly trickled down under.

Australia’s watchdog also wants wider powers to be able to effectively tackle the lack of competition in search — suggesting it will need measures to restrict a provider (“which meets pre-defined criteria”) from tying or bundling search services with other goods or services; and perhaps also limit the ability of a provider to pay for certain default positions.

The report also floats the idea of “potentially mandating access to specified datasets for rival non-dominant search engines”.

The regulator suggests rivals may need access to Google’s click-and-query data — “and potentially other datasets” — though it adds that this would need to be subject to “extensive consideration of privacy impacts, and careful design and ongoing monitoring to ensure there are no adverse impacts on consumers”.

“While there are a number of factors that contribute to the quality of a search engine, click-and-query data is a critical input,” the report goes on. “In limiting rivals’ ability to reach consumers at scale, Google has ensured it maintains access to an unrivalled dataset, allowing it to continuously improve the quality of its search results in a way that its competitors cannot, and reducing the extent to which rivals are able to scale and compete against Google.

“This has flow on effects for the ability of rival search engines to monetise their services, effectively self-reinforcing Google’s dominance in search. Google’s position means it can offer greater sums to suppliers of browsers and OEMs to be the default search engine. Rivals are thereby foreclosed from accessing users and the necessary click-and-query data to improve their search engine service, further raising barriers to entry and expansion, and extending and entrenching Google’s dominance in search.”
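The flywheel the regulator describes, in which usage generates click-and-query data, data improves quality and quality attracts more usage, can be sketched as a toy simulation. The starting shares and the quality function below are purely illustrative assumptions, not figures from the ACCC report:

```python
# Toy simulation of the self-reinforcing data loop described above: each
# period, engines accrue click-and-query data in proportion to their share,
# and users drift toward whichever engine the data advantage makes better.
# All numbers and the quality function are hypothetical.

shares = {"Incumbent": 0.90, "Rival": 0.10}   # illustrative starting shares
data = {"Incumbent": 0.0, "Rival": 0.0}       # accumulated click-and-query data

for period in range(5):
    for engine, share in shares.items():
        data[engine] += share                  # data accrues with usage
    # Toy assumption: perceived quality scales super-linearly with data,
    # so a data advantage compounds into an ever larger share advantage.
    quality = {e: d ** 2 for e, d in data.items()}
    total = sum(quality.values())
    shares = {e: q / total for e, q in quality.items()}
    print(period, {e: round(s, 4) for e, s in shares.items()})

# The incumbent's lead compounds each period while the data-starved rival
# falls further behind -- the dynamic the ACCC report describes in prose.
```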

While the ACCC is considering some meaty interventions against Google’s dominance of search, actual action appears some way off — with a fifth interim report, that’s not due until September 2022, set to consider a framework for the proposed rules and powers, along with a broader assessment of the need for ex-ante regulation for digital platforms — “to address common competition and consumer concerns we have identified across digital platform markets”.

The reports are part of work ordered by the Australian government last year — when it instructed the regulator to conduct an inquiry into digital markets and report back every six months. However the final report is not due until March 2025.

The EU of course has already proposed ex ante rules to tackle platform giants, under the Digital Markets Act, which was presented at the end of last year. Though it is likely years off being adopted.

The UK is also working on a competition reform that will give regulators pre-emptive powers over tech giants who are deemed to have strategic market power. But, again, a dedicated Digital Markets Unit is pending a suite of legislative powers.

While Germany — which is ahead of the pack — updated its digital regulations at the start of this year, bringing in ex ante powers to tackle platforms. Its competition watchdog, the FCO, is in the process of assessing the market power of a number of tech giants to determine how it might intervene to support competition.

Germany’s suite of probes includes a close look at Google’s News Showcase product, through which the tech giant licenses news publishers’ content by inking closed-door commercial deals which risk playing large publishers off against smaller ones, among other competition concerns.

Australia has already passed legislation in this area — putting a news bargaining code into legislation back in February that applies to both Google and Facebook.

Reached for comment on the ACCC report, Google sent us this statement attributed to a spokesperson:

“People use Google Search because it’s helpful, not because they have to and its popularity is based on quality that’s built on two decades of innovation. Android gives people choice by allowing them to customise their device — from the apps they download, to the default services for those apps. Preinstallation benefits users by making it easier for them to use services quickly and easily. We are continuing to review the report and look forward to discussing it with the ACCC and Government.”

 

Surveys have long been used by marketing teams and other business decision makers to learn how customers tick. But they can be costly to put together, hard to run at scale, and, at the end of the day, are only as credible as the data that gets put into them. Today, a London startup called Attest, which has built a cloud-based, no-code, big-data solution that it believes provides an answer to those challenges, is announcing $60 million in growth funding, in the wake of record business growth in the last couple of years.

Jeremy King, the company’s CEO and co-founder, says that its machine learning-based approach is gaining traction against the many incumbent players in the field of online market research — there are hundreds of them ranging from Kantar and SurveyMonkey through to Qualtrics and many more — because it provides faster and more accurate results.

“The dark secret is that most online research has been very low quality,” he said in an interview, noting that if you took the same brief to five traditional market research organizations, you’d likely get five different sets of responses, not unlike taking one building brief to five architects. Attest’s ambition is to move away from that and build a much more consistent, and thus reliable, framework for market research.

“We are trying to make it as good as we can get it. It’s still not perfect but at least we are trying, and most online research frankly doesn’t try at all,” King said. He also described the company’s methodology as ‘suicidally transparent.’ It’s a pitch that indeed does sound pretty honest, and yet it seems to have struck a chord with a lot of big names. Its customer list includes Microsoft, Santander, Walgreens/Boots, Klarna, Brew Dr. Kombucha, Fabletics, eToro and Publicis, among others.

The investment is coming from returning investor NEA and Kismet, along with other unnamed backers, and Attest said that it brings the total it has raised to date to $85 million. Other investors in the company have included Oxford Capital and Episode 1 (which co-led a $3.1 million round in 2019).

Attest is not disclosing its valuation, but PitchBook data reveals that it was just under $273 million post-money in August of this year, when this latest round appears to have actually closed.

As you might expect from a startup working in the world of data, Attest has some pretty compelling data points of its own.

The startup has built out a massive database that is aggregated from hundreds of individual panels, user groups and more, altogether totaling 110 million consumers across 49 countries. Its promise is that any user (no tech expertise required) can create a survey within minutes to target any segment of users that it wants out of that bigger pool and will get responses back in 24 hours if not sooner.

Within those results, Attest guarantees the data in terms of response numbers and integrity using a complex set of algorithms as part of what King described as a 15-step process that starts with formulating questions, sourcing audiences, processing the data and providing it in visualizations that are useful to the person doing the research. It’s priced on a freemium model, where those on paid tiers basically buy credits, with one credit equalling one response per question. These credits work out at between 40 cents and 60 cents apiece.
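Working that pricing through, assuming a hypothetical brief of 10 questions sent to 500 respondents (our numbers, not Attest’s), the credit model looks roughly like this:

```python
# Minimal sketch of the credit pricing described above: one credit buys one
# response to one question, and credits reportedly cost roughly $0.40-$0.60.
# The survey size below is a hypothetical example, not Attest data.

def survey_cost(questions, responses, price_per_credit):
    credits = questions * responses          # one credit per response per question
    return credits, credits * price_per_credit

questions, responses = 10, 500               # hypothetical brief
for price in (0.40, 0.60):
    credits, cost = survey_cost(questions, responses, price)
    print(f"{credits} credits at ${price:.2f} -> ${cost:,.2f}")
# 5000 credits -> $2,000.00 at the low end, $3,000.00 at the high end
```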

King said that Attest does not rely on paying users, nor placing surveys in access gates, nor building campaigns that run on social media and are built on asking questions that are indirectly mapped on to your demographic (the dreaded ‘Which 1980s Brat Packer are you?’). These, King says, are gateways to low-quality data, since people usually only click on them to get them out of the way. Instead, it relies on those algorithms, which in part are designed to figure out which kinds of audiences are most useful and receptive to answering the questions you are asking. That being said, those building questions can also opt to work with actual humans at Attest if they need help figuring out useful questions.

Surveys and the bigger topic of “consumer engagement” have been something of a hot-button issue in the last several years, not least because of the roles that they potentially play in areas like data collection and how that data ultimately might be tied to a particular user. Some believe that Facebook, as well as other social media companies that have built their business models on engagement, have a lot to answer for in how they build experiences and tweak algorithms to surface more engaging content, irrespective of the quality or nature of the media in question.

King says that what Attest does is a big step removed from all of that: although full of complexity and technology at the back end, at the front end they are more old school and direct, aimed at figuring out, for example, whether Bertolli olive oil should finally come clean and admit it’s not Italian, but in fact Spanish in origin.

In what is estimated to be an $80 billion market, as data becomes more of a support for — and in some cases a replacement for — basic intuition in business decision-making, King said the company has seen a growing demand for more surveying on its platform, and that is why investors are interested, too.

“Today’s investment underscores our commitment to Attest and our belief in their stellar team and innovative technology, which is revolutionising access to high quality consumer insights for brands around the world at such an exciting scale,” said Colin Bryant, a partner at NEA, in a statement. “In the current climate, the need to tap into consumer behavior has never been higher, and we’ve seen how Attest has facilitated growth for brands across the pandemic. We foresee the demand for consumer data only getting stronger.”

“Attest is on a trajectory to overtake all incumbents in the market research space, and has its eyes firmly on the largest prizes,” added Asheque Shams, General Partner at Kismet. “We are a startup investment firm with a mission to help fast-growing tech companies reach their greatest potential, so we are immensely excited to partner with Jeremy and team.”

European Union lawmakers are mobilizing support for a ban on tracking-based advertising to be added to a new set of Internet rules for the bloc — which were proposed at the back end of last year but are now entering the last stretch of negotiations ahead of becoming pan-EU law.

If they succeed it could have wide-ranging implications for adtech giants like Facebook and Google. And for the holistic health of Internet users’ eyeballs more generally.

The move follows a smorgasbord of concerns raised in recent years over how such creepy ads, which use personal data to decide who sees which marketing message, can negatively impact individuals, businesses and society — from the risk of discrimination and predatory targeting of vulnerable people and groups; to the amplification of online disinformation and the threat that poses to democratic processes; to the vast underbelly of ad fraud embedded in the current system.

Gathering so much data about Internet users is not only terrible for people’s privacy (and wasteful from an environmental perspective, given all the extra data-processing it bakes in), they contend, but tracking opens up a huge attack vector for hackers — supporting a pipeline of data breaches and security risks, which in turn contravenes key principles of existing EU law (like data minimization).

So arguments in favor of a regional ban are plentiful.

On the flip side, adtech giants Facebook and Google make bank by mining people’s digital activity and using this surreptitiously sucked up information as a targeting tool to grab attention, darting eyeballs with what they euphemize as “relevant” advertising — meaning ads that use people’s own information to try to manipulate their behavior for profit.

The powerful pair can easily afford to spend millions lobbying against anything that threatens their tracking-based business models. And they have been doing just that in Europe in recent years, as lawmakers have been working on redrawing the parameters of digital regulations — along with the tacit (if not very publicly valuable) support of an opaque adtech middleman layer.

These are largely faceless (to consumers) entities that benefit from dynamics like the lack of traceability around current ad spending (oh-hi ad fraud!) and arbitrage of quality publishers’ audiences to sell ads against cheap filler content (bye-bye quality journalism!). But that’s hardly good news for consumers, society or for fair and honest competition, critics say.

While the arguments for banning microtargeting have been getting louder and stronger (with each passing scandal) for years, it’s fair to say that this remains something of a David vs Goliath battle — with individual rights, civil society, quality publishers and pro-privacy innovators on one side vs big adtech and the sprawling ecosystem of opaque data-traders and Internet content bottom-feeders that the current Facebook-Google duopoly gives succour to punching down hard on the other.

Simultaneously, high level competition concerns over Facebook and Google’s power over online advertising is driving increasing scrutiny and enforcement from antitrust watchdogs in Europe.

But the risk there is regulators could just end up cementing harmful microtargeting — such as by enforcing increased sharing of people’s data for ad targeting — instead of seeking to reboot the market in a way that’s both healthy for consumers and for healthy digital competition.

So, the next few weeks, really looks like crunch time for anyone rooting for a full reboot of surveillance-based business models.

MEPs in the European Parliament are set to vote on a number of committee reports that contain amendments to the Commission’s legislative proposals seeking to outlaw or restrict the practice — with a final plenary vote expected in early November.

Their target is the Digital Services Act (DSA) and the Digital Markets Act (DMA): Incoming EU regulations that contain a range of measures intended to level the playing field between offline and online commerce — by dialling up accountability on digital businesses and platforms; standardizing elements of governance; and seeking to enforce fairness of business dealing; in the latter case by applying a set of fixed rules to intermediating platform giants like Google and Facebook that play a gatekeeping role over Internet content and/or commerce.

The planned legislation contains a swathe of new rules for how Internet businesses will be able to operate in the EU. But the European Commission, which drafted the proposals, did not include an outright ban on surveillance-based ad business models.

That’s what a number of MEPs are trying to change now.

The Commission sidestepped including a ban in the draft legislation despite pressure from the parliament and other EU institutions to take a tougher line on microtargeting.

There have also been mounting complaints against tracking-based ads under existing EU privacy law (the General Data Protection Regulation; GDPR) — and a concerning lack of GDPR enforcement against adtech.

International backing for a ban on surveillance-based ads has also been growing.

The Commission itself has been actively grappling with related issues — like how to reduce online disinformation and protect EU elections from interference — and recently made an appeal to the adtech sector to do more to address problematic economic incentives driving the amplification of harmful nonsense.

But although it has said it will be beefing up efforts in those areas it continues to prefer what amounts to a process of adtech industry engagement vs laying down strict legal limits to prohibit such intrusive targeting altogether. So it’s been left to Europeans’ elected representatives to take a stand against microtargeting.

The next few weeks will be crucial to determining whether the bloc ends up taking the bold but — many now argue — necessary step of outlawing surveillance-based advertising.

Alternative forms of Internet advertising are both available and profitable, supporters of a ban say.

It’s also worth noting that even EU regulators only gave Google the green light to acquire Fitbit at the end of last year after gaining a concession from the tech giant that it would not use Fitbit’s health data for ad targeting — for a full ten years.

So if the EU’s own regulators decided they needed to bake in such extensive precautions before allowing another data-grabbing acquisition by big (ad)tech, it does suggest the fundamental problem here is adtech’s unfettered use of people’s data.

If the bloc outlaws microtargeting it would — conversely — provide clear impetus for the publishing industry to switch to less intrusive forms of advertising, while simultaneously creating an incentive for increased innovation around pro-privacy business models — a space where European startups are already among the world leaders.

So there’s a chance for the region to lead in both digital regulations and tech development.

Growing support for tracking-free ads

In recent months a coalition of MEPs, civil society organisations and companies from across the EU has been campaigning to end the pervasive tracking advertising industry that dominates the Internet — organizing under the banner of the Tracking-free Ads Coalition.

They are calling on interested parties to join the push to defend fundamental EU rights and counter tracking-based harms.

Although time is now short — with the window of opportunity for slotting a ban into draft legislation set to close in a few weeks’ (or even days’) time as the European Parliament has its say on the final shape of the regulations.

The Coalition argues that harms associated with surveillance-based advertising are simply too vast to ignore — whether it’s the core erosion of the fundamental right EU citizens have to privacy, or the impact on publishers whose audiences are being arbitraged by big tech and opaque adtech middlemen which in turn undermines quality journalism and supercharges ad fraud.

Media plurality is at threat, they warn.

The group also points out that viable alternatives already exist for funding online content, such as contextual advertising — used by the likes of pro-privacy search engine DuckDuckGo, which has been profitable for over a decade for example.

DuckDuckGo is one of 14 pro-privacy businesses that have intervened to lobby MEPs on the issue in recent weeks — along with Vivaldi, Fastmail, Conva, Proton, Tutao, Disconnect, Mojeek, Ecosia, Startpage & StartMail, Nextcloud, Kobler, Strossle International and Mailfence — who have written to the Committee on Internal Market and Consumer Protection (IMCO) to express their backing for banning tracking ads.

“In addition to the clear privacy issues caused by surveillance-based advertising, it is also detrimental to the business landscape,” they wrote in their letter to the IMCO committee which makes the case for pro-privacy alternatives to tracking.

“These practices seriously undermine competition and take revenue away from content creators. Anticompetitive behaviour and effects serve to entrench dominant actors’ positions while complex supply chains and ineffective technologies lead to lost revenues for advertisers and publishers,” they also argued, adding: “Although we recognize that advertising is an important source of revenue for content creators and publishers online, this does not justify the massive commercial surveillance systems set up in attempts to ‘show the right ad to the right people’.

“Other forms of advertising technologies exist, which do not depend on spying on consumers, and alternative models can be implemented without significantly affecting revenue. On the contrary — and that we can attest to — businesses can thrive without privacy-invasive practices.”

MEPs backing the Coalition’s calls for a ban on tracking ads are hoping to convince enough of their fellow parliamentarians to step in over the broad array of harms being associated with surveillance-based business models, including by debunking the economic claims that adtech makes in defence of tracking-ads.

The Coalition is shooting for an outright ban on microtargeting in the DSA, which will apply broadly to all digital services; and also wants to get restrictions on how data can be combined by gatekeeping giants for ad purposes added to the DMA — which is likely to apply to both Facebook and Google (among other tech giants).

In October last year MEPs backed a call for tighter regulations on microtargeting in favor of less intrusive forms of advertising (like contextual ads) — urging Commission lawmakers to assess further regulatory options, including looking at a phase-out leading to a full ban.

But the Coalition believes the debate has moved on enough that the parliament could vote to back an outright ban.

At the same time, supporters also warn over the frenzied lobbying going on in the background as adtech giants try to derail a threat to what is — for them — a very lucrative business model.

Big adtech’s EU lobbying frenzy

A report published this summer by Corporate Europe Observatory and LobbyControl listed US adtech giants Google and Facebook at the top of a list of big spenders lobbying to influence EU lawmakers — with the pair splurging €5.8M and €5.5M respectively to push their pro-tracking agenda in the region.

However that’s likely just the tip of the iceberg as Facebook and Google provide funding to an array of third party lobby groups, such as think tanks and industry associations, which can often been heard amplifying their talking points — without making their links with big tech funders amply clear.

One source close to the parliament negotiations around the DSA and DMA highlighted Facebook’s “enormous” lobbying budget — saying there’s a belief among some MEPs that it’s “the largest lobbying operation in world history in Europe”.

With the counter push to amend EU legislation now entering the final stages, our source on the negotiations suggested there is political momentum to get a ban on microtargeting into EU law — following the Wall Street Journal‘s recent exposé of internal Facebook documents.

The whistleblower, Frances Haugen, who revealed herself as the source of the document dump, called directly for lawmakers to act — suggesting changes are needed to Facebook’s algorithms to prevent a range of virality-generated harms.

Preventing ads from being targeted against personal data would fall into that category — removing the economic incentives that prop up some types of harmful disinformation and misinformation.

But Paul Tang, of the Progressive Alliance of Socialists and Democrats in the European Parliament — who is one of the MEPs involved in the push for a ban on microtargeting — argues there are now scores of reasons to ban tracking ads.

“There are tons of reasons but the two main concerns we have are as follows: Firstly, we need to stop the massive privacy breaches and monetization of attention. New examples of misuse of data and the harms of polarisation algorithms are being published almost every single week,” he told TechCrunch.

“Secondly, Google and Facebook have built an opaque advertising ecosystem, allowing ad fraud and giving them a duopoly on the digital advertising market to the detriment of SMEs and traditional publishers, whose income they erode. This undermines our public media and by that our democratic institutions.”

Another MEP who’s part of the Coalition, Alexandra Geese, a member of the Greens/European Free Alliance political group, told us her personal concern is how much personal data is being allowed to accumulate in the hands of just two private companies — with a range of associated risks and concerns.

“While data protection against public authorities works quite well in Europe, we’re only starting to understand the risks and harms that arise because private companies control so much personal data,” she argued. “If it were in the hands of a government like in China, there would be a public outcry. Some of the harms are already clearly visible. Disinformation, hateful speech and the polarization of societies as well as electoral manipulation are so widespread because groups of users can be specifically targeted — be it for commercial reasons (keeping eyeballs to show ads to with strong emotional content), be it for manipulation by malicious actors.

“I also have strong economic concerns. In the current advertising ecosystem, no other company can compete and it’s also very difficult to make a different choice. This is already choking publishers. In the long run, this will damage European AI companies. While we can leverage industrial data in Europe, the possibility to predict people’s behaviour as well as being able to train algorithms with such a wealth of data will always be an enormous competitive advantage.”

A report by the Irish Council for Civil Liberties (ICCL), which was prepared at MEPs’ request — and shared with TechCrunch ahead of publication — summarises a range of key harms it argues stem from microtargeted ads, underscoring both the threat to Europeans’ fundamental rights (like privacy) and the negative impact on media pluralism of adtech giants’ surveillance-based business models, which enable an opaque layer of adtech that profits off a shadowy trade in people’s information and the arbitrage of quality publishers’ audiences.

Estimates for the cost of what the ICCL report bills as the “opaque fees charged on every tracking-based ad” range between 35% and 70% — which it cites as a key threat to media pluralism.
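As a back-of-the-envelope illustration of what that range means for publishers, assuming a notional €1.00 of advertiser spend:

```python
# Back-of-the-envelope sketch of the "adtech tax" range cited by the ICCL:
# if intermediaries take 35%-70% of every tracking-based ad euro, this is
# what reaches the publisher. The €1.00 spend is an arbitrary example.

ad_spend = 1.00  # euros paid by the advertiser

for fee in (0.35, 0.70):
    publisher_take = ad_spend * (1 - fee)
    print(f"Intermediary fee {fee:.0%}: publisher keeps €{publisher_take:.2f}")
# Intermediary fee 35%: publisher keeps €0.65
# Intermediary fee 70%: publisher keeps €0.30
```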

In the report the ICCL also writes that Google has “diverted data and revenue from publishers to itself” — citing a stat that in 2004 half (51%) of the tech giant’s ad revenue came from displaying ads on publishers’ properties vs “nearly all” (85%) now being derived from displaying ads on its own websites and apps — which the ICCL says Google does “with the benefit of data taken from publishers’ properties”.

So the narrative here is of theft of individuals’ privacy and publishers’ revenue. 

The report also summarises examples of revenue uplift experienced by a number of European publishers after they switched from microtargeting (tracking) to contextual (non-tracking) ads. Such as the 149% boost seen by Dutch publisher NPO Group — or the 210% higher average price reported by TV2, a Norwegian news website, for ads sold through Kobler’s contextual targeting vs tracking-based ad targeting.

“Practical evidence from European publishers now shows that publishers’ ad revenue can increase when tracking-based advertising is switched off,” the ICCL argues, predicting that: “A switch off across the entire market will amplify this effect, protecting fundamental rights and publisher sustainability. We urge lawmakers to play their part.”

Asked about momentum in the Parliament for a microtargeting ban, Tang suggests there is a lot of appetite among MEPs to change what he dubs as “this opaque and even fraudulent advertising system”.

But few people TechCrunch spoke to about this issue were willing to predict exactly which way MEPs will jump. So it really looks like it will go down to the wire.

“We are currently still in hectic negotiations with all political groups but there is for sure a momentum,” Tang added.

He also gave short shrift to big adtech’s claims that surveillance-based ads are essential for small businesses.

“Facebook promotes itself disingenuously as the champion of small and medium enterprises. The high intermediary costs of Facebook and Google, the prevalence of ad fraud and the limited effectiveness of tracking-ads should all be red flags for every SME out there,” he suggested.

Also discussing momentum for the campaign, Geese said three groups in the parliament (S&D, Greens and Left) are currently supporting a ban and a shift to contextual advertising for the whole ecosystem.

“Some MEPs of other groups support stronger transparency and better consent regulation as well as a ban of dark patterns,” she also told us, adding: “What is interesting is the fact that public awareness is rising and campaigns are picking up speed.

“People are sick of being tricked into consent and when asked with a simple Yes/No question, refuse consent in high numbers (ca. 80% globally e.g. for iPhone users). The advertising system is evolving and shutting out third party cookies will also mobilize publishers if they don’t want to depend entirely on Google and Facebook,” she continued, before sticking her neck out with a prediction of victory, saying: “I’m confident that we will have a majority by the time DSA/DMA go to plenary.”

Nonetheless, Geese acknowledged that Facebook and Google continue to have “huge influence” over views on this issue.

“Their lobbying is pervasive and targets the EU commission as well as my fellow lawmakers and national governments,” she said, noting that she hears colleagues from other parties repeating Google and Facebook claims “without really understanding the issue”.

“Also all relevant Brussels-based think tanks as well as many academic researchers depend in some way on funding by big tech — so they all send out the same kind of message.”

“The current main claim is that ‘targeted advertising is good for SMEs’ as well as pretending that a ban of surveillance advertising would mean a ban of any kind of advertising — which is incorrect since we do support contextual advertising,” Geese went on. “Many lawmakers are not familiar with the advertising industry and don’t know that surveillance advertising is a rather new form of advertising that has been taking revenue away from European companies and publishers in the last 15 years. Most are also unaware that this market is controlled by a duopoly that faces charges for price-fixing in the US. How can this be good for SMEs or European companies?

“Furthermore there is no evidence whatsoever that SMEs benefit more from targeted advertising than from contextual. With our proposal, SMEs would still be able to target clients on websites based on their interests or even on their location. Contextual advertising has a huge growth potential and can serve SMEs far better than the current system, not to speak of publishers who have thrived with contextual in the past.”

We contacted Google and Facebook for comment on the Coalition’s campaign.

At the time of writing neither had responded to the request. But we also pinged the IAB Europe — which was happy to defend surveillance-based advertising on the tech giants’ behalf.

In a statement attributed to Greg Mroczkowski, the IAB Europe’s director of public policy, the online ad industry association argued that creepy ads are “indispensable” for both small and big business, and — it claimed — for “Europe’s news publishers” and for “content creators of all kinds”.

Which, as an argument, does seem to gloss over some basic market dynamics — like, for instance, the fact that flooding the market with “content” of any quality makes it more difficult for news publishers to produce high quality journalism (which is more expensive to produce than any old clickbait), since all this stuff is competing for people’s eyeballs (and attention is finite).

Similarly, if big business is allowed to use intrusively powerful ad targeting tools, doesn’t that undermine the ability of small businesses to compete against those brand giants using the same tools — given they don’t have the same level of resource to spend on digital marketing and data gathering? So wouldn’t it be a more level playing field if people’s data was off-limits and competition could be centred on the relative merits of the products?

But, well, here’s the IAB’s statement in full:

“Personalised advertising is an indispensable marketing channel for small businesses and household brands alike. It is also a vital revenue generator for Europe’s news publishers as well as other content creators of all kinds.

“The EU’s already extensive legal framework for privacy and data protection applies directly to data-driven ads, and we support enforcement of that law. We also welcome measures to further boost trust and transparency in the online advertising ecosystem, but uncosted, untested prohibitions of valuable technological solutions are not the way forward.”

The claim by the IAB that it supports enforcement of the GDPR is also worth an addendum, given that its own Transparency and Consent Framework — a widely used tool for gathering Internet users’ ‘consents’ to ad targeting — is itself the subject of a data protection complaint.

A division of Belgium’s data protection agency found in a preliminary report last year that the tool failed to comply with key GDPR principles of transparency, fairness and accountability, and the lawfulness of processing. But that regulatory procedure appears to be ongoing.

Adtech enforcement gap

Another important strand to this story is the (much chronicled) lack of enforcement of the GDPR around adtech.

Yet there seems to be a lack of awareness inside parts of the European Commission about this very notable enforcement gap where EU data protection law intersects with tracking-based business models.

One Commission official we spoke to — in the context of flagging eye-watering research that demonstrates Facebook’s platform can be used to target ads at a single person, in order to ask whether such a problematic use-case might fall under a proposed prohibition in the draft AI regulation (which will forbid subliminal techniques that influence people ‘beyond their consciousness’ and aim to materially distort a person’s behaviour in a manner that causes or is likely to cause them or another person physical or psychological harm) — pointed to the GDPR by way of recourse, saying the law already establishes rules on users’ consent and/or their right to object to targeted digital marketing.

However this official seemed entirely unaware that Facebook does not offer an option to use its service without being tracked. Which means EU users simply can’t obtain their GDPR rights when/if using Facebook.

So if the Commission is imagining that the GDPR provides protection enough against abuse of personal data by adtech it hasn’t been paying enough attention to the rampant attention-mining going down, consent-free, on the modern web.

It’s true that Google has been fined in France under GDPR for failing to be transparent enough about its tracking practices. But privacy experts continue to decry its use of disingenuous dark patterns — such as labyrinthine settings menus that make it hard for consumers to figure out how to stop it tracking them for ad targeting as they browse the Internet.

While a complaint against Facebook’s “forced consent” has been stalled in Ireland — which has suggested there’s no legal problem with the tech giant claiming users are actually in a contract with it to receive targeted ads (meaning there’s no need for it to ask for their consent).

Like Facebook, Google is now regulated on data protection matters by Ireland’s Data Protection Commission — which has faced years of criticism for failing to act on cross-border GDPR complaints, including a large number concerning adtech.

So, again, if the Commission takes the view that the GDPR alone can stop unwanted microtargeting it’s essential to underscore that that certainly has not happened yet — more than three years after the law came into application.

So the salient question then is, should EU citizens really have to wait years more for Europe’s top court to enforce the letter of the law? A growing number of MEPs don’t think so. Which is why they think it’s vital to put a ban on the face of EU legislation.

If these parliamentarians fail to get a ban on microtargeting into the DSA (and/or adtech data restrictions in the DMA) there is perhaps another looming chance for them to amend legislation to outlaw at least a subset of surveillance ads.

Indeed, the Commission official we spoke to suggested that a prohibition already included in another draft piece of EU legislation — the AI regulation, which will set rules around high risk uses of artificial intelligence — could potentially apply to microtargeting of people, if it fulfils all the conditions under article 5(1)a) of the proposed Act.

The problem is that the current Commission proposal sets a very high bar for a ban — requiring that such an ad is not only targeted at a person without their knowledge but must also “materially distort” their behaviour in such a way that it “causes or is likely to cause that person or another person physical or psychological harm”.

Safe to say, the adtech giants would deny any such impact is possible from targeted advertising — even as they rake billions more into their coffers by selling people’s attention via an industrial data-gathering apparatus which has been baked into the web to spy on what Internet users do and profile people for profit.

If a microtargeting ban fails to make it in the votes now looming in the European Parliament, the AI Regulation could be the next — and perhaps final — battleground for EU lawmakers to overrule the surveillance giants.

European Union lawmakers are mobilizing support for a ban on tracking-based advertising to be added to a new set of Internet rules for the bloc — which were proposed at the back end of last year but are now entering the last stretch of negotiations ahead of becoming pan-EU law.

If they succeed it could have wide-ranging implications for adtech giants like Facebook and Google. And for the holistic health of Internet users’ eyeballs more generally.

The move follows a smorgasbord of concerns raised in recent years over how such creepy ads, which use personal data to decide who sees which marketing message, can negatively impact individuals, businesses and society — from the risk of discrimination and predatory targeting of vulnerable people and groups; to the amplification of online disinformation and the threat that poses to democratic processes; to the vast underbelly of ad fraud embedded in the current system.

Gathering so much data about Internet users is not only terrible for people’s privacy (and wasteful from an environmental perspective, given all the extra data-processing it bakes in), they contend, but tracking opens up a huge attack vector for hackers — supporting a pipeline of data breaches and security risks, which in turn contravenes key principles of existing EU law (like data minimization).

So arguments in favor of a regional ban are plentiful.

On the flip side, adtech giants Facebook and Google make bank by mining people’s digital activity and using this surreptitiously sucked up information as a targeting tool to grab attention, darting eyeballs with what they euphemize as “relevant” advertising — meaning ads that use people’s own information to try to manipulate their behavior for profit.

The powerful pair can easily afford to spend millions lobbying against anything that threatens their tracking-based business models. And they have been doing just that in Europe in recent years, as lawmakers have been working on redrawing the parameters of digital regulations — along with the tacit (if not very publicly vocal) support of an opaque adtech middleman layer.

These are largely faceless (to consumers) entities that benefit from dynamics like the lack of traceability around current ad spending (oh-hi ad fraud!) and arbitrage of quality publishers’ audiences to sell ads against cheap filler content (bye-bye quality journalism!). But that’s hardly good news for consumers, society or for fair and honest competition, critics say.

While the arguments for banning microtargeting have been getting louder and stronger (with each passing scandal) for years, it's fair to say this remains something of a David vs Goliath battle: individual rights, civil society, quality publishers and pro-privacy innovators on one side; on the other, big adtech and the sprawling ecosystem of opaque data-traders and Internet content bottom-feeders that the current Facebook-Google duopoly gives succour to, punching down hard.

Simultaneously, high level competition concerns over Facebook and Google's power over online advertising are driving increasing scrutiny and enforcement from antitrust watchdogs in Europe.

But the risk there is that regulators could just end up cementing harmful microtargeting — such as by enforcing increased sharing of people's data for ad targeting — instead of seeking to reboot the market in a way that's healthy both for consumers and for digital competition.

So the next few weeks really look like crunch time for anyone rooting for a full reboot of surveillance-based business models.

MEPs in the European Parliament are set to vote on a number of committee reports that contain amendments to the Commission’s legislative proposals seeking to outlaw or restrict the practice — with a final plenary vote expected in early November.

Their target is the Digital Services Act (DSA) and the Digital Markets Act (DMA): Incoming EU regulations that contain a range of measures intended to level the playing field between offline and online commerce — by dialling up accountability for digital businesses and platforms; standardizing elements of governance; and seeking to enforce fair business dealing, in the latter case by applying a set of fixed rules to intermediating platform giants like Google and Facebook that play a gatekeeping role over Internet content and/or commerce.

The planned legislation contains a swathe of new rules for how Internet businesses will be able to operate in the EU. But the European Commission, which drafted the proposals, did not include an outright ban on surveillance-based ad business models.

That’s what a number of MEPs are trying to change now.

The Commission sidestepped including a ban in the draft legislation despite pressure from the parliament and other EU institutions to take a tougher line on microtargeting.

There have also been mounting complaints against tracking-based ads under existing EU privacy law (the General Data Protection Regulation; GDPR) — and concern over a lack of GDPR enforcement against adtech.

International backing for a ban on surveillance-based ads has also been growing.

The Commission itself has been actively grappling with related issues — like how to reduce online disinformation and protect EU elections from interference — and recently made an appeal to the adtech sector to do more to address problematic economic incentives driving the amplification of harmful nonsense.

But although it has said it will be beefing up efforts in those areas, it continues to prefer what amounts to a process of adtech industry engagement over laying down strict legal limits to prohibit such intrusive targeting altogether. So it's been left to Europeans' elected representatives to take a stand against microtargeting.

The next few weeks will be crucial to determining whether the bloc ends up taking the bold but — many now argue — necessary step of outlawing surveillance-based advertising.

Alternative forms of Internet advertising are both available and profitable, supporters of a ban say.

It’s also worth noting that even EU regulators only gave Google the green light to acquire Fitbit at the end of last year after gaining a concession from the tech giant that it would not use Fitbit’s health data for ad targeting — for a full ten years.

So if the EU’s own regulators decided they needed to bake in such extensive precautions before allowing another data-grabbing acquisition by big (ad)tech, it does suggest the fundamental problem here is adtech’s unfettered use of people’s data.

If the bloc outlaws microtargeting it would — conversely — provide clear impetus for the publishing industry to switch to less intrusive forms of advertising, while simultaneously creating an incentive for increased innovation around pro-privacy business models — a space where European startups are already among the world leaders.

So there’s a chance for the region to lead in both digital regulations and tech development.

Growing support for tracking-free ads

In recent months a coalition of MEPs, civil society organisations and companies from across the EU has been campaigning to end the pervasive tracking advertising industry that dominates the Internet — organizing under the banner of the Tracking-free Ads Coalition.

They are calling on interested parties to join the push to defend fundamental EU rights and counter tracking-based harms.

Although time is now short — with the window of opportunity for slotting a ban into draft legislation set to close in a few weeks’ (or even days’) time as the European Parliament has its say on the final shape of the regulations.

The Coalition argues that harms associated with surveillance-based advertising are simply too vast to ignore — whether it's the core erosion of the fundamental right EU citizens have to privacy, or the impact on publishers whose audiences are being arbitraged by big tech and opaque adtech middlemen, which in turn undermines quality journalism and supercharges ad fraud.

Media plurality is at threat, they warn.

The group also points out that viable alternatives already exist for funding online content, such as contextual advertising — used by the likes of pro-privacy search engine DuckDuckGo, which, for example, has been profitable for over a decade.

DuckDuckGo is one of 14 pro-privacy businesses that have intervened to lobby MEPs on the issue in recent weeks — along with Vivaldi, Fastmail, Conva, Proton, Tutao, Disconnect, Mojeek, Ecosia, Startpage & StartMail, Nextcloud, Kobler, Strossle International and Mailfence — who have written to the Committee on Internal Market and Consumer Protection (IMCO) to express their backing for banning tracking ads.

“In addition to the clear privacy issues caused by surveillance-based advertising, it is also detrimental to the business landscape,” they wrote in their letter to the IMCO committee which makes the case for pro-privacy alternatives to tracking.

“These practices seriously undermine competition and take revenue away from content creators. Anticompetitive behaviour and effects serve to entrench dominant actors’ positions while complex supply chains and ineffective technologies lead to lost revenues for advertisers and publishers,” they also argued, adding: “Although we recognize that advertising is an important source of revenue for content creators and publishers online, this does not justify the massive commercial surveillance systems set up in attempts to ‘show the right ad to the right people’.

“Other forms of advertising technologies exist, which do not depend on spying on consumers, and alternative models can be implemented without significantly affecting revenue. On the contrary — and that we can attest to — businesses can thrive without privacy-invasive practices.”

MEPs backing the Coalition’s calls for a ban on tracking ads are hoping to convince enough of their fellow parliamentarians to step in over the broad array of harms being associated with surveillance-based business models, including by debunking the economic claims that adtech makes in defence of tracking-ads.

The Coalition is shooting for an outright ban on microtargeting in the DSA, which will apply broadly to all digital services; and also wants to get restrictions on how data can be combined by gatekeeping giants for ad purposes added to the DMA — which is likely to apply to both Facebook and Google (among other tech giants).

In October last year MEPs backed a call for tighter regulations on microtargeting in favor of less intrusive forms of advertising (like contextual ads) — urging Commission lawmakers to assess further regulatory options, including looking at a phase-out leading to a full ban.

But the Coalition believes the debate has moved on enough that the parliament could vote to back an outright ban.

At the same time, supporters also warn over the frenzied lobbying going on in the background as adtech giants try to derail a threat to what is — for them — a very lucrative business model.

Big adtech’s EU lobbying frenzy

A report published this summer by Corporate Europe Observatory and LobbyControl listed US adtech giants Google and Facebook at the top of a list of big spenders lobbying to influence EU lawmakers — with the pair splurging €5.8M and €5.5M respectively to push their pro-tracking agenda in the region.

However that's likely just the tip of the iceberg, as Facebook and Google provide funding to an array of third party lobby groups, such as think tanks and industry associations, which can often be heard amplifying their talking points — without making their links with big tech funders amply clear.

One source close to the parliament negotiations around the DSA and DMA highlighted Facebook’s “enormous” lobbying budget — saying there’s a belief among some MEPs that it’s “the largest lobbying operation in world history in Europe”.

With the counter push to amend EU legislation now entering the final stages, our source on the negotiations suggested there is political momentum to get a ban on microtargeting into EU law — following the Wall Street Journal‘s recent exposé of internal Facebook documents.

The whistleblower, Frances Haugen, who revealed herself as the source of the document dump, called directly for lawmakers to act — suggesting changes are needed to Facebook’s algorithms to prevent a range of virality-generated harms.

Preventing ads from being targeted using personal data would fall into that category — removing the economic incentives that prop up some types of harmful disinformation and misinformation.

But Paul Tang, of the Progressive Alliance of Socialists and Democrats in the European Parliament — who is one of the MEPs involved in the push for a ban on microtargeting — argues there are now scores of reasons to ban tracking ads.

“There are tons of reasons but the two main concerns we have are as follows: Firstly, we need to stop the massive privacy breaches and monetization of attention. New examples of misuse of data and the harms of polarisation algorithms are being published almost every single week,” he told TechCrunch.

“Secondly, Google and Facebook have built an opaque advertising ecosystem, allowing ad fraud and giving them a duopoly on the digital advertising market to the detriment of SMEs and traditional publishers, whose income they erode. This undermines our public media and by that our democratic institutions.”

Another MEP who’s part of the Coalition, Alexandra Geese, a member of the Greens/European Free Alliance political group, told us her personal concern is how much personal data is being allowed to accumulate in the hands of just two private companies — with a range of associated risks and concerns.

“While data protection against public authorities works quite well in Europe, we’re only starting to understand the risks and harms that arise because private companies control so much personal data,” she argued. “If it were in the hands of a government like in China, there would be a public outcry. Some of the harms are already clearly visible. Disinformation, hateful speech and the polarization of societies as well as electoral manipulation are so widespread because groups of users can be specifically targeted — be it for commercial reasons (keeping eyeballs to show ads to with strong emotional content), be it for manipulation by malicious actors.

“I also have strong economic concerns. In the current advertising ecosystem, no other company can compete and it’s also very difficult to make a different choice. This is already choking publishers. In the long run, this will damage European AI companies. While we can leverage industrial data in Europe, the possibility to predict people’s behaviour as well as being able to train algorithms with such a wealth of data will always be an enormous competitive advantage.”

A report by the Irish Council for Civil Liberties (ICCL), which was prepared at MEPs' request — and shared with TechCrunch ahead of publication — summarises a range of key harms it argues stem from microtargeted ads: underscoring both threats to Europeans' fundamental rights (like privacy) and the negative impact on media pluralism, as adtech giants' surveillance-based business models enable an opaque layer of adtech that profits off a shadowy trade in people's information and the arbitrage of quality publishers' audiences.

Estimates for the cost of what the ICCL report bills as the “opaque fees charged on every tracking-based ad” range from 35% to 70% — which it cites as a key threat to media pluralism.

In the report the ICCL also writes that Google has “diverted data and revenue from publishers to itself” — citing a stat that in 2004 half (51%) of the tech giant’s ad revenue came from displaying ads on publishers’ properties vs “nearly all” (85%) now being derived from displaying ads on its own websites and apps — which the ICCL says Google does “with the benefit of data taken from publishers’ properties”.

So the narrative here is of theft of individuals’ privacy and publishers’ revenue. 

The report also summarises examples of revenue uplift experienced by a number of European publishers after they switched from microtargeting (tracking) to contextual (non-tracking) ads — such as the 149% boost seen by Dutch publisher NPO Group, or the 210% higher average price reported by TV2, a Norwegian news website, for ads sold through Kobler's contextual targeting vs tracking-based ad targeting.

“Practical evidence from European publishers now shows that publishers’ ad revenue can increase when tracking-based advertising is switched off,” the ICCL argues, predicting that: “A switch off across the entire market will amplify this effect, protecting fundamental rights and publisher sustainability. We urge lawmakers to play their part.”

Asked about momentum in the Parliament for a microtargeting ban, Tang suggests there is a lot of appetite among MEPs to change what he dubs “this opaque and even fraudulent advertising system”.

But few people TechCrunch spoke to about this issue were willing to predict exactly which way MEPs will jump. So it really looks like it will go down to the wire.

“We are currently still in hectic negotiations with all political groups but there is for sure a momentum,” Tang added.

He also gave short shrift to big adtech’s claims that surveillance-based ads are essential for small businesses.

“Facebook promotes itself disingenuously as the champion of small and medium enterprises. The high intermediary costs of Facebook and Google, the prevalence of ad fraud and the limited effectiveness of tracking-ads should all be red flags for every SME out there,” he suggested.

Also discussing momentum for the campaign, Geese said three groups in the parliament (S&D, Greens and Left) are currently supporting a ban and a shift to contextual advertising for the whole ecosystem.

“Some MEPs of other groups support stronger transparency and better consent regulation as well as a ban of dark patterns,” she also told us, adding: “What is interesting is the fact that public awareness is rising and campaigns are picking up speed.

“People are sick of being tricked into consent and when asked with a simple Yes/No question, refuse consent in high numbers (ca. 80% globally e.g. for iPhone users). The advertising system is evolving and shutting out third party cookies will also mobilize publishers if they don’t want to depend entirely on Google and Facebook,” she continued, before sticking her neck out with a prediction of victory, saying: “I’m confident that we will have a majority by the time DSA/DMA go to plenary.”

Nonetheless, Geese acknowledged that Facebook and Google continue to have “huge influence” over views on this issue.

“Their lobbying is pervasive and targets the EU commission as well as my fellow lawmakers and national governments,” she said, noting that she hears colleagues from other parties repeating Google and Facebook claims “without really understanding the issue”.

“Also all relevant Brussels-based think tanks as well as many academic researchers depend in some way on funding by big tech — so they all send out the same kind of message.”

“The current main claim is that ‘targeted advertising is good for SMEs’ as well as pretending that a ban of surveillance advertising would mean a ban of any kind of advertising — which is incorrect since we do support contextual advertising,” Geese went on. “Many lawmakers are not familiar with the advertising industry and don’t know that surveillance advertising is a rather new form of advertising that has been taking revenue away from European companies and publishers in the last 15 years. Most are also unaware that this market is controlled by a duopoly that faces charges for price-fixing in the US. How can this be good for SMEs or European companies?

“Furthermore there is no evidence whatsoever that SMEs benefit more from targeted advertising than from contextual. With our proposal, SMEs would still be able to target clients on websites based on their interests or even on their location. Contextual advertising has a huge growth potential and can serve SMEs far better than the current system, not to speak of publishers who have thrived with contextual in the past.”

We contacted Google and Facebook for comment on the Coalition’s campaign.

At the time of writing neither had responded to the request. But we also pinged the IAB Europe — which was happy to defend surveillance-based advertising on the tech giants’ behalf.

In a statement attributed to Greg Mroczkowski, the IAB Europe’s director of public policy, the online ad industry association argued that creepy ads are “indispensable” for both small and big business, and — it claimed — for “Europe’s news publishers” and for “content creators of all kinds”.

Which, as an argument, does seem to gloss over some basic market dynamics — like, for instance, the fact that if you flood the market with “content” of any quality, that might make it more difficult for news publishers to produce high quality journalism (which is more expensive to produce than any old clickbait), since all this stuff is competing for people's eyeballs (and attention is finite).

Similarly, if big business is allowed to use intrusively powerful ad targeting tools, doesn’t that undermine the ability of small businesses to compete against those brand giants using the same tools — given they don’t have the same level of resource to spend on digital marketing and data gathering? So wouldn’t it be a more level playing field if people’s data was off-limits and competition could be centred on the relative merits of the products?

But, well, here’s the IAB’s statement in full:

“Personalised advertising is an indispensable marketing channel for small businesses and household brands alike. It is also a vital revenue generator for Europe’s news publishers as well as other content creators of all kinds.

“The EU’s already extensive legal framework for privacy and data protection applies directly to data-driven ads, and we support enforcement of that law. We also welcome measures to further boost trust and transparency in the online advertising ecosystem, but uncosted, untested prohibitions of valuable technological solutions are not the way forward.”

The claim by the IAB that it supports enforcement of the GDPR is also worth an addendum, given that its own Transparency and Consent Framework — a widely used tool for gathering Internet users’ ‘consents’ to ad targeting — is itself the subject of a data protection complaint.

A division of Belgium’s data protection agency found in a preliminary report last year that the tool failed to comply with key GDPR principles of transparency, fairness and accountability, and the lawfulness of processing. But that regulatory procedure appears to be ongoing.

Adtech enforcement gap

Another important strand to this story is the (much chronicled) lack of enforcement of the GDPR around adtech.

Yet there seems to be a lack of awareness inside parts of the European Commission about this very notable enforcement gap where EU data protection law intersects with tracking-based business models.

One Commission official we spoke to — in the context of flagging eye-watering research demonstrating that Facebook's platform can be used to target ads at a single person, and asking whether such a problematic use-case might fall under a proposed prohibition in the draft AI regulation (which would forbid subliminal techniques that influence people ‘beyond their consciousness’ and materially distort a person's behaviour in a manner that causes or is likely to cause them or another person physical or psychological harm) — pointed to the GDPR by way of recourse, saying the law already establishes rules on users' consent and/or their right to object to targeted digital marketing.

However this official seemed entirely unaware that Facebook does not offer an option to use its service without being tracked. Which means EU users simply can’t obtain their GDPR rights when/if using Facebook.

So if the Commission is imagining that the GDPR provides protection enough against abuse of personal data by adtech, it hasn't been paying enough attention to the rampant attention-mining going down, consent-free, on the modern web.

It’s true that Google has been fined in France under GDPR for failing to be transparent enough about its tracking practices. But privacy experts continue to decry its use of disingenuous dark patterns — such as labyrinthine settings menus that make it hard for consumers to figure out how to stop it tracking them for ad targeting as they browse the Internet.

While a complaint against Facebook’s “forced consent” has been stalled in Ireland — which has suggested there’s no legal problem with the tech giant claiming users are actually in a contract with it to receive targeted ads (meaning there’s no need for it to ask for their consent).

Like Facebook, Google is now regulated on data protection matters by Ireland’s Data Protection Commission — which has faced years of criticism for failing to act on cross-border GDPR complaints, including a large number concerning adtech.

So, again, if the Commission takes the view that the GDPR alone can stop unwanted microtargeting it’s essential to underscore that that certainly has not happened yet — more than three years after the law came into application.

So the salient question then is, should EU citizens really have to wait years more for Europe’s top court to enforce the letter of the law? A growing number of MEPs don’t think so. Which is why they think it’s vital to put a ban on the face of EU legislation.

If these parliamentarians fail to get a ban on microtargeting into the DSA (and/or adtech data restrictions in the DMA) there is perhaps another looming chance for them to amend legislation to outlaw at least a subset of surveillance ads.

Indeed, the Commission official we spoke to suggested that a prohibition already included in another draft piece of EU legislation — the AI regulation, which will set rules around high risk uses of artificial intelligence — could potentially apply to microtargeting of people, if it fulfils all the conditions under Article 5(1)(a) of the proposed Act.

The problem is that the current Commission proposal sets a very high bar for a ban — requiring that such an ad is not only targeted at a person without their knowledge but must also “materially distort” their behaviour in such a way that it “causes or is likely to cause that person or another person physical or psychological harm”.

Safe to say, the adtech giants would deny any such impact is possible from targeted advertising — even as they rake billions more into their coffers by selling people’s attention via an industrial data-gathering apparatus which has been baked into the web to spy on what Internet users do and profile people for profit.

If a microtargeting ban fails to make it in the votes now looming in the European Parliament, the AI Regulation could be the next — and perhaps final — battleground for EU lawmakers to overrule the surveillance giants.

A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.

The paper — entitled Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data — describes a “data-driven model” that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform.

The researchers demonstrate that they were able to use Facebook’s Custom Audience tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.

The research raises fresh questions about potentially harmful uses of Facebook’s ad targeting tools, and — more broadly — questions about the legality of the tech giant’s personal data processing empire given that the information it collects on people can be used to uniquely identify individuals, picking them out of the crowd of others on its platform even purely based on their interests.

The findings could increase pressure on lawmakers to ban or phase out behavioral advertising — which has been under attack for years, over concerns it poses a smorgasbord of individual and societal harms. And, at the least, the paper seems likely to drive calls for robust checks and balances on how such invasive tools can be used.

The findings also underscore the importance of independent research being able to interrogate algorithmic adtech — and should increase pressure on platforms not to close down researchers’ access.

Interests on Facebook are personal data

“The results from our model reveal that the 4 rarest interests or 22 random interests from the interests set FB [Facebook] assigns to a user make them unique on FB with a 90% probability,” write the researchers from Madrid’s University Carlos III, the Graz University of Technology in Austria and the Spanish IT company, GTD System & Software Engineering, detailing one key finding — that having a rare interest or lots of interests that Facebook knows about can make you easily identifiable on its platform, even among a sea of billions of other users.
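
To get an intuition for why so few interests suffice, here's a toy back-of-the-envelope sketch in Python (not the paper's actual data-driven model) that assumes interests are independent and that each covers some hypothetical fraction of Facebook's user base; under those simplifying assumptions, a handful of rare interests, or a couple of dozen common ones, already makes a collision with any other user vanishingly unlikely.

import math

USERS = 2.8e9  # roughly Facebook's active user base, per the paper

def uniqueness_probability(interest_fractions):
    # Chance that a random other user happens to share every listed interest,
    # assuming (unrealistically) that interests are independent.
    p_match = math.prod(interest_fractions)
    expected_collisions = (USERS - 1) * p_match
    # Poisson approximation: probability that zero other users match.
    return math.exp(-expected_collisions)

# Hypothetical fractions: four rare interests (0.1% of users each) vs
# 22 fairly common ones (30% each) both leave the target near-unique.
print(uniqueness_probability([1e-3] * 4))   # ~0.997
print(uniqueness_probability([0.3] * 22))   # ~0.991

The paper's own model is fitted to real interest distributions rather than assumed fractions, but the arithmetic above shows the basic mechanism: every additional interest multiplicatively shrinks the pool of plausible matches, so even a worldwide-scale user base runs out of candidates surprisingly fast.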

“In this paper, we present, to the best of our knowledge, the first study that addresses individuals’ uniqueness considering a user base at the worldwide population’s order of magnitude,” they go on, referring to the scale inherent in Facebook’s data mining of its 2.8BN+ active users (NB: the company also processes information about non-users, meaning its reach scales to even more Internet users than are active on Facebook).

The researchers suggest the paper presents the first evidence of “the possibility of systematically exploiting the FB advertising platform to implement nanotargeting based on non-PII [interest-based] data”.

There have been earlier controversies over Facebook's ad platform being a conduit for one-to-one manipulation — such as this 2019 Daily Dot article about a company called The Spinner, which was selling a ‘service’ to sex-frustrated husbands to target psychologically manipulative messages at their wives and girlfriends. The suggestive, subliminally manipulative ads would pop up on the targets’ Facebook and Instagram feeds.

The research paper also references an incident in UK political life, back in 2017, when Labour Party campaign chiefs apparently successfully used Facebook’s Custom Audience ad-targeting tool to ‘pull the wool’ over former leader Jeremy Corbyn’s eyes. But in that case the targeting was not just at Corbyn; it also reached his associates, and a few aligned journalists.

With this research the team demonstrates it’s possible to use Facebook’s Custom Audience tool to target ads at just one Facebook user — a process they’re referring to as “nanotargeting” (vs the current adtech ‘standard’ of microtargeting ‘interest-based’ advertising at groups of users).

“We run an experiment through 21 Facebook ad campaigns that target three of the authors of this paper to prove that, if an advertiser knows enough interests from a user, the Facebook Advertising Platform can be systematically exploited to deliver ads exclusively to a specific user,” they write, adding that the paper provides “the first empirical evidence” that one-to-one/nanotargeting can be “systematically implemented on FB by just knowing a random set of interests of the targeted user”.

The interest data they used for their analysis was collected from 2,390 Facebook users via a browser extension they created that the users had installed before January 2017.

The extension, called Data Valuation Tool for Facebook Users, parsed each user’s Facebook ad preferences page to gather the interests assigned to them, as well as providing a real-time estimate about the revenue they generate for Facebook based on the ads they receive while browsing the platform.

While the interest data was gathered before 2017, the researchers’ experiments testing whether one-to-one targeting is possible through Facebook’s ad platform took place last year.

“In particular, we have configured nanotargeting ad campaigns targeting three authors of this paper,” they explain, discussing the results of their tests. “We tested the results of our data-driven model by creating tailored audiences for each targeted author using combinations of 5, 7, 9, 12, 18, 20, and 22 randomly selected interests from the list of interests FB had assigned them.

“In total, we ran 21 ad campaigns between October and November 2020 to demonstrate that nanotargeting is feasible today. Our experiment validates the results of our model, showing that if an attacker knows 18+ random interests from a user, they will be able to nanotarget them with a very high probability. In particular, 8 out of the 9 ad campaigns that used 18+ interests in our experiment successfully nanotargeted the chosen user.”

So having 18 or more Facebook interests just got really interesting to anyone who wants to manipulate you.

Nothing to stop nanotargeting

One way to prevent one-to-one targeting would be if Facebook were to put a robust limit on the minimum audience size.

Per the paper, the adtech giant provides a “Potential Reach” value to advertisers using its Ads Campaign Manager tool if the potential audience size for a campaign is greater than 1,000 (or greater than 20, prior to 2018 when Facebook increased the limit).

However the researchers found that Facebook does not actually prevent advertisers running a campaign targeting fewer users than those potential reach limits — the platform just does not tell advertisers how many (or, well, few) people their messaging will reach.

They were able to demonstrate this by running multiple campaigns that successfully targeted a single Facebook user — validating that the audience size for their ads was one by looking at data generated by Facebook's ad reporting tools (“FB reported that only one user had been reached”); by having a log record in their web server generated by the (sole) user clicking on the ad; and — in a third validation step — by asking each nanotargeted user to collect a snapshot of the ad and its associated “Why am I seeing this ad?” option, which they say matched their targeting parameters in the successfully nanotargeted cases.

“The main conclusions derived from our experiment are the following: (i) nanotargeting a user on FB is highly likely if an attacker can infer 18+ interests from the targeted user; (ii) nanotargeting is extremely cheap, and (iii) based on our experiments, 2/3 of the nanotargeted ads are expected to be delivered to the targeted user in less than 7 effective campaign hours,” they add in a summary of the results.

In another section of the paper discussing countermeasures to prevent nanotargeting, the researchers argue that Facebook’s claimed limits on audience size “have been proven to be completely ineffective” — and assert that the tech giant’s limit of 20 is “not currently being applied”.

They also suggest there are workarounds for the limit of 100 that Facebook claims it applies to Custom Audiences (another targeting tool that involves advertisers uploading PII).

From the paper:

“The most important countermeasure Facebook implements to prevent advertisers from targeting very narrow audiences are the limits imposed on the minimum number of users that can form an audience. However, those limits have been proven to be completely ineffective. On the one hand, Korolova et al. state that, motivated by the results of their paper, Facebook disallowed configuring audiences of size smaller than 20 using the Ads Campaign Manager. Our research shows that this limit is not currently being applied. On the other hand, FB enforces a minimum Custom Audience size of 100 users. As presented in Section 7.2.2, several works in the literature showed different ways to overcome this limit and implement nanotargeting ad campaigns using Custom Audiences.”

While the researchers refer throughout their paper to interest-based data as “non-PII” (PII being personally identifiable information), it is important to note that that framing is meaningless in a European legal context — where the law, under the EU's General Data Protection Regulation (GDPR), takes a much broader view of personal data.

PII is a more common term in the US — which does not have comprehensive (federal) privacy legislation equivalent to the pan-EU GDPR.

Adtech companies also typically prefer to talk about PII, given it's a far more bounded category than all the information they actually process that can be used to identify and profile individuals to target them with ads.

Under the GDPR, personal data does not only include the obvious identifiers, like a person’s name or email address (aka ‘PII’), but can also encompass information that can be used — indirectly — to identify an individual, such as a person’s location or indeed their interests.

Here’s the relevant chunk from the GDPR (Article 4(1)) [emphasis ours]:

“‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;”

Other research has also repeatedly — over decades — shown that re-identification of individuals is possible with, at times, just a handful of pieces of ‘non-PII’ information, such as credit card metadata or Netflix viewing habits.

So it should not surprise us that Facebook’s vast people profiling, ad targeting empire, which continuously and pervasively mines Internet users’ activity for interest-based signals (aka, personal data) to profile individuals for the purpose of targeting them with ‘relevant’ ads, has created a new attack vector for — potentially — manipulating almost anyone in the world if you know enough about them (and they have a Facebook account).

But that does not mean there are no legal problems here.

Indeed, the legal basis that Facebook claims for processing people’s personal data for ad targeting has been under challenge in the EU for years.

Legal basis for ad targeting

The tech giant used to claim that users consent to their personal data being used for ad targeting. However it does not offer a free, specific and informed choice to people over whether they want to be profiled for behavioral ads or just want to connect with their friends and family. (And free, specific and informed is the GDPR standard for consent.)

If you want to use Facebook you have to accept your information being used for ad targeting. This is what EU privacy campaigners have dubbed ‘forced consent‘. Aka, coercion, not consent.

However, since the GDPR came into application (back in May 2018), Facebook has — seemingly — switched to claiming it’s legally able to process Europeans’ information for ads because users are actually in a contract with it to receive ads.

A preliminary decision by Facebook’s lead EU regulator, Ireland’s Data Protection Commission (DPC), which was published earlier this week, has proposed to fine the company $36M for not being transparent enough about that silent switch.

And while the DPC doesn't seem to have a problem with Facebook's ad contract claim, other European regulators disagree — and are likely to object to Ireland's draft decision — so the regulatory scrutiny over that particular Facebook GDPR complaint is far from over.

If the tech giant is ultimately found to be bypassing EU law it could finally be forced to give users a free choice over whether their information can be used for ad targeting — which would essentially blast an existential hole in its ad targeting empire since, as this research underlines, even a few pieces of interest data constitute personal data.

For now, though, the tech giant is using its customary tactic of denying there’s anything to see here.

In a statement responding to the research, a Facebook spokesperson dismissed the paper — claiming it is “wrong about how our ad system works”.

Facebook’s statement goes on to try to divert attention from the researchers’ core conclusions in an effort to minimize the significance of their findings — with its spokesperson writing:

“This research is wrong about how our ad system works. The list of ads targeting interests we associate with a person are not accessible to advertisers, unless that person chooses to share them. Without that information or specific details that identify the person who saw an ad, the researchers’ method would be useless to an advertiser attempting to break our rules.” 

Responding to Facebook’s rebuttal, one of the paper’s authors — Angel Cuevas — described its argument as “unfortunate” — saying the company should be deploying stronger countermeasures to prevent the risk of nanotargeting, rather than trying to claim there is no problem.

In the paper the researchers identify a number of harmful risks they say could be associated with nanotargeting — such as psychological persuasion, user manipulation and blackmailing.

“It is surprising to find that Facebook is implicitly recognizing that nanotargeting is feasible and the only countermeasure is assuming advertisers are unable to infer users interests,” Cuevas told TechCrunch.

“There are many ways interests could be inferred by advertisers. We did that in our paper with a browser plug-in (with explicit consent from users for research purposes). Even more, beyond interests there are other parameters (we did not use in our research) such as age, gender, city, zip code, etc.

“We think this is an unfortunate argument. We believe a player like Facebook can implement stronger countermeasures than assuming advertisers are unable to infer user interests to be later used to define audiences in the Facebook ads platform.”

One might recall — for example — the 2018 Cambridge Analytica Facebook data misuse scandal, where a developer that had access to Facebook’s platform was able to extract data on millions of users, without most of the users’ knowledge or consent — via a quiz app.

So, as Cuevas says, it’s not hard to envisage similarly opaque and underhand tactics being deployed by advertisers/attackers/agents to harvest Facebook users’ interest data to try to manipulate specific individuals.

In the paper the researchers note that a few days after their nanotargeting experiment had ended Facebook shuttered the account they’d used to run the campaigns — without explanation.

The tech giant did not respond to specific questions we put to it about the research, including why it closed the account — and, if it did so because it had detected the nanotargeting issue, why it failed to prevent the ads running and targeting a single user in the first place. 

Expect litigation

What might the wider implications be for Facebook’s business as a result of this research?

One privacy researcher we spoke to suggested the research will certainly be useful for litigation — which is growing in Europe, given the slow pace of privacy enforcement by EU regulators against Facebook specifically (and adtech more generally).

Another pointed out that the findings underline how Facebook has the ability to “systematically re-identify” users at scale — “while pretending it does not process ‘personal data’” — suggesting the tech giant has amassed enough data on enough people that it can, essentially, circumvent narrowly bounded legal restrictions that might seek to put limits on its processing of PII.

So regulators looking to put meaningful limits on the harms that can flow from behavioral advertising will need to be wise to how Facebook's own algorithms can seek out and make use of proxies in the masses of data it holds and attaches to users — and to its likely line of argument that such processing therefore avoids any legal implications (a tactic Facebook has used on the issue of inferred sensitive interests, for example).

Another privacy watcher, Dr Lukasz Olejnik, an independent privacy researcher and consultant, called the research staggering — describing the paper as among the top ten most important privacy research results of this decade.

“Reaching 1 user out of 2.8bn? While the Facebook platform claimed there are precautions making such microtargeting impossible? So far, this is among the top 10 most important privacy research results in this decade,” he told TechCrunch.

“It seems that users are identifiable by their interests in the meaning of article 4(1) of the GDPR, meaning that interests constitute personal data. The only caveat is that we are not certain how such a processing would scale [given nanotargeting was only tested on three users].”

Olejnik said the research shows the targeting is based on personal data — and “perhaps even special category data in the meaning of GDPR Article 9”.

“This would mean that the user’s explicit consent is needed. Unless of course appropriate protections were made. But based on the paper we conclude that these, if present, are not sufficient,” he added.

Asked if he believes the research indicates a breach of the GDPR, Olejnik said: “DPAs should investigate. No question about it,” adding: “Even if the matter may be technically challenging, building a case should take two days max.”

We flagged the research to Facebook’s lead DPA in Europe, the Irish DPC — asking the privacy regulator whether it would investigate to determine if there had been a breach of the GDPR or not — but at the time of writing it had not responded.

Towards a ban on microtargeting?

On the question of whether the paper strengthens the case for outlawing microtargeting, Olejnik argues that curbing the practice “is the way forward” — but says the question now is how to do that.

“I don’t know if the current industry and political environment would be prepared for a total ban now. We should demand technical precautions, at the very least,” he said. “I mean, we were already told that these were in place but it appears this is not the case [in the case of nanotargeting on Facebook].”

Olejnik also suggested there could be changes coming down the pipe based on some of the ideas built into Google’s Privacy Sandbox proposal — which has, however, been stalled as a result of adtech complaints triggering competition scrutiny.

Asked for his views on a ban on microtargeting, Cuevas told us: “My personal position here is that we need to understand the tradeoff between privacy risks and economy (jobs, innovation, etc). Our research definitely shows that the adtech industry should understand that just thinking of PII information (email, phone, postal address, etc.) is not enough and they need to implement more strict measures regarding the way audiences can be defined.

“Saying that, we do not agree that microtargeting — understood as the capacity of defining an audience with (at least) tens of thousands of users — should be banned. There is a very important market behind microtargeting that creates many jobs and this is a very innovative sector that does interesting things that are not necessarily bad. Therefore, our position is limiting the potential of microtargeting to guarantee the privacy of the users.”

“In the area of privacy we believe the open question that is not solved yet is the consent,” he also said. “The research community and the adtech ecosystem have to work (ideally together) to create an efficient solution that obtains the informed consent from users.”

Zooming out, there are more legal requirements looming on the horizon for AI-driven tools in Europe.

Incoming EU legislation for high risk applications of artificial intelligence — which was proposed earlier this year — has suggested a total ban on AI systems that deploy “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”.

So it’s at least interesting to speculate whether Facebook’s platform might face a ban under the EU’s future AI Regulation — unless the company puts proper safeguards in place that robustly prevent the risk of its ad tools being used to blackmail or psychologically manipulate individual users.

For now, though, it’s lucrative business as usual for Facebook’s eyeball targeting empire.

Asked about plans for future research into the platform, Cuevas said the obvious next piece of work they want to do is to combine interests with other demographic information to see if nanotargeting is “even easier”.

“I mean, it is very likely that an advertiser could combine the age, gender, city (or zip code) of the user with a few interests to nanotarget a user,” he suggested. “We would like to understand how many of these parameters you need to combine. Inferring the gender, age, location and few interests from a user may be much easier than inferring few tens of interests.”

Cuevas added that the nanotargeting paper has been accepted for presentation at the ACM Internet Measurement Conference next month.

Facebook’s lead data protection regulator in the European Union is inching toward making its first decision on a complaint against Facebook itself. And it looks like it’s a doozy.

Privacy campaign not-for-profit noyb today published a draft decision by the Irish Data Protection Commission (DPC) on a complaint made under the EU’s General Data Protection Regulation (GDPR).

The DPC’s draft decision proposes to fine Facebook $36 million — a financial penalty that would take the adtech giant just over two and a half hours to earn in revenue, based on its second quarter earnings (of $29BN).
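
For anyone who wants to sanity-check that comparison, here is the rough arithmetic as a minimal Python sketch, assuming the reported ~$29BN Q2 figure spread evenly over an approximately 91-day quarter:

q2_revenue = 29e9                    # Facebook's reported Q2 2021 revenue, in dollars
hours_in_quarter = 91 * 24           # roughly 91 days in the quarter
revenue_per_hour = q2_revenue / hours_in_quarter   # ~$13.3M per hour
print(36e6 / revenue_per_hour)       # ~2.7 hours to earn the proposed $36M fine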

Yeah, we lol’d too…

But even more worrying for privacy advocates is the apparent willingness of the DPC to allow Facebook to simply bypass the regulation by claiming users are giving it their data because they’re in a contract with it to get, er, targeted ads…

In a summary of its findings, the DPC writes: “There is no obligation on Facebook to seek to rely solely on consent for the purposes of legitimising personal data processing where it is offering a contract to a user which some users might assess as one that primarily concerns the processing of personal data. Nor has Facebook purported to rely on consent under the GDPR.”

“I find the Complainant’s case is not made out that the GDPR does not permit the reliance by Facebook on 6(1)(b) GDPR in the context of its offering of Terms of Service,” the DPC also writes, suggesting it’s totally bona fide for Facebook to claim a legal right to process people’s information for ad targeting because it’s now suggesting users actually signed up for a contract with it to deliver them ads.

Yet — simultaneously — the DPC’s draft decision does find that Facebook infringed GDPR transparency requirements — specifically: Articles 5(1)(a), 12(1) and 13(1)(c) — meaning that users were unlikely to have understood they were signing up for a Facebook ad contract when they clicked ‘I agree’ on Facebook’s T&Cs.

So the tl;dr here is that Facebook’s public-facing marketing — which claims its service “helps you connect and share with the people in your life” — appears to be missing a few critical details about the advertising contract it’s actually asking you to enter into, or something…

Insert your own facepalm emoji right here.

Mind the enforcement gap

The GDPR came into application across the EU back in May 2018 — ostensibly to cement and strengthen long-standing privacy rules in the region, which had historically suffered from a lack of enforcement, by adding new provisions such as supersized fines (of up to 4% of global turnover).

However EU privacy rules have also suffered from a lack of universally vigorous enforcement since the GDPR update. And those penalties that have been issued — including a handful against big tech — have been far lower than that theoretical maximum. Nor has enforcement led to an obvious retooling of privacy hostile business models — yet.

So the reboot hasn’t exactly gone as privacy advocates hoped.

Adtech giants especially have managed to avoid a serious reckoning in Europe over their surveillance-based business models despite the existence of the GDPR — through the use of forum shopping and cynical delay tactics.

So while there is no shortage of GDPR complaints being filed against adtech, complaints over the lack of regulatory enforcement in this area are equally stacking up.

And complainants are now also resorting to legal action.

The issue is, under GDPR’s one-stop-shop mechanism, cross-border complaints and investigations, such as those targeted at major tech platforms, are led by a single agency — typically where the company in question has its legal base in the EU.

And in Facebook’s case (and many other tech giants’) that’s Ireland.

The Irish authority has long been accused of being a bottleneck to effective enforcement of the GDPR, with critics pointing to a glacial pace of enforcement, scores of complaints simply dropped without any discernible activity and — in instances where the complaints aren’t totally ignored — underwhelming decisions eventually popping out the other end.

One such series of adtech-related GDPR complaints was filed by noyb immediately after the regulation came into application three years ago — targeting a number of adtech giants (including Facebook) over what noyb called “forced consent”. And these complaints of course ended up on the DPC's desk.

noyb’s complaint against Facebook argues that the tech giant does not collect consent legally because it does not offer users a free choice to consent to their data being processed for advertising.

This is because under EU law consent must be freely given, specific (i.e. not bundled) and informed in order to be valid. So the substance of the complaint is not exactly as complicated as rocket science.

Yet a decision on noyb’s complaint has taken years to emerge from the DPC’s desk — and even now, in dilute draft form, it looks entirely underwhelming.

Per noyb, the Irish DPC has decided to accept what the campaign group dubs Facebook’s “trick” to bypass the GDPR — in which the company claims it switched away from relying on consent from users as a legal basis for processing people’s data for ad targeting to claiming users are actually in a contract with it to get ads injected into their eyeballs the very moment the GDPR came into force.

“It is painfully obvious that Facebook simply tries to bypass the clear rules of the GDPR by relabeling the agreement on data use as a ‘contract’,” said noyb founder and chair, Max Schrems, in a statement which goes on to warn that were such a basic wheeze allowed to stand it would undermine the whole regulation. Talk about a cunning plan!

“If this would be accepted, any company could just write the processing of data into a contract and thereby legitimize any use of customer data without consent. This is absolutely against the intentions of the GDPR, that explicitly prohibits to hide consent agreements in terms and conditions.”

“It is neither innovative nor smart to claim that an agreement is something that it is not to bypass the law,” he adds. “Since Roman times, the Courts have not accepted such ‘relabeling’ of agreements. You can’t bypass drug laws by simply writing ‘white powder’ on a bill, when you clearly sell cocaine. Only the Irish DPC seems to fall for this trick.”

Ireland has only issued two GDPR decisions in complaints against big tech thus far: Last year, over a Twitter security breach ($550k fine); and earlier this year, in an investigation into the transparency of (Facebook-owned) WhatsApp's T&Cs ($267M fine).

Under the GDPR, a decision on this type of cross-border GDPR complaint must go through a collective review process — where other DPAs get a chance to object. It's a check and balance on one agency getting too cosy with business and failing to enforce the law.

And in both the aforementioned cases, objections raised to the DPC's draft decisions ended up increasing the penalties.

So it is highly likely that Ireland’s Facebook decision will face plenty of objections that end in a tougher penalty for Facebook.

noyb also points to guidelines put out by the European Data Protection Board (EDPB) — which it says make it clear that this kind of consent bypass isn't legal and that such processing must still be treated as consent. But it quotes the Irish DPC saying it is “simply not persuaded” by the view of its European colleagues, and suggests the EDPB will therefore have to step in yet again.

“Our hope lies with the other European authorities. If they do not take action, companies can simply move consent into terms and thereby bypass the GDPR for good,” says Schrems.

noyb has plenty more barbs for the DPC — accusing the Irish authority of holding “secret meetings” with Facebook on its “consent bypass” (not for the first time); and of withholding documents it requested — going on to denounce the regulator as acting like a “‘big tech’ advisor” (not, y’know, a law enforcer).

“We have cases before many authorities, but the DPC is not even remotely running a fair procedure,” adds Schrems. “Documents are withheld, hearings are denied and submitted arguments and facts are simply not reflected in the decision. The [Facebook] decision itself is lengthy, but most sections just end with a ‘view’ of the DPC, not an objective assessment of the law.”

We reached out to the DPC for comment on noyb’s assertions — but a spokesperson declined, citing an “ongoing process”.

One thing is beyond doubt at this point, over three years into Europe’s flagship data protection reboot: There will be even more delay in any GDPR enforcement against Facebook.

The GDPR’s one-stop-shop mechanism — of review plus the chance for other DPAs to file objections — already added multiple months to the two earlier DPC ‘big tech’ decisions. So the DPC issuing another weak draft decision on a late-running investigation looks like it’s becoming a standard procedural lever to decelerate the pace of GDPR enforcement across the EU.

This will only increase pressure for EU lawmakers to agree alternative enforcement structures for the bloc’s growing suite of digital regulations.

In the meantime, as DPAs fight it out to try to hit Facebook with a penalty Mark Zuckerberg can’t just laugh off, Facebook gets to continue its lucrative data-mining business as usual, while EU citizens are left asking: where are my rights?

The world of marketing has become a world of marketing tech. But marketers are not necessarily engineers, so working with the terabytes of data their campaigns produce can be a challenge. Today, a Stockholm startup called Funnel, which has built a no-code platform to help manage that process, is announcing $66 million in funding, a growth round that underscores the demand in the market for such tools. Funnel is describing this as a “pre-IPO” round: it will be its last before it files to go public, likely in its home market, and likely in the next six to 18 months.

Fourth Swedish National Pension Fund (AP4) and Stena Sessan are co-leading the round, with previous backers Balderton Capital, Eight Roads, F-Prime, Oxx, and Industrifonden also participating, among others. Fredrik Skantze, the co-founder and CEO, said the company is not disclosing its valuation, but noted it was considerably higher than the pre-money valuation of its last round, a pre-pandemic $47 million Series B in January 2020.

As a measure of Funnel’s size, the company has about 1,200 large customers, with about half of them in the U.S. They include brands like Home Depot, trivago, Skechers, Samsung, Vodafone, Logitech, Skyscanner, and SAS – Scandinavian Airlines, as well as agencies such as Havas Media (a division of French advertising and PR giant Havas), Ogilvy and DAC Group.

The challenge that Funnel is tackling is that marketing has become a massively digitized business: although outdoor, print, television and other analogue campaigns still account for 40% of marketing spend, that leaves 60% to digital.

That is a proportion that is still very much on the rise, not least because digital marketing provides a more measurable picture of how well a campaign is doing: people engage and respond on social media; they click on links; they share information on other platforms. The growing ubiquity of digital marketing also means that the number of data sources a marketer typically uses keeps growing.

“Four or five years ago, a marketer typically used seven data sources,” said Skantze. “Then that grew to 10. Now, our customers might be using 20 to 30 or even 70 or 80 data sources. If you are active in 50 markets that becomes a complex problem.”

But that also poses a data problem. When there were fewer platforms and marketing campaigns running, a marketer typically relied on spreadsheets to analyse data, or on tools specific to a single campaign. That becomes untenable as the data sources multiply and as expectations of what marketers should get out of that information grow along with them. Working with the data that is produced usually requires the help of a data scientist to organise it so it can be reported in a more usable way.

“It’s not enough to simply use the raw data,” Skantze said. “Facebook alone has 700 metrics, and the data you will get from a campaign just goes to a data warehouse. So you have to make it business-ready, you have to normalize it. That means using something like SQL. And that means marketers themselves cannot work directly with that funnel of data.”

Funnel’s platform is able to “read”, organize and create reports for the various datasets coming out of these campaigns, by way of rule sets that are either pre-designed or that a company can customize for itself. It currently handles some 550 different data sources (from social media platforms to search engines and much more: basically any digital platform a marketer might use to run a campaign), and it is adding more as and when customers need them, Skantze said. Through drop-down menus, non-technical marketers can do, he said, “all the things they would have previously asked an IT person to do, to stage the data.”
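To make the idea of “staging” data concrete, here is a minimal sketch of what normalizing two raw platform exports into a common schema can look like. The field names and mapping rules below are hypothetical and purely illustrative; they are not Funnel’s actual rules, connectors or APIs.

```python
# Hypothetical sketch: mapping rows from two ad platforms onto one common schema.
# Field names and rules are invented for illustration; they are not Funnel's.

# Per-source mapping rules, analogous to the pre-designed rule sets described above.
RULES = {
    "facebook_ads": {
        "date": "date_start",
        "spend": "spend",
        "impressions": "impressions",
        "clicks": "inline_link_clicks",
    },
    "google_ads": {
        "date": "segments.date",
        "spend": "metrics.cost",
        "impressions": "metrics.impressions",
        "clicks": "metrics.clicks",
    },
}

def normalize(source: str, row: dict) -> dict:
    """Map one raw row from a given source onto the common schema."""
    mapping = RULES[source]
    normalized = {"channel": source}
    for field, raw_name in mapping.items():
        normalized[field] = row.get(raw_name)
    return normalized

# One raw row, as a Facebook-style export might deliver it.
raw_row = {"date_start": "2021-09-01", "spend": 120.5, "impressions": 50000, "inline_link_clicks": 730}
print(normalize("facebook_ads", raw_row))
# {'channel': 'facebook_ads', 'date': '2021-09-01', 'spend': 120.5, 'impressions': 50000, 'clicks': 730}
```

In practice a tool like this supplies and maintains hundreds of such mappings, which is exactly the work that would otherwise fall to an IT person or a data scientist.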

The key is also Funnel’s focus on marketing, which sets it apart from competitors providing low-code tools to help organize data for broader business intelligence or reporting applications.

“Five years ago I would have thought that BI tools would solve this, but the problem is that they are too horizontal, and cover any type of data, whether it is marketing, geographical, financial or so on. So within marketing it might cover only five data sources, while we have 550,” he said. “You can’t solve the problem of pulling in the data unless you are vertical in some sort of segment. It’s the same with Snowflake: it has 200 connectors but they are in too many areas.”

Funnel’s future growth may seem all but assured: more online activity breeds more marketing activity, and marketers are expected to report and provide more, not fewer, insights about what they do and discover about their customers. On the other hand, the market is evolving. People do not want to be tracked; regulations coming into force are making it harder to gather marketing data; a growing body of technology is looking for ways to create “synthetic” datasets that could reduce reliance on pulling data out of marketing campaigns, and so mean less business for the Funnels of the world; and platforms are also changing their tune.

“The restrictions that Apple has placed on tracking on iOS has had a big impact especially for B2C companies,” Skantze said. “They are not seeing the same level of performance as before. It’s something our customers are concerned with but so far that hasn’t affected us. Our role is to pull down data, so that others can understand it. We are a bit like Switzerland here. We are a step away from the mechanics of adtech.”

Regardless of how that develops, it is a good argument for Funnel to diversify its platform to cover more than marketing.

There are a number of tools in the market today that are also creating ways to better order data troves so they can be used for better business intelligence. They include Collibra, Acryl and many others. The key with Funnel is that it is presented firmly as a tool for non-technical people, and it has been built with marketing in mind, Skantze noted. That being said, the company has plans to extend beyond marketing over time. “We are already pulling data for sales teams and e-commerce teams,” he said, and it is also eyeing a move into providing data reporting tools for the finance sector.

“Pre-IPO”, in the context of this latest fundraise, is about bringing in institutional investors who will also be part of the IPO process.

“We are long term investors who look for companies we like and hold them for a long time. We were impressed with the size of the global opportunity and the team’s ambition to build a large software company,” said Jannis Kitsakis, senior portfolio manager at AP4, in a statement.

“Funnel has shown strong, predictable growth with impressive go to market metrics and a global footprint,” added Fredrik Konopik, Investment Director at Stena Sessan. “We feel that the company is well positioned for the public market in Sweden.”

TikTok this week presented its new plan to ramp up advertiser investment in its video platform with the expansion of e-commerce, a new promise of “brand safety,” and the launch of several new and interactive ad formats, ranging from clickable stickers to choose-your-own-adventure ads to “super likes” and more. The additions, the company says, will make TikTok’s advertising more interactive and creative, much like the TikTok experience itself.

The company demonstrated its new additions at an online conference aimed at the advertising and marketing community on Tuesday.

At the event, TikTok also announced several new e-commerce partnerships beyond its pilot partner Shopify to make online shopping a more native experience, with the ability for users to go from product discovery to checkout without leaving the app. It noted it’s making live shopping available to brands and offering several ad products made just for e-commerce brands. And, in some markets, TikTok is offering to take on the responsibilities of shipping and fulfillment, as well.

Meanwhile, TikTok’s broader ads business is getting a jolt with the launch of several new products designed to better differentiate TikTok from its social media rivals.

On this front, TikTok introduced a new product called “instant page,” which is a quick-loading landing page that the company claims will load 11 times faster than a typical mobile website. This allows a user who clicks through on an ad to be immediately taken to a page where they’ll be able to see more information from the brand, watch more videos, and swipe through other content, all without leaving the TikTok app. This could compete with Instagram’s Link Sticker, which recently replaced the swipe-up gesture in that app.

Image Credits: TikTok (instant page)

Another new product, “pop out showcase,” aims to make engaging with ads a more interactive experience.

With “pop out showcase,” advertisers can access a library of stickers and images that can be superimposed on top of their TikTok videos to illustrate the products they’re demonstrating or other key story elements. For instance, a beauty brand may add a sticker of a makeup brush to its content that, when tapped, takes the viewer to a page where they can buy a makeup brush from the brand.

Image Credits: TikTok (pop out showcase)

Other new formats encourage TikTok users to tap on the ads themselves.

One of these is TikTok’s “super like,” which offers a way to make “liking” a video a more engaging experience. When users tap the like (heart) button on a TikTok video, the super like can display different types of icons on viewers’ screens. Users are also invited to visit a landing page where they can learn more about the brand’s product or service being featured.

Image Credits: TikTok (super like)

There are also gesture ads that will reveal rewards or other information to users who either slide or tap on videos. Like the “pop out showcase” and “super like,” these ads play to the familiarity that TikTok users, particularly its young Gen Z and millennial demographic, have with how to navigate their smartphones. It’s second nature for younger people to know to tap, swipe, and drag, and these ads offer some form of immediate gratification for doing so, whether that’s an explosion of icons or even a real-life reward.

Image Credits: TikTok (gesture ads)

The final new product is TikTok’s “storytime tool,” which encourages users to become a part of the brand’s storytelling experience. Some streaming services, like Hulu, have experimented with ads that ask the users to play along — but not quite to the extent of controlling the story. Instead of just watching a TikTok ad, this choose-your-own-adventure style format lets users tap to direct the action in the video to shape the narrative and personalize the outcome.

Image Credits: TikTok (storytime tool)

“All these solutions are a part of our goal to enable advertisers to create the most engaging ads in ways that taps into their creativity and fun that exists on the platform,” said Jaclyn Fitzpatrick, TikTok Product Strategist, Global Business Marketing, when introducing the new lineup.

Of course, performance and measurement capabilities are just as important to marketers as the ad creatives themselves. To address these concerns, the company touted its TikTok Ad Manager, editing suite, trends and insights, and other new tools for buying, scaling, and analyzing campaigns. It also launched a new buying type called Reach & Frequency, which lets advertisers either target a higher volume of users through extended reach or get more impressions from the same number of users by opting into a higher frequency for their ad placements.
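The trade-off here is just the basic arithmetic of impressions = reach × frequency. The sketch below is a rough, hypothetical illustration of that trade-off; the numbers are invented and have nothing to do with TikTok’s actual pricing or delivery.

```python
# Hypothetical illustration of the reach vs. frequency trade-off.
# With a fixed number of impressions to serve, raising the average frequency
# per user lowers the number of unique users reached, and vice versa.

def unique_reach(impression_budget: int, avg_frequency: float) -> int:
    """impressions = reach x frequency, so reach = impressions / frequency."""
    return int(impression_budget / avg_frequency)

budget = 10_000_000  # total ad impressions the campaign will serve

for freq in (1.0, 2.0, 4.0):
    print(f"avg frequency {freq}: ~{unique_reach(budget, freq):,} unique users reached")

# avg frequency 1.0: ~10,000,000 unique users reached
# avg frequency 2.0: ~5,000,000 unique users reached
# avg frequency 4.0: ~2,500,000 unique users reached
```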

TikTok also made a commitment to brand safety — an issue that’s plagued YouTube in the past — with the launch of a proprietary brand safety inventory filter.

The solution leverages machine learning technology to classify a video’s risk based on the video’s content, text, audio, and more, so advertisers can make decisions about which kind of inventory they want their ads to run adjacent to, the company explained. TikTok says the new filter is aligned with the Global Alliance for Responsible Media (GARM)’s industry framework, and it has partnered with Integral Ad Science (IAS), Zefr, and OpenSlate to help ensure ads run next to brand-safe content.

The message to advertisers, clearly, is that TikTok should be considered not only because of its sizable audience — now 1 billion monthly actives, it says — but also because of its advertising toolset.

To date, marketers haven’t carved out as much of their spending for TikTok compared with other major platforms, like Facebook and Instagram. But TikTok parent company, ByteDance, has been making inroads in the global ad market, with annual revenue across its apps more than doubling in 2020 to reach $34.3 billion. In the U.S., TikTok was expected to bring in $500 million in 2020, up from $200-$300 million in the year prior, according to a report by The Information. (Some of that is from in-app purchases, of course.)

As TikTok has scaled its ad business, its ad prices have been steadily increasing, too. Bloomberg noted this summer that it was jacking up the price of home page takeover ads, its most valuable real estate, to more than $2 million on top days, like holidays. Reuters also noted that TikTok saw a 500% increase in the number of advertisers running campaigns in the U.S. from the start of 2020 to the end, though ad sales were still small compared with other major platforms.

Image Credits: eMarketer

That continues to be the case in 2021, as TikTok’s U.S. ad revenues are dwarfed by other social brands. In fact, TikTok was not even broken out in eMarketer’s recent tabulation of U.S. ad revenues, where it’s instead lumped into an “Other” category with other, smaller social networks which, combined, are expected to reach $1.3 billion in 2021.

A coalition of digital marketing firms and others has taken its lobbying against Google’s plan to phase out tracking cookies — by replacing them with alternative technologies which the tech giant claims will protect user privacy — to the European Union, lodging a formal complaint with the bloc’s antitrust regulators.

The self-styled ‘Movement for an Open Web’ (MOW), as the opaque group pushing the complaint is now called (RIP ‘Marketers for an Open Web’), put out a press release announcing the move today — and claiming it has provided the Commission with “evidence of Google’s technology changes, how they impact choice and competition”, and offered some “potential remedies”.

Google and the Commission have been contacted for comment on the complaint.

EU regulators finally opened an investigation of Google’s adtech this summer, announcing an in-depth probe in June that they said would include delving into the Privacy Sandbox proposal.

A UK probe of the Privacy Sandbox was announced months earlier, in January, and the issue remains a live one on the Competition and Markets Authority’s (CMA) desk, with Google suggesting concessions this summer.

MOW’s suggested remedies to EU regulators include requirements that Google should notify the EU ahead of time over any changes to its browser (Chrome/Chromium) — in order to “enabl[e] privacy and competition assessments to be made by the EU and data protection authorities in line with Google’s proposed remedy to the UK’s Competition and Market Authority and Information Commissioner’s Office”, as it puts it.

So here you can see the strange sight of a campaign that was kicked off by a bunch of marketers seemingly lobbying for user ‘privacy’; but of course they would say that, wouldn’t they, given the EU has already flagged “user privacy” as one of the areas its antitrust investigation will be considering. (Additionally, the UK’s CMA and Information Commissioner’s Office are working jointly on the Privacy Sandbox complaint.)

Notably MOW’s website still does not disclose exactly who the members of this anti-Google-Privacy-Sandbox/pro-tracking-cookie group are — aside from the name of its director, James Rosewell (co-founder of UK mobile marketing company, 51 Degrees).

Instead it writes that: “MOW is supported by businesses that between them have annual revenues of $40BN+. The name has been changed because more businesses, not just marketing companies, are realising the threat from Privacy Sandbox and the benefit of joining the MOW campaign.”

It may be the case that MOW has indeed evolved its membership to include entities with a genuine (rather than opportunistic) concern for user privacy. But given that it won’t disclose its membership it’s impossible to know…

Commenting on the complaint in a statement, Rosewell said: “The internet was originally envisaged as an open environment outside the control of any single body. Google maintains it is making these changes to protect privacy but if not properly policed, the move threatens digital media, online privacy and innovation.

“Solutions aligned to laws – not self-serving misuse of the web architecture by the members of Big Tech such as Google – are needed. More people, surrendering more personal data to fewer companies doesn’t improve anyone’s privacy while stifling competition and boosting their huge profits even further.”

Lawyer Tim Cowen, who is listed as a legal advisor to MOW and chair of the antitrust practice at Preiskel & Co LLP, added: “We’re asking that the EU Commission create a level playing field for all digital businesses, to maintain and protect an open web. Google says they’re strengthening ‘privacy’ for end users but they’re not, what they’re really proposing is a creepy data mining party.”

Google has already delayed its timeline for implementing Privacy Sandbox — announcing back in June that it would be taking longer to make the transition away from tracking cookies as a result of ongoing engagement with the UK’s antitrust regulator.

It has also offered not to phase out tracking cookies unless or until the UK’s Competition and Markets Authority is happy that the transition to alternative technologies can be done in a manner that protects competition and privacy.

So MOW, presumably, scents blood — spying an opportunity to press the case for a wider, pan-European freeze on Google’s Privacy Sandbox play, rather than merely getting some UK-specific checks and balances.

Asked if he believes the Commission will be sympathetic to MOW’s complaint, Dr Lukasz Olejnik, a privacy researcher and consultant, told TechCrunch: “The Commission currently has its own investigation, so I could envisage that this additional complaint could constitute a case in point. Technology anti-competition proceedings tend to be slow, so the investigation could simply accumulate things in a snowball manner.”

“A long time ago, the European Data Protection Supervisor identified potential links between privacy and competition. Opinions and activities followed. However, it is true that EU Competition regulators are prioritising their own principles, rather than focusing on privacy,” he also told us.

“In fact, I doubt that any [world] competition regulator would be balancing competition with privacy. They should, I believe, consider it an internal aspect of society and technology. Perhaps it will happen to some degree? But while we do know that UK CMA and ICO are in contact, it is not known if the EC and EDPS are also in sync in relation to the latest investigation.”

Given that Google has already proposed a number of legally binding commitments to the UK’s CMA around Privacy Sandbox, there are several interested entities in this story who may see a ‘quick win’ if the bloc can be persuaded to swiftly adopt much the same framework, assuming the CMA also agrees to it.

“I think that the most effortless action would be to take the commitments issued to CMA, copy-and-paste them, perhaps modify a bit and hand to the EU. I’m not sure if this would work out so easily. However, it is clear that Google wants to remove this potential roadblock as fast as possible,” agreed Olejnik.

“It’s quite clear that consumer detriment may happen not only from market conduct but other aspects are notable. For example privacy and data protection standards,” he added.