
In the small hours local time, European Union lawmakers secured a provisional deal on a landmark update to rules for digital services operating in the region — reaching political agreement after a final late night/early morning of compromise talks on the detail of what is a major retooling of the bloc’s existing ecommerce rulebook.

The political agreement on the Digital Services Act (DSA) paves the way for formal adoption in the coming weeks and for the legislation entering into force — likely later this year. The rules won’t start to apply until 15 months after that, though — so there’s a fairly long lead-in time to allow companies to adapt.

The regulation is wide ranging — setting out to harmonize content moderation and other governance rules to speed up the removal of illegal content and products. It addresses a grab-bag of consumer protection and privacy concerns, as well as introducing algorithmic accountability requirements for large platforms to dial up societal accountability around their services, while ‘know your customer’ (KYC) requirements are intended to do the same for online marketplaces.

How effective the package will be is of course tbc, but the legislation agreed today goes further than the Commission proposal in a number of areas — with, for example, the European Parliament pushing to add limits on tracking-based advertising.

A prohibition on the use of so-called ‘dark patterns’ for online platforms is also included — but not, it appears, a full blanket ban for all types of digital service (per details of the final text shared with TechCrunch via our sources).

See below for a fuller breakdown of what we know so far about what’s been agreed. 

The DSA was presented as a draft proposal by the Commission back in December 2020, which means it’s taken some 16 months of discussion — looping in the EU’s other institutions: the directly elected European Parliament and the Council, which represents EU Member States’ governments — to reach this morning’s accord.

After last month’s deal on the Digital Markets Act (DMA), which selectively targets the most powerful intermediating platforms (aka gatekeepers) with an ex ante, pro-competition regime, EU policy watchers may be forgiven for a little euphoria at the (relative) speed with which substantial updates to digital rules are being agreed.

Big Tech’s lobbying of the EU over this period has been of an unprecedented scale in monetary terms. Notably, giants like Google have also sought to insert themselves into the ‘last mile’ stage of discussions where EU institutions are supposed to shut themselves off from external pressures to reach a compromise, as a report published earlier today by Corporate Europe Observatory underlines. That illustrates what they believe is at stake.

The full impact of Google et al‘s lobbying won’t be clear for months or even years. But, at the least, Big Tech’s lobbyists were not successful in entirely blocking the passage of the two major digital regulations — so the EU is saved from an embarrassing repeat of the (stalled) ePrivacy update. That may indicate regional lawmakers are wising up to the tech industry’s tactics. Or, well, that Big Tech’s promises are not as shiny and popular as they used to be.

The Commission’s mantra for the DSA has always been that the goal is to ensure that what’s illegal offline will be illegal online. And in a video message tweeted out in the small hours local time, a tired but happy looking EVP, Margrethe Vestager, said it’s “not a slogan anymore that what’s illegal offline should also be seen and dealt with online”.

“Now it is a real thing,” she added. “Democracy’s back.”

In a statement, Commission president Ursula von der Leyen added:

“Today’s agreement on the Digital Services Act is historic, both in terms of speed and of substance. The DSA will upgrade the ground-rules for all online services in the EU. It will ensure that the online environment remains a safe space, safeguarding freedom of expression and opportunities for digital businesses. It gives practical effect to the principle that what is illegal offline, should be illegal online. The greater the size, the greater the responsibilities of online platforms. Today’s agreement — complementing the political agreement on the Digital Markets Act last month — sends a strong signal: to all Europeans, to all EU businesses, and to our international counterparts.”

In its own press release, the Council called the DSA “a world first in the field of digital regulation”.

While the parliament said the “landmark rules… effectively tackle the spread of illegal content online and protect people’s fundamental rights in the digital sphere”.

In a statement, its rapporteur for the file, MEP Christel Schaldemose, further suggested the DSA will “set new global standards”, adding: “Citizens will have better control over how their data are used by online platforms and big tech-companies. We have finally made sure that what is illegal offline is also illegal online. For the European Parliament, additional obligations on algorithmic transparency and disinformation are important achievements. These new rules also guarantee more choice for users and new obligations for platforms on targeted ads, including bans to target minors and restricting data harvesting for profiling.”

Other EU lawmakers are already dubbing the DSA a “European constitution for the Internet”. And it’s hard not to see the gap between the EU and the US on comprehensive digital lawmaking as widening.

Vestager’s victory message notably echoes encouragement tweeted out earlier this week by the former US secretary of state, senator, first lady and presidential candidate, Hillary Clinton, who urged Europe to get the DSA across the line and “bolster global democracy before it’s too late”, as she put it, adding: “For too long, tech platforms have amplified disinformation and extremism with no accountability. The EU is poised to do something about it.”

DSA: What’s been agreed?

In their respective press releases trumpeting the deal, the parliament and Council have provided an overview of key elements of the regulation they’ve agreed.

It’s worth emphasizing that the full and final text hasn’t been published yet — and won’t be for a while. It’s pending legal checks and translation into the bloc’s many languages — which means the full detail of the regulation and the implications of all its nuances remain tbc.

But here’s an overview of what we know so far…

Scope, supervision & penalties

On scope, the Council says the DSA will apply to all online intermediaries providing services in the EU.

The regulation’s obligations are intended to be proportionate to the nature of the services concerned and the number of users — with extra, “more stringent” requirements for “very large online platforms” (aka VLOPs) and very large online search engines (VLOSEs).

Services with more than 45M monthly active users in the EU will be considered VLOPs or VLOSEs. So plenty of services will reach that bar — including, for example, the homegrown music streaming giant Spotify.

“To safeguard the development of start-ups and smaller enterprises in the internal market, micro and small enterprises with under 45 million monthly active users in the EU will be exempted from certain new obligations,” the Council adds.

The Commission itself will be responsible for supervising VLOPs and VLOSEs for the obligations that are specific to them — which is intended to avoid bottlenecks in the oversight and enforcement of larger platforms (such as happened with the EU’s GDPR).

But national agencies at the Member State level will supervise the wider scope of the DSA — so EU lawmakers say this arrangement maintains the country-of-origin principle that’s baked into existing digital rules.

Penalties for breaches of the DSA can scale up to 6% of global annual turnover.

Per the parliament, there will also be a right for recipients of digital services to seek redress for any damages or loss suffered due to infringements by platforms.

Content moderation & marketplace rules

The content moderation measures are focused on harmonizing rules to ensure “swift” removal of illegal content.

This is being done through what the parliament describes as a “clearer ‘notice and action’ procedure” — where “users will be empowered to report illegal content online and online platforms will have to act quickly”, as it puts it.

It also flags support for victims of cyber violence — who it says will be “better protected especially against non-consensual sharing (revenge porn) with immediate takedowns”.

MEPs say fundamental rights are protected against the risk that the regulation’s pressure on platforms to act quickly leads to over-removal of content, via “stronger safeguards to ensure notices are processed in a non-arbitrary and non-discriminatory manner and with respect for fundamental rights, including the freedom of expression and data protection”.

The regulation is also intended to ensure swift removal of illegal products/services from online marketplaces. So there are new requirements incoming for ecommerce players.

On this, the Council says the DSA will impose a “duty of care” on marketplaces vis-à-vis sellers who sell products or services on their online platforms.

“Marketplaces will in particular have to collect and display information on the products and services sold in order to ensure that consumers are properly informed,” it notes, although there will be plenty of devil in the detail of the exact provisions.

On this, the parliament says marketplaces will “have to ensure that consumers can purchase safe products or services online by strengthening checks to prove that the information provided by traders is reliable (‘Know Your Business Customer’ principle) and make efforts to prevent illegal content appearing on their platforms, including through random checks”.

Random checks on traders/goods had been pushed for by consumer protection organizations — who had been concerned the measure would be dropped during trilogues — so EU lawmakers appear to have listened to those concerns.

Extra obligations for VLOPs/VLOSEs

These larger platform entities will face scrutiny of how their algorithms work from the European Commission and Member State agencies — which the parliament says will both have access to the algorithms of VLOPs.

The DSA also introduces an obligation for very large digital platforms and services to analyse “systemic risks they create” and to carry out “risk reduction analysis”, per the Council.

The analysis must be done annually — which the Council suggests will allow for monitoring of, and reduced risks in, areas such as: the dissemination of illegal content; adverse effects on fundamental rights; manipulation of services having an impact on democratic processes and public security; adverse effects on gender-based violence and on minors; and serious consequences for the physical or mental health of users.

Additionally, VLOPs/VLOSEs will be subject to independent audits each year, per the parliament.

Large platforms that use algorithms to determine what content users see (aka “recommender systems”) will have to provide at least one option that is not based on profiling. Many already do — although they often also undermine these choices by applying dark pattern techniques to nudge users away from control over their feeds, so holistic supervision will be needed to meaningfully improve user agency.

There will also be transparency requirements for the parameters of these recommender systems with the goal of improving information for users and any choices they make. Again, the detail will be interesting to see there.

Limits on targeted advertising  

Restrictions on tracking-based advertising appear to have survived the trilogue process with all sides reaching agreement on a ban on processing minors’ data for targeted ads.

This applies to platforms accessible to minors “when they are aware that a user is a minor”, per the Council.

“Platforms will be prohibited from presenting targeted advertising based on the use of minors’ personal data as defined in EU law,” it adds.

A final compromise text shared with TechCrunch by our sources suggests the DSA will stipulate that providers of online platforms should not do profile-based advertising “when they are aware with reasonable certainty that the recipient of the service is a minor”.

A restriction on the use of sensitive data for targeted ads has also made it into the text.

The parliament sums this up by saying “targeted advertising is banned when it comes to sensitive data (e.g. based on sexual orientation, religion, ethnicity)”.

The wording of the final compromise text which we’ve seen states that: “Providers of online platforms shall not present advertising to recipients of the service based on profiling within the meaning of Article 4(4) of Regulation 2016/679 [aka, the GDPR] using special categories of personal data as referred to in article 9(1) of Regulation 2016/679.”

Article 4(4) of the GDPR defines ‘profiling’ as: “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements;”.

The GDPR, meanwhile, defines special category data as personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as biometric and health data, and data on sex life and/or sexual orientation.

So targeting ads based on tracking or inferring users’ sensitive interests is — on paper — facing a hard ban in the DSA.

Ban on use of dark patterns

A prohibition on dark patterns also made it into the text. But, as we understand it, this only applies to “online platforms” — so it does not look like a blanket ban across all types of apps and digital services.

That is unfortunate. Unethical practices shouldn’t be acceptable no matter the size of the business.

On dark patterns, the parliament says: “Online platforms and marketplaces should not nudge people into using their services, for example by giving more prominence to a particular choice or urging the recipient to change their choice via interfering pop-ups. Moreover, cancelling a subscription for a service should become as easy as subscribing to it.”

The wording of the final compromise text that we’ve seen says that: “Providers of online platforms shall not design, organize or operate their online interfaces in a way that deceives, manipulates or otherwise materially distorts or impairs the ability of recipients of their service to make free and informed decisions” — after which there’s an exemption for practices already covered by Directive 2005/29/EC [aka the Unfair Commercial Practices Directive] and by the GDPR.

The final compromise text we reviewed further notes that the Commission may issue guidance on specific practices — such as platforms giving more prominence to certain choices, repeatedly requesting that a user make a choice after they already have, and making it harder to terminate a service than to sign up. So the effectiveness of the dark pattern ban could well come down to how much attention the Commission is willing to give to a massively widespread online problem.

The wording of the associated recital in the final compromise we saw also specifies that the dark pattern ban (only) applies to “intermediary services”.

Crisis mechanism 

An entirely new article was also added to the DSA following Russia’s invasion of Ukraine — and in connection with rising concern around the impact of online disinformation — creating a crisis response mechanism which will give the Commission extra powers to scrutinize VLOPs/VLOSEs in order to analyze the impact of their activities on the crisis in question.

The EU’s executive will also be able to come up with what the Council bills as “proportionate and effective measures to be put in place for the respect of fundamental rights”.

The mechanism will be activated by the Commission on the recommendation of the board of national Digital Services Coordinators.

The UK has announced a bundle of consumer protection and competition reforms which could see platforms that fail to tackle fake reviews fined up to 10% of their global annual turnover.

Also incoming: stronger powers for the national competition regulator to prevent tech giants from buying up startups or smaller rivals with the intention of shuttering a competing service (so-called ‘killer acquisitions’).

However the government still hasn’t decided how to deal with the broad online scourge of dark pattern design which uses deceptive and/or manipulative tactics to dupe web users into spending more time or money than they intend on a digital service — saying it’s “seeking further evidence” on how best to arm regulators to combat these unethical tactics.

The reforms it has agreed follow a consultation last year on reforming competition and consumer policy, which saw the government take feedback from businesses, consumer groups, regulators and others on how to strengthen legislation in these areas.

Consumer protection & competition reforms

In a response to the consultation published by the department for Business, Energy & Industrial Strategy (BEIS) today, the government said policies it’s proposing fall into three areas: Competition reforms to ensure the system is “fit for the digital age”; consumer rights reforms to keep pace with digital developments and tackle specific issues like fake reviews; and consumer enforcement reforms to empower the national antitrust watchdog to intervene effectively.

In a press release announcing the package of reforms this morning, BEIS said the plan would make it “clearly illegal” to pay someone to write or host a fake review.

There will also be “clearer rules” for businesses to make it easier for consumers to opt out of subscriptions so they are not stuck paying for things they no longer want.

So-called ‘subscription traps’ — in which businesses make it difficult for consumers to exit a contract — will also be targeted by new rules requiring that companies:

  • provide clearer information to consumers before they enter a subscription contract
  • issue a reminder to consumers that a free trial or low-cost introductory offer is coming to an end, and a reminder before a contract auto-renews onto a new term
  • ensure consumers can exit a contract in a straightforward, cost-effective and timely way

In a major change, the Competition and Markets Authority (CMA) will be able to directly enforce consumer law under the reform plan, rather than needing to go through a court process — with the aim of dialling up the speed of enforcement.

The watchdog will get new powers to fine firms up to 10% of their global turnover for “mistreating customers”, as BEIS put it, or up to £300,000 in the case of an individual.

It will also be able to award compensation to consumers, instead of that being the preserve of the courts.

There will also be measures aimed at helping consumers and traders resolve more disputes without needing to go to court — by improving Alternative Dispute Resolution (ADR) services in consumer markets, including via amendments to the regulations governing ADR, per BEIS.

The government says the average UK household spends around £900 each year influenced by online reviews — and £60 on “unwanted subscriptions”.

The slated consumer protection reforms will apply in England, Scotland and Wales (the area is devolved in Northern Ireland).

In a statement, consumer minister Paul Scully said:

“We’re making sure consumer protections keep pace with a modern, digitised economy. No longer will you visit a 5 star-reviewed restaurant only to find a burnt lasagne or get caught in a subscription in which there’s no end in sight. Consumers deserve better and the majority of businesses out there doing the right thing deserve protection from rogue traders undermining them.”

It’s not clear when exactly these new powers will come in. Legislation will need to be formally proposed and presented to parliament to undergo the usual process of scrutiny before it can become law and enter into force.

Prime minister Boris Johnson’s government also does not have the greatest record on swiftly legislating in these areas (consumer protection and competition) so it could be several years before new rules apply.

Fake reviews

On fake reviews, the government says it will consult on a new law to tackle fake reviews that would make it illegal to:

  • commission someone to write or submit a fake review
  • host consumer reviews without taking reasonable steps to check they are genuine
  • offer or advertise to submit, commission or facilitate fake reviews

In terms of the impact on platforms and marketplaces, much will depend on exactly what “reasonable steps” boils down to in that context.

More thorough checks would be more expensive for platforms to implement. But if the measures are too weak and easy for scammers to circumvent it’ll be consumers left disappointed that fake reviews continue to proliferate.

The CMA — which has been broadly investigating online reviews since 2015 — has instigated a number of interventions against platforms on the issue of fake reviews specifically in recent years, including actions targeted at eBay, Facebook, Amazon and Google.

It has also expressed frustration with certain companies over their slow response to pressure to stop the trade in fake reviews, with CMA CEO Andrea Coscelli saying last year that it was “disappointing” Facebook had taken over a year to fix issues the regulator had flagged, for example.

A threat of fines that could — under the government’s reform plan — stretch into billions of dollars for a tech giant like Facebook would be more likely to concentrate C-suite minds on compliance with this issue.

Commenting in a statement on the full package of reforms, Coscelli said:

“This is an important milestone towards strengthening the CMA’s ability to hold companies to account, promote fair and open markets, and protect UK consumers. The CMA stands ready to assist the government to ensure that legislation can be brought forward as quickly as possible, so consumers and businesses can benefit.”

It may be that Facebook is also the inspiration for other planned changes to beef up penalties for breaches.

These reforms will see the regulator able to impose fines worth up to 5% of a business’ annual global turnover (as well as additional daily penalties for continued non-compliance) for breaches of undertakings given to it; and able to levy penalties worth up to 1% of a business’ annual global turnover (plus additional daily penalties if the breach continues) in the case of non-compliance with an information notice, concealing evidence or providing false information.

That’s notable after Facebook was fined $70M by the CMA last year for deliberately withholding information related to the regulator’s oversight of its acquisition of Giphy — the first such finding of a breach of an order by a company “consciously refusing to report all the required information”, as the CMA put it at the time.

The regulator subsequently ordered Facebook to undo the Giphy purchase — which was also the first time the CMA had blocked such a major digital acquisition.

Killer acquisitions

The tech giant’s power to inspire major regulatory reforms looks undeniable — given the government also intends to beef up the watchdog’s powers to combat killer acquisitions. (Facebook had shut down Giphy’s competing ad product after buying the smaller business, triggering competition concerns, an in-depth probe and, finally, an order to reverse the acquisition.)

Other measures slated as incoming through the reform package include powers to strengthen the CMA’s ability to gather evidence to combat cartel-style anticompetitive behavior, where companies collude to bump up prices, per BEIS.

The CMA will also be empowered to fine businesses for anticompetitive abuses even in smaller markets as the government says it will reduce the minimum turnover threshold for immunity from financial penalties from £50M to £20M.

Smaller businesses will see some relief in the form of a government pledge to cut their M&A red tape by excluding mergers between small businesses — where each party’s UK turnover is less than £10M — from the CMA’s merger control altogether.

More details on the competition components of the reform are contained in the government response to the consultation — where it writes that it is progressing the following policies:

  • retaining a voluntary and non-suspensory merger control regime
  • adjusting the thresholds for the CMA’s jurisdiction to better target the mergers most likely to cause harm and ensure the regime remains proportionate:
    • Raising the turnover threshold in line with inflation (>£70m to >£100m UK turnover)
    • Creating an additional basis for establishing jurisdiction to enable review of so-called ‘killer acquisition’ and other mergers which do not involve direct competitors. Jurisdiction would be established where at least one of the merging businesses has: (a) an existing share of supply of goods or services of 33% in the UK or a substantial part of the UK; and (b) a UK turnover of £350m. In response to feedback received these thresholds have been raised from the levels originally consulted upon
  • introducing a small merger safe harbour, exempting mergers from review where each party’s UK turnover is less than £10 million, to reduce the burden on small and micro enterprises
  • government will also continue to monitor the operation of the share of supply test and may consider further proposals on how to reform it
  • enabling the CMA to deliver more effective and efficient merger investigations by:
    • accepting commitments from businesses which resolve competition issues earlier during a phase 2 investigation
    • enhancing and streamlining the merger ‘fast track’ procedure
    • updating how the CMA is required to publish its merger notice

Competition law covers the whole of the UK so these wider reforms will apply in all nations.

BEIS also noted today that it is developing closer ties with international partners as a result of the CMA dealing with more cross-border cases following Brexit.

“The government is making overseas disclosures of information held by a UK competition or consumer authority more streamlined, and introducing new powers on investigative assistance,” it added.

Moving like sludge…

While the UK government continues to consider how best to tackle dark pattern design, EU lawmakers have a chance to take the lead and ban these manipulative tactics in already proposed legislation — assuming the Council agrees with MEPs to include a prohibition on such practices in the Digital Services Act (another ‘trilogue’ compromise negotiation is scheduled for tomorrow, so the issue is still a live one for the bloc).

The EU also already agreed a major ex ante reform of competition law to target tech giants — aka, the Digital Markets Act — which is expected to come into force later this year.

The UK’s own slated ‘pro-competition’ ex ante reform, meanwhile, is still pending legislation, despite the regime change being announced back in November 2020.

A Digital Markets Unit (DMU) was set up last year but it lacks the necessary powers to take action against tech giants’ market abuses, meaning the CMA has to tackle market power related problems using classic (slow) competition powers.

A UK government claim to be “moving in a more agile way than the EU, whilst maintaining high standards” — penned by the BEIS secretary in his ministerial foreword to the response to the consultation outcome — looks questionable in light of how little progress has been made in bringing forward legislation to deliver on the long-trailed promise of a major competition reboot.

The CMA has been investigating a number of concerns about the market power of tech giants in recent years — including undertaking a market study of online advertising back in 2019 which concluded there were serious structural problems linked to the dominance of Google and Meta/Facebook.

However the regulator eschewed intervening when it reported its concerns — opting to wait for the government to reform the competition regime.

Another ongoing CMA market study — examining the Apple-Google mobile duopoly — has also raised substantial competition concerns but, again, the regulator has suggested these issues are best addressed by the DMU once it’s empowered, meaning the structural remedies needed to tackle serious competition issues remain on pause.

Jailed Kremlin critic Alexey Navalny has called on tech giants Google and Meta/Facebook to help circumvent Putin’s grip on the media and get information out to ordinary Russians about what’s actually going on in the war in Ukraine — by allowing their ad targeting tools and platforms to be selectively repurposed to run a nationwide ad campaign that shows the bloody reality of the Kremlin’s so-called ‘special military operation’.

It’s a fascinating — if highly unlikely — idea to try to work around draconian laws Putin’s regime has implemented which mean Russian citizens are risking lengthy jail terms if they post anti-war comments themselves or even just like a social media post with an anti-war message.

There would, undoubtedly, be a certain symmetrical irony if Western ad targeting tools, which the Kremlin routinely appropriates to spread its propaganda and attack the West, got redirected and reversed into a firehose of truthful messaging targeted at Russian citizens to counter Kremlin propaganda. Albeit, allowing selective, one-way speech would likely just invite counter criticism that the West was undermining its own commitment to free speech.

Another reason it’s unlikely is it would invite further retaliation on the tech platforms by the Kremlin, which — since invading Ukraine — has already blocked access to Facebook and Instagram, restricted access to Google News and levied a series of fines against YouTube.

Google and Meta also stopped selling ads inside Russia in the wake of the invasion. Although it’s less clear whether they have stopped accepting payment outside Russia for ads that are targeted inside the country — however Google, for one, has imposed what it calls a ‘global sensitive events’ policy which blocks ads that are related to the Ukraine conflict and/or otherwise seek to take advantage of the situation.

So it’s likely Navalny’s suggestion would run directly counter to that — and be impossible without Google changing its policy. But that hasn’t stopped Putin’s bravest critic from making the ask.

Targeted ads attack

In a lengthy tweet storm in Russian [translated using machine translation] laying out his thinking — and making a direct appeal to both Google and Meta and a number of Western political leaders — Navalny starts by describing this as “a thread about how to open a second front against the Kremlin war criminal. Informational”.

“The first thing to understand is that there is no 75% support for the war with Ukraine in Russia. This is a lie of the Kremlin,” he writes, suggesting that while there is widespread opposition to Putin’s aggression among Russian citizens, it is almost impossible for normal people to voice that opposition, given they face up to 15 years in jail for just using the word ‘war’ — and later claiming: “Putin’s secret services now have no more important things to do than arrest people for likes under anti-war posts.”

Per the thread, even just standing in the street in Moscow with a copy of Tolstoy’s classic, War and Peace, got a man arrested recently.

“The fact is that the majority of Russian citizens have a completely distorted idea of what is happening in Ukraine,” writes Navalny. “For them, Putin is waging a small, very successful war with little bloodshed. Our military are heroes, and there are almost no casualties among them.”

To counter such pervasive Kremlin propaganda, Navalny suggests leveraging what he says is still very heavy use (86%+ of Russian adults) of Western social media platforms and messaging apps, name-checking YouTube, Instagram, WhatsApp, Google and Facebook — and flooding those platforms with “a huge national anti-war campaign”, starting with an ad campaign.

“200 million impressions per day to hit every Russian Internet user twice. Stories, posts and pre-rolls. Throughout Russia, in cities and villages. In every tablet and every phone,” he suggests, fleshing out a scenario in which Western ad platforms agree to be mobilized and weaponized against Putin’s aggression.

For his idea to work, Navalny says a nationwide ad campaign is needed — and also targeted ads specifically, with the Kremlin critic writing that when Google and Meta stopped selling ads inside Russia it “seriously complicated the work of the opposition”.

“After all, we need to agitate not supporters, but opponents and doubters. And when we could give well-targeted ads, it worked. We gave battle to Putin’s propaganda and won,” he goes on, suggesting that de facto opposition election wins were made possible with the help of targeted ads.

“In the last elections to the State Duma, our candidates won almost all the districts of Moscow and St. Petersburg, according to the protocols, and were stopped only by large-scale falsifications. And yes, it works not only in Moscow, but also in Siberia, and throughout the country. Checked.”

He then segues into arguing that stopping the war is “more important than any election” — before conceding the core difficulty with an idea that calls for the West to weaponize speech via adtech, even for an anti-war purpose.

“I understand that in a democratic system, the authorities cannot order @Google and @Meta to advertise for one and forbid it for the other,” he writes. “However, I urge @POTUS, @StateDept, @BorisJohnson, @EU_Commission, @JosepBorrellF, @vonderleyen, @finkd, @sundarpichai urgently to find a solution to crush Putin’s propaganda, using the advertising opportunities of social networks.”

The price of sending vast amounts of anti-war propaganda (200M ad views, at least 300k clicks or 8M video views) would cost the same as just one shot from a Javelin anti-tank missile ($230k), Navalny also suggests, adding: “It is necessary to give an opportunity to honest and impartial media in Russian – @meduzaproject, @wwwproektmedia, @the_ins_ru, @mediazzzona, @istories_media, @bbcrussian, @dw_russian, @VOANews – to radically expand the audience through direct and well-targeted advertising, taking into account all the possibilities @Google and Facebook (@Meta).”

We reached out to Google and Meta for a response to Navalny’s suggestion but at press time neither company had responded.

Unsurprisingly, given its own, well-documented ad targeting propaganda playbook, the Kremlin itself seems wise to the risk that targeted advertising could be weaponized against it — also warning YouTube last month over what it described as anti-Russian “information attacks” which it said were being spread on its platform.

Back in February, soon after Russia’s invasion of Ukraine had started, the state media regulator, Roskomnadzor, also put out a warning over “false” information it claimed was being spread via online ads using Google Ads contextual advertising — saying it had written to Google to complain about it, while also warning owners of domestic Internet resources and ad networks that they’re risking a fine of up to 5 million rubles if found distributing “unreliable socially significant information”. It also warned it would block any Internet resources found hosting such ads.

Interestingly, though, the Kremlin has not moved to block YouTube itself — as some had feared might happen, given earlier bans slapped on Facebook and Instagram.

Propaganda channels

The persistence of YouTube inside Russia when other Western services have been blocked hints at how much power the video sharing platform has — power the Kremlin can and does harness for its own ends of course, as a major outlet for spreading its propaganda.

While Google blocked the Russian state media affiliated channels, RT and Sputnik, on YouTube across Europe, after an EU ban on their distribution last month, it notably initially left the channels running inside Russia itself. Although it later (on March 11) expanded the policy to block the channels globally — apparently deplatforming those channels inside Russia too.

In recent weeks, Roskomnadzor has stepped up a steady stream of public ire targeted at YouTube in relation to a number of channel suspensions — issuing a press release earlier this month, for example, in which it “demanded that the American company Google LLC, which owns the YouTube Internet service, immediately restore access to the Duma TV channel of the State Duma of the Russian Federation on its video hosting platform and explain the reason for the introduction of such restrictions”.

“The State Duma of the Federal Assembly of the Russian Federation is a representative and legislative body of power in the Russian Federation, one of the chambers of the Parliament. The blocking of the youtube channel of the state body, carried out by the administration of the video hosting, hinders the dissemination of information and free access to it,” the regulator went on, before accusing Google of adhering to “a pronounced anti-Russian position in the information war unleashed by the West against our country”; and accusing YouTube of being “a key platform for spreading fakes, discriminating against official Russian sources of information”.

In another of its press releases, dated April 7, the regulator says that since April 2020 it has identified around 60 “incidents” of YouTube ‘discriminating’ against content from Russian media, government, public and sports organizations and figures — including blocking the accounts or content of news agencies Russia Today, Russia 24, Sputnik, Zvezda, RBC and NTV, among others.

In recent weeks the regulator has also started administrative proceedings against Google for not removing certain content from YouTube which the Kremlin wishes to purge — actions that may result in fines of several million roubles.

Given this stream of angry words directed at YouTube it’s interesting the Kremlin hasn’t moved to shutter domestic access to the platform entirely — which suggests it continues to play a strategically important role for distributing Putin’s propaganda domestically, even with some ‘enhanced’ content restrictions in place.

In his thread, Navalny does not accuse Google of allowing YouTube to be a key conduit for Kremlin disinformation. But given he’s trying to encourage the tech giant to sign up to a cyber war against the Kremlin, that might be his idea of being tactful.

Instead, he’s scathing in his criticism of the role of Russian tech giant Yandex in helping spread Putin’s messages — dubbing the local search giant the “main propagandist” of the war, bigger even than the TV channels that carry Putin’s messaging 24/7.

“News from their main page (and there is a solid shameless lie) is the main source of information for 41% of the population,” he specifically claims.

Sources close to Yandex, meanwhile, claim its services are being increasingly used by Russians to circumvent state propaganda by helping Russians find international media coverage and access opposition media.

However the role that the company’s News product plays in amplifying Kremlin propaganda is impossible to deny — given the list of media sources news aggregators are authorized to link to is also controlled by the Russian state through a licensing regime overseen by Roskomnadzor.

Last month it emerged that Yandex is looking to step out of the increasingly risky media business by seeking a buyer for its News and Zen products. At the time, our sources suggested another local tech giant, VK, is a leading contender to acquire the products.

One key consideration is whether Yandex will actually remove the News product from its homepage post-sale — which, if it does, could potentially put a major dent in the Kremlin’s propaganda machine by removing a go-to news source for many Russians. But if Yandex continues to display the News feed on its homepage, i.e. by partnering with the new owner, there would be no change to Russians being shown a prominent feed of disinformation.

Any sale and operational changes by Yandex will need to be signed off by the Kremlin. The tech giant agreed to a restructuring back in 2019 which increased government influence over the company.

Despite that, the IT giant has sought to claim a neutral stance, saying — in the case of News amplifying state propaganda — it has no choice but to abide by local media licensing laws.

Last year, Google — and Apple — also blamed “local laws” for their decision to remove a tactical voting app created by Navalny’s organization from their mobile app stores in Russia.

The medium is the message more than ever these days, and brands are faced with a challenge — but also opportunity — to capture what consumers think about them and their products if they can harness and better understand those messages, via whichever medium is being used to deliver them. Today, a company called BlueOcean that has built an artificial intelligence-powered platform that it says can produce those insights is announcing $30 million in funding, money that it will be using to continue expanding its technology on the heels of rapid growth.

Insight Partners led the round, with FJ Labs also participating. Valuation is not being disclosed.

Digital life as it plays out these days has created a perfect storm (heh) for BlueOcean. We spend more time online than ever before, and the number of places where we might encounter a product or service has grown along with that: social media feeds are noisy with ads, content that feels like ads, lots of opinions; we do most of our news, information and entertainment sourcing online; we shop there, too; and many of us also spend our days working in cyberspace as well.

That’s a lot of real estate where a brand (or a brand’s competitors) might potentially appear, either intentionally or inadvertently, and more likely than not in a form that is outside of that brand’s control.

“Fragmentation is a huge driver,” Grant McDougall, the CEO who co-founded the company with president Liza Nebel, said in an interview. “There are silos all over the business and what we do sits over the top of that, to provide a common language to understand and talk to, for example, both to the CFO about revenue team as well as loyalty teams about messaging.”

At the same time, the tech industry that has built all of those online experiences has also built an enormous amount of tools to better parse what is going on in that universe. AI is playing a huge role in that navigation game: it’s too much for a single human, or even a large team of humans, to parse; and so a company like BlueOcean building tech to do some of that work for marketing professionals and others to have better data to work with becomes very valuable.

That has played out as a very significant evolution for the startup.

We last covered BlueOcean in 2020 when it was focused on a more narrow concept of digital brand identity: a company provided its website and a list of competitors, and one week later, for a price of $17,000, BlueOcean provided customers with brand audits that included lists of actionable items to improve or completely change. (As a point of contrast, typically brand audits for large brands can cost millions of dollars and typically do not come with specific pointers for improvement.)

Fast forward to today, and the company has expanded the scope of what it does for customers, and its overall engagement: its AI algorithms and big-data ingestion engine are now focused on providing continuous feedback to its customers, which subscribe to the service at fees starting at $100,000 per year. They use BlueOcean not just to measure their overall brand recognition in the market, but to track how specific products are performing; which launch strategies are working, and which are not; and the impact of different campaigns in different markets in real time so that they can change and respond more quickly.

“Lots has changed,” said McDougall. “We’re an AI powered brand intelligence platform. Access to insights and what competitors are doing are more relevant today than it’s ever been. What we do is collect information about brands out in public and help them understand performance relative to competitors, to help them take action to improve their brands to get market share.”

Interestingly, just as the Covid-19 pandemic has been a huge fillip to e-commerce and more generally online consumption of everything, so too has it played a strong role in the growth of BlueOcean and the approach that it takes. In the world of fast-paced and constantly changing and refreshed information, big-picture insights can be more meaningful than no picture at all, or one delayed for the sake of more detail.

“Covid has surfaced that speed is more important than accuracy,” noted Nebel. “We have data [to shape better] inclinations right now. It’s about making changes to capture opportunity.”

That concept has also clicked with its investors.

“Having invested in hundreds of the world’s most well-known brands, we know that having accurate and fast data is vital to brand health. We have extreme faith in BlueOcean and we’re excited to bring them into our investment portfolio,” said Fabrice Grinda, founding partner of FJ Labs, in a statement.

BlueOcean still also provides all-important competitive analysis but builds those lists of other companies and the data produced about them in conjunction with its customers, based in part on where the customer sees itself and would like to see itself; and also where it is as a brand in the real world.

It has also expanded its customer list: it now works with 84 brands, which may not sound like much except that these are some of the biggest companies in the world — they include Microsoft, Google, Amazon, Diageo, Cisco, Bloomingdales and Juniper Networks (and others that it cannot name) — and collectively represent what BlueOcean describes as $18 trillion in value and more than 6,000 brands — a list investors believe is poised to grow in line with how the internet itself is growing.

“After leading BlueOcean’s Series A round, we are proud to also lead their Series B to help them scale and serve even more brands,” said Whitney Bouck, MD at Insight Partners, in a statement. “As a former CMO myself, I know that marketing is constantly challenged to provide true ROI on brand marketing. BlueOcean gives marketing leaders quantifiable and actionable insights on brand performance for the first time, which we know is game-changing.”

An interesting new study of 1,759 iOS apps before and after Apple implemented a major privacy feature last year which required developers to ask permission to track app users — aka App Tracking Transparency (ATT) — has found the measure has made tracking more difficult by preventing the collection of the Identifier for Advertisers (IDFA), which can be used for cross-app user tracking.

However, the researchers found little change to tracking libraries baked into apps and also saw many apps still collecting tracking data despite the user having asked the apps not to be tracked.

Additionally, they found evidence of app makers engaging in privacy-hostile fingerprinting of users, through the use of server-side code, in a bid to circumvent Apple’s ATT — suggesting Cupertino’s move may be motivating a counter movement by developers to deploy other means to keep tracking iOS users.

“We even found a real-world example of Umeng, a subsidiary of the Chinese tech company Alibaba, using their server-side code to provide apps with a fingerprinting-derived cross-app identifier,” they write. “The use of fingerprinting is in violation of Apple’s policies, and raises questions around to what extent the company is able to enforce its policies. ATT might ultimately encourage a shift of tracking technologies behind the scenes, so that they are outside of Apple’s reach. In other words, Apple’s new rules might lead to even less transparency around tracking than we currently have, including for academic researchers.”

The research paper, which is entitled “Goodbye Tracking? Impact of iOS App Tracking Transparency and Privacy Labels”, is the work of four academics affiliated with the University of Oxford and a fifth independent U.S.-based researcher. It’s worth noting that it’s been published as a pre-print — meaning it has not yet been peer reviewed.

Another component of the study looked at the “privacy nutrition labels” Apple introduced to iOS at the end of 2020 — with the researchers concluding that these labels are often inaccurate.

Apple’s system, which aims to provide iOS users with an at-a-glance overview of how much data they’re giving up to use an app, requires app developers to self-declare how they process user data. And here the researchers found “notable discrepancies” between apps’ disclosed and actual data practices — which they suggest may be creating a false sense of security for consumers and misleading them over how much privacy they’re giving up to use an app.

“Our findings suggest that tracking companies, especially larger ones with access to large troves of first party [data], still track users behind the scenes,” they write in a section discussing how continued, consentless tracking may be reinforcing both the power of gatekeepers and the opacity of the mobile data ecosystem. “They can do this through a range of methods, including using IP addresses to link installation-specific IDs across apps and through the sign-in functionality provided by individual apps (e.g. Google or Facebook sign-in, or email address).

“Especially in combination with further user and device characteristics, which our data confirmed are still widely collected by tracking companies, it would be possible to analyse user behaviour across apps and websites (i.e. fingerprinting and cohort tracking). A direct result of the ATT could therefore be that existing power imbalances in the digital tracking ecosystem get reinforced.”
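To make the class of technique the researchers describe concrete, here is a minimal, hypothetical sketch (in TypeScript, for a Node.js backend) of how server-side code could derive a cross-app identifier from signals an analytics SDK ships off-device. The signal set and the hashing recipe are illustrative assumptions, not any specific vendor's implementation:

```typescript
import { createHash } from "crypto";

// Hypothetical device signals an analytics SDK might ship off-device.
// None of these pass through Apple's ATT permission prompt, and the
// IP address is visible server-side on every request anyway.
interface DeviceSignals {
  ipAddress: string; // e.g. "203.0.113.7"
  model: string;     // e.g. "iPhone14,2"
  osVersion: string; // e.g. "15.4.1"
  locale: string;    // e.g. "en_GB"
  timezone: string;  // e.g. "Europe/London"
  screen: string;    // e.g. "1170x2532@3x"
}

// Hash the signals into a stable identifier. Because the inputs are
// identical for the same device no matter which app embeds the SDK,
// the output works as a cross-app identifier: the pattern the paper
// calls a "fingerprinting-derived cross-app identifier".
function fingerprint(signals: DeviceSignals): string {
  const material = [
    signals.ipAddress,
    signals.model,
    signals.osVersion,
    signals.locale,
    signals.timezone,
    signals.screen,
  ].join("|");
  return createHash("sha256").update(material).digest("hex");
}

// Two different apps reporting the same device yield the same ID.
const id = fingerprint({
  ipAddress: "203.0.113.7",
  model: "iPhone14,2",
  osVersion: "15.4.1",
  locale: "en_GB",
  timezone: "Europe/London",
  screen: "1170x2532@3x",
});
console.log(id);
```

None of the inputs above are gated by the ATT permission prompt, which is why the researchers argue this kind of tracking sits outside Apple's technical reach and can only be policed at the policy level.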

The paper may add fuel to arguments that pitch competition law against privacy rights, as the paper’s authors suggest their findings back the view that Apple and other large companies have been able to increase their market power as a result of implementing measures like ATT which give users more agency over their privacy.

Apple was contacted for comment on the research paper but at the time of writing the company had not responded.

Competition authorities have already fielded a number of complaints over Apple’s ATT.

A separate plan by Google to deprecate support for tracking cookies in its Chrome browser — and switch to alternative ad targeting technologies (which the tech giant has also said it will bring to Android devices) — has similarly been the target of antitrust complaints in recent months.

As it stands, neither move by the pair of mobile gatekeepers, Apple’s ATT or Google’s self-styled “Privacy Sandbox”, has been outright blocked by competition regulators, although Google’s Sandbox plan remains under close monitoring in Europe following a U.K. antitrust intervention which led the company to offer a series of commitments over how it will develop the tech stack. The interventions have also very likely contributed to delaying Google’s original timeline.

The EU is also conducting a formal antitrust investigation of Google’s adtech, which includes probing the Sandbox plan — although, at the time it announced the investigation, it stressed that any decision would need to consider user privacy too, writing that it would “take into account the need to protect user privacy, in accordance with EU laws in this respect, such as the General Data Protection Regulation”, and emphasizing that: “Competition law and data protection laws must work hand in hand to ensure that display advertising markets operate on a level playing field in which all market participants protect user privacy in the same manner.”

Joint working by the U.K.’s competition (CMA) and privacy regulators (ICO) has also been the approach undertaken throughout the CMA’s Privacy Sandbox procedure. And in an opinion last year, the outgoing U.K. information commissioner told the adtech industry it needed to move away from tracking and profiling-based ad targeting — urging the development of alternative ad targeting technologies that don’t require processing people’s data.

In the discussion section of their research paper, the researchers go on to speculate that reduced access to permanent user identifiers as a result of Apple’s ATT could — over time — “substantially improve” app privacy, pointing to exactly these wider shifts underway to recast ad-targeting technologies (such as Google’s Sandbox) as having the potential to flip economic calculations away from privacy-hostile techniques like fingerprinting. As the researchers also note, though, claims that the new technologies are better for privacy need to be interrogated.

However, they predict that this migration away from tracking will further concentrate the market power of platform gatekeepers.

“While in the short run, some companies might try to replace the IDFA with statistical identifiers, the reduced access to non-probabilistic cross-app identifiers might make it very hard for data brokers and other smaller tracker companies to compete. Techniques like fingerprinting and cohort tracking may end up not being competitive enough compared to more privacy-preserving, on-device solutions,” they suggest. “We are already seeing a shift of the advertising industry towards the adoption of such solutions, driven by decisions of platform gatekeepers (e.g. Google’s FloC / Topics API and Android Privacy Sandbox, Apple’s ATT and Privacy Nutrition Labels), though more discussion is needed if these new technologies protect privacy meaningfully.

“The net result, however, of this shift towards more privacy preserving methods is likely going to be more concentration with the existing platform gatekeepers, as the early reports on the tripled market share of Apple, the planned overhaul of advertising technologies by Facebook/Meta and others, and the shifting spending patterns of advertisers suggest. At the end of the day, advertising to iOS users — being some of the wealthiest individuals — will be an opportunity that many advertisers cannot miss out on, and so they will rely on the advertising technologies of the larger tech companies to continue targeting the right audiences with their ads.”

The paper also calls out the failure of European regulators and policymakers to crack down on tracking by enforcing privacy laws such as the General Data Protection Regulation (GDPR), writing that: “[I]t is worrying that a few changes by a private company (Apple) seem to have changed data protection in apps more than many years of high-level discussion and efforts by regulators, policymakers and others. This highlights the relative power of these gatekeeper companies, and the failure of regulators thus far to enforce the GDPR adequately. An effective approach to increase compliance with data protection law and privacy protections in practice might be more targeted regulation of the gatekeepers of the app ecosystem; so far, there exists no targeted regulation in the US, UK and EU.”

Targeted regulation is coming down the pipe for internet gatekeepers, though. Albeit at a pace that’s orders of magnitude slower than the ads which get auctioned off and microtargeted at eyeballs every millisecond of every day.

The European Union reached political agreement on its flagship ex ante competition reform for gatekeepers, aka the Digital Markets Act, just last month — and lawmakers said then that they expect the regime to come into force in October. (Although it’s unlikely to really kick in until 2023 at the earliest and there’s already debate over whether the Commission has adequate resources to enforce against some of the world’s most valuable companies with their expanding armies of in-house lawyers.)

The U.K., meanwhile, has its own bespoke version of this sort of Big Tech competition reform. Its “pro-competition” regime was trailed back in 2020 but is still pending legislation to empower the Digital Markets Unit. And recent reports in the U.K. press have suggested the Digital Competition Bill won’t now be presented to parliament until next year — which would mean further delay.

Germany is ahead of the curve here, having passed a competition reform at the start of last year. Earlier this year it also identified Google as subject to this special abuse control regime, although the country’s FCO still needs to complete the work of investigating the various Google products that are causing it competition concerns. It’s possible we’ll see some gatekeeper-targeted enforcement by the FCO this year.

Google has just announced the next stage of trials of its Privacy Sandbox proposal — focused on ads relevance and measurement.

The Sandbox refers to an evolving — and now very closely overseen — ad-targeting tech stack which Google has proposed as a replacement for tracking-cookie-based targeted advertising in Chrome by the second half of 2023 at the earliest, with alternatives it argues will be better for users’ privacy yet still effective for generating ad revenue.

Writing in a blog post today, Vinay Goel, product director, Privacy Sandbox, Chrome, said: “Starting today, developers can begin testing globally the Topics, FLEDGE, and Attribution Reporting APIs in the Canary version of Chrome.

“We’ll progress to a limited number of Chrome Beta users as soon as possible. Once things are working smoothly in Beta, we’ll make API testing available in the stable version of Chrome to expand testing to more Chrome users.”

The Sandbox proposal is made up of multiple components, such as Topics — Google’s idea for interest-based ad targeting via browser-based tracking of users’ web activity (which replaced FLoC, the much criticized antecedent Google recently abandoned) — and FLEDGE, Google’s proposal for remarketing and custom audiences without individual-level user tracking.
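For a sense of what that looks like in practice, here’s a sketch of how a page participating in the origin trial might query the Topics API, based on Google’s public explainer (the exact call shape could still change as the trials progress):

```typescript
// Sketch of a Topics API call, per Google's public explainer; details
// may shift during the origin trials.
async function fetchAdTopics(): Promise<void> {
  // Feature-detect: the API only exists in trial-enabled Chrome builds.
  if (!("browsingTopics" in document)) {
    console.log("Topics API not available in this browser");
    return;
  }
  try {
    // Returns a few coarse-grained interest topics inferred on-device
    // from recent browsing (entries in a public taxonomy), rather than
    // an individual-level identifier.
    const topics = await (document as any).browsingTopics();
    console.log("Observed topics:", topics);
    // An ad-serving caller would typically forward these topic IDs in
    // its ad request in place of a tracking-cookie-derived profile.
  } catch (err) {
    console.error("Topics call failed (permissions policy?):", err);
  }
}
```

The key design shift is that the browser, not a third-party tracker, decides what an embedded caller gets to learn about the user.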

As well as being complex and acronym-ridden, Google’s Sandbox plan has attracted no shortage of controversy.

Most notably, antitrust regulators in Europe stepped in following complaints from publishers and advertisers who argue that Google’s plan to deprecate tracking cookies will simply entrench its market power.

But after obtaining a number of commitments from Google over how it would develop the Sandbox (including the appointment of a monitoring trustee), the UK’s CMA signed off on letting the project proceed last month — paving the way for continued development and another batch of Sandbox trials to go ahead now. (Although EU regulators are continuing to scrutinize the plan.)

Google said it will also now begin testing updated Privacy Sandbox settings and controls — which it says will allow users to “see and manage the interests associated with them, or turn off the trials entirely”.

Its blog post gives a sample graphic of some of the settings it will be trialing. These show a multi-layered menu structure with a master toggle to turn the trials off (or on) at the top level. Drilling down, there’s a menu for browser-based ad personalization where a user could remove interests assigned to them by Topics-based surveillance of their browsing activity and edit the list of sites from which the system infers interests, as well as two other menus (one related to ad measurement and another to spam and fraud reduction).

Image credits: Google

Notably, Chrome users in the European Union (and a few other regional markets) won’t be opted in to the latest Sandbox origin trials — and will instead only be able to participate if they actively choose to opt in by flipping the toggle to the on position, per Google. That’s likely owing to the legal protections for people’s privacy in the region, under laws such as the EU’s General Data Protection Regulation.

Last year’s trials of the now abandoned FLoC component of the Sandbox also only ran outside Europe.

“During the upcoming origin trial, Chrome plans to test multiple methods for notifying users about the trials depending on region,” a Google spokeswoman told us. “In the European Economic Area, Switzerland and the UK users will be asked to voluntarily participate in the trials by way of opt in.”

“All users will have robust controls, and can opt out of the trials at any point,” the spokeswoman added.

Whether Google’s approach with the Sandbox will truly be privacy-preserving is one rather salient, and as yet unanswered, question hanging over the proposal.

There is also the wider issue of whether targeting ads based on individuals’ inferred interests (derived by surveilling their browsing locally) will simply replicate the predatory and discriminatory targeting that’s all too possible with current individual tracking-based adtech. So how much of a privacy reform/reboot Google’s Sandbox will actually be remains to be seen.

Commenting generally on Google’s proposals, Dr. Lukasz Olejnik, an independent privacy researcher and consultant, said its approach to developing “privacy preserving” ad targeting draws on earlier years of research. But he agreed the task is challenging.

“Such research is kind of forgotten now but it was the exciting part of privacy research circles 10-7 years ago or so. It is no longer possible to have an academic publication in this space because it seems that academics consider the problem ‘solved’ for many years now. Well, sort of, however ridiculous this sounds.

“But Google is not interested in having a conference paper, they’re actually building the infrastructure, and this is the challenging part because the web is a complex ecosystem. It’s also quite leaky — so the important part is to propose suitable amendments or additions to the crucial parts of the web architecture to make it privacy proof,” he told TechCrunch.

Olejnik also said Google’s trial announcement underlines at least one notable learning since the tech giant kicked off the Sandbox migration effort a couple of years ago.

“The announcement is also evidence that Google learned the hard way of the importance of user control of tests of the kind, so it is now clear that the users will be in control from the very beginning,” he suggested.

In Google’s blog post, Goel ends by writing that the Sandbox proposals have “benefited substantially from the thoughtful feedback of early testers”, adding that: “We’re eager to open up testing for more of our proposals. We’ll continue to gather feedback from the ecosystem and to engage with regulators globally.”

Developers are pointed to Google’s guidance about the Sandbox APIs for information about how they can participate in the trials.


There were already several hints that YouTube was getting more serious about podcasts, after reports indicated the company hired a podcast executive, Kai Chuk, to lead its efforts in the space and had even begun offering cash to popular podcasters to film their shows. Now, a leaked document has unveiled more about YouTube’s plans in this area, pointing to a future podcasts homepage on YouTube.com and other monetization features.

The details were published by Podnews, which recently got its hands on an 84-page presentation where YouTube described its podcasts roadmap. Here, the company says it will improve podcast ingestion by piloting the ability to pull in podcast RSS feeds. It also noted it plans to centralize podcasts on a new homepage at YouTube.com/podcasts. The URL doesn’t work yet, but nor does it automatically redirect to the YouTube homepage, which is what happens if you put other random words after the slash.

Not surprisingly, Google sees podcasts as a way to expand its advertising business on YouTube. The document suggests YouTube will feature audio ads sold by Google as well as other partners. It mentions the support of “new metrics” designed for audio-first creators and the ability to integrate YouTube data into industry-standard podcast measurement platforms. One page shows brands like Nielsen, Chartable, and Podtrac listed as partners.

The addition of a new “podcasts” vertical to YouTube would be a logical next step for the company.

Over the years, YouTube has highlighted the service’s larger content categories by giving them their own homepages, as it did with YouTube Gaming back in 2015 and with YouTube Fashion (now Fashion & Beauty) in 2019. Plus, YouTube content helps to power Google’s music streaming service, YouTube Music, which competes with other services like Spotify, where podcasts are a competitive advantage.

Spotify has been looking to dominate the podcast advertising market and has made several acquisitions to bring related adtech in-house. As a result, Spotify has since been able to sell its own ads, introduce streaming ad insertion technology, launch its own audio ad marketplace, and is trying out new ad formats. Meanwhile, as a video-centric platform, YouTube has been left out of much of this ad market growth.

Podnews didn’t publish the full document and it’s not clear when the document was first produced or distributed, given references to launches that are listed as coming “in 2022” and the mention of Chartable, a company Spotify acquired last month. YouTube didn’t comment to Podnews, per its article. We’ll update if a comment is provided to us.

In Russia’s latest swipe at foreign social media giants since it started a land war in Europe by invading Ukraine late last month, the country’s internet censor has fired a warning shot at Google over what it describes as anti-Russian “information attacks” which it claims are being spread via YouTube — accusing the U.S. tech giant of being engaged in acts “of a terrorist nature” by allowing ads on the video-sharing platform to be used to threaten Russian citizens.

In a statement posted on its website today, Roskomnadzor claims YouTube has been serving targeted ads that call for people to disable railway links between Russia and Belarus.

“The actions of the YouTube administration are of a terrorist nature and threaten the life and health of Russian citizens,” the regulator wrote [translated from Russian with machine translation].

“The spread of such appeals clearly demonstrates the anti-Russian position of the American company Google LLC,” it added.

The regulator also warned Google to stop distributing “anti-Russian videos as soon as possible”.

Its statement goes on to accuse U.S. IT companies in general, and tech giants Google and Meta (Facebook’s owner) in particular, of choosing a “path of confrontation” with Russia by launching a targeted campaign of “information attacks” that it says are intended to “discredit the Russian Armed Forces, the media, public figures and the state as a whole”.

“Similar actions by Meta Platforms Inc. and Google LLC not only violate Russian law but also contradict generally accepted norms of morality,” Roskomnadzor added.

YouTube could not immediately be reached for comment on the warning from Roskomnadzor.

The direct warning to Google from the state internet censor could be a precursor to Russia blocking access to YouTube.

In recent days, Facebook and Instagram have both been blocked by Roskomnadzor — as the Kremlin has sought to tighten its grip on the digital information sphere in parallel with its war in Ukraine.

Facebook and Instagram were blocked after Meta said it was relaxing its hate speech policy to allow users in certain regions to post certain kinds of death threats aimed at Russia — which Meta global affairs president, Nick Clegg, defended as a temporary change he said was designed to protect “people’s rights to speech as an expression of self-defense”.

In recent weeks, Roskomnadzor has also put restrictions on Twitter.

But YouTube has escaped any major censorship since the Ukraine invasion, despite the company itself applying some limitations to its service in Russia — such as suspending payment services for users (it took that action as a result of Western sanctions against Russian banks).

In one signal that that could be about to change, a report in the Russian press today suggests a block is looming, citing sources close to Roskomnadzor: RIA Novosti’s sources told it a block on YouTube could come as soon as today and is “most likely” by the end of next week.

In what may be another small indicator of the cyber war that’s now fiercely raging between Russia and Ukraine, Roskomnadzor’s website was noticeably slow to load as we were filing this report today. It also appears to have introduced a CAPTCHA request — suggesting it may be trying to prevent and/or mitigate DDoS attacks.

Australia’s Competition and Consumer Commission (ACCC) has instigated proceedings against Facebook owner Meta for allowing the spread of scam ads on its platforms and — it alleges — not taking sufficient steps to tackle the issue.

The watchdog said today that it’s seeking “declarations, injunctions, penalties, costs and other orders” against the social media giant, accusing it of engaging in “false, misleading or deceptive conduct” by publishing scam advertisements featuring prominent Australian public figures — activity the ACCC asserts breaches local consumer laws.

Specifically, it alleges Meta’s conduct is in breach of the Australian Consumer Law (ACL) or the Australian Securities and Investments Commission Act (ASIC Act).

The regulator’s accusation extends to alleging that Meta “aided and abetted or was knowingly concerned in false or misleading conduct and representations by the advertisers” (i.e. who used its platform to net victims for their scams).

Meta disputes the accusations, saying it already uses technology to try to detect and block scams.

In a statement on the ACCC’s action attributed to a company spokesperson, the tech giant said:

“We don’t want ads seeking to scam people out of money or mislead people on Facebook — they violate our policies and are not good for our community. We use technology to detect and block scam ads and work to get ahead of scammers’ attempts to evade our detection systems. We’ve cooperated with the ACCC’s investigation into this matter to date. We will review the recent filing by the ACCC and intend to defend the proceedings. We are unable to further comment on the detail of the case as it is before the Federal Court.”

The ACCC says the scam ads it’s taking action over promoted cryptocurrency investment or money-making schemes via Meta’s platforms, and featured people likely to be well known to Australians — such as businessman Dick Smith, TV presenter David Koch and former NSW Premier Mike Baird — who could be seen in the ads apparently endorsing the schemes, yet, in actuality, these public figures had never approved or endorsed the messaging.

“The ads contained links which took Facebook users to a fake media article that included quotes attributed to the public figure featured in the ad endorsing a cryptocurrency or money-making scheme. Users were then invited to sign up and were subsequently contacted by scammers who used high pressure tactics, such as repeated phone calls, to convince users to deposit funds into the fake scheme,” it explains.

The ACCC also notes that celebrity endorsement cryptocurrency scam ads continued being displayed on Facebook in Australia after public figures elsewhere around the world had complained that their names and images had been used in similar ads without their consent.

A similar complaint was pressed against Facebook in the UK back in 2018 — when local consumer advice personality Martin Lewis sued the platform for defamation over a flood of scam ads bearing his image and name without his permission, which he said were being used to trick and defraud UK consumers.

Lewis ended that litigation against Facebook in 2019 after it agreed to make some changes to its platform locally — including adding a button to report scam ads. (A Facebook misleading and scam ads reporting form was subsequently also made available by the company in Australia, the Netherlands, and New Zealand.)

Despite ending his suit, Lewis did not end his campaigning against scam ads — most recently (successfully) pressing for draft UK Online Safety legislation, which was introduced to the country’s parliament yesterday, to be extended to bring scam ads into scope. That incoming regime will include fines of up to 10% of global annual turnover to encourage tech giants to comply.

Australia, meanwhile, legislated on Online Safety last year — with its own similarly titled Act coming into force this January. However, its online safety legislation is narrower, focused on other types of abusive content (such as CSAM, terrorism and cyberbullying).

So, to pursue online platforms on the scam ads issue, the country is relying on existing consumer protection and financial investment rules.

It remains to be seen whether these laws are specific enough to be successfully used to force a change in Meta’s conduct around ads. 

The adtech giant makes its money from profiling people to serve targeted advertising. Any limits on how its ad business can operate — such as requirements to manually review all ads before posting and/or limitations on its ability to target ads at eyeballs — would significantly ramp up its costs and threaten its ability to generate so much revenue.

So it’s notable that the ACCC does appear to be eyeing orders for such types of measures — suggesting, for example, that Meta’s targeting tools are exacerbating the scam ads issue by enabling scammers to target people who are “most likely to click on the link in an ad” — assuming, of course, that it prevails in its proceeding.

That looks like the most interesting element of the proceeding — if the ACCC ends up digging into how scammers are able to use Facebook’s ad tools to amplify the effectiveness of their scams.

In Europe, wider moves are already afoot to put legal limits on platforms’ ability to run tracking ads. While Meta has been warning its investors of “regulatory headwinds” impacting its ad business.

“The essence of our case is that Meta is responsible for these ads that it publishes on its platform,” ACCC chair Rod Sims wrote in a statement. “It is a key part of Meta’s business to enable advertisers to target users who are most likely to click on the link in an ad to visit the ad’s landing page, using Facebook algorithms. Those visits to landing pages from ads generate substantial revenue for Facebook.

“We allege that the technology of Meta enabled these ads to be targeted to users most likely to engage with the ads, that Meta assured its users it would detect and prevent spam and promote safety on Facebook but it failed to prevent the publication of other similar celebrity endorsement cryptocurrency scam ads on its pages or warn users.”

“Meta should have been doing more to detect and then remove false or misleading ads on Facebook, to prevent consumers from falling victim to ruthless scammers,” he added.

Sims also pointed out that in addition to “untold losses to consumers” — in one case the ACCC said a consumer lost $650,000 to a scam advertised as an investment opportunity on Facebook — scam ads damage the reputation of public figures falsely associated with them, reiterating that Meta has failed to take “sufficient steps” to stop fake ads featuring public figures, even after the public figures had reported to it that their name and image were being featured in celebrity endorsement cryptocurrency scam ads.

The idea that a technology platform which — over a full decade ago! — was able to deploy facial recognition on its platform for autotagging users in photo uploads would be unable to successfully apply the same sort of tech to automatically flag-for-review all ads bearing certain names and faces — after, or even before, a public figure reported a concern — looks highly questionable.

And while Meta claims that “cloaking” is one technique spammers use to try to work around its review processes — aka, presenting different content to Facebook users and Facebook crawlers or tools — that is also the exact kind of technology problem you’d imagine a tech giant would be able to deploy its vast engineering resources to crack.
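To illustrate the basic shape of the problem, here’s a deliberately naive cloaking check (a TypeScript sketch, not Meta’s actual tooling): fetch the same landing page as a crawler and as a regular browser, then compare what comes back. Real detection is far harder, involving rendered pages, rotated IP ranges and fuzzier comparisons.

```typescript
// Naive cloaking heuristic: request the page under two User-Agent
// strings and flag large differences. Illustrative sketch only.
async function looksCloaked(url: string): Promise<boolean> {
  const crawlerUA = "facebookexternalhit/1.1"; // Facebook's public crawler UA
  const browserUA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";

  const [asCrawler, asBrowser] = await Promise.all([
    fetch(url, { headers: { "User-Agent": crawlerUA } }).then((r) => r.text()),
    fetch(url, { headers: { "User-Agent": browserUA } }).then((r) => r.text()),
  ]);

  // Crude signal: pages that differ wildly in size by UA merit review.
  const ratio =
    Math.min(asCrawler.length, asBrowser.length) /
    Math.max(asCrawler.length, asBrowser.length);
  return ratio < 0.5;
}
```

The point isn’t that this heuristic is adequate (it isn’t); it’s that detecting divergent responses at scale is squarely the kind of infrastructure problem a company with Meta’s resources routinely solves.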

It’s certainly telling that, four or so years on from Lewis’ scam ads litigation, the exact same playbook can apparently still be successfully deployed by scammers through Facebook’s platform all around the world. If this is success, one has to wonder what Meta failing would look like.

How many scam ads Meta is ‘successfully’ removing is not at all clear.

In a section of its self-styled Community Standards Enforcement Report that’s labelled “spam” (NB: not scams; and where “spam” functions as a catch-all, self-defined term, meaning it does not exclusively refer to problematic stuff that appears in ads specifically, let alone scams in ads), Meta writes that “1.2 billion” is the figure for “content actioned on spam” in the three months of Q4.

This figure is all but meaningless since Meta gets to define what constitutes a single piece of “spam” for the purposes of its “transparency” reporting, as the company itself concedes in the report — hence the tortuous phrasing (“content actioned on spam”, not even pieces, or indeed ads, photos, posts etc). It also of course gets to define what spam is in this context — apparently bundling scam ads into that far fuzzier category too.

Furthermore, Meta doesn’t even write in the report that 1.2BN refers to 1.2BN pieces of spam. (In any case, as noted above, a ‘piece’ of spam in Meta’s universe might actually refer to several pieces of content, such as multiple photos and text posts, which it has decided to bundle up and count as one unit for public reporting purposes, as it also discloses in the report; which essentially means it can use a show of transparency to further obscure what’s actually happening on its platform.)

There’s more, too: the term “actioned” — yet another self-serving bit of Meta nomenclature — does not necessarily mean that the (in this case “spam”) content got removed, since “actioned” bundles a number of other potential responses, such as screening content with a warning or disabling accounts.

So — tl;dr — as ever with big adtech, it’s impossible to trust platforms’ self-reported actions around the content they’re busy amplifying and monetizing, absent explicit legislative requirements mandating exactly which data points they must disclose to regulators in order to ensure actual oversight and genuine accountability.

Not-for-profit search engine Ecosia has started funnelling a portion of the profits it generates from serving ads against users’ searches into startups in the renewable energy space.

This is in addition to the €350M WorldFund, which Ecosia incubated and launched last year to back climate-focused startups.

To be clear, Ecosia is also continuing to fund tree-planting with search ads profits (an activity it’s best known for) — but the Berlin-based search engine told us it’s now making an “ongoing commitment” to green energy investment as a result of the energy crunch triggered by Russia’s invasion of Ukraine. 

The initial focus for investment is on Germany, which is particularly reliant on buying gas from Russia — meaning its economy is heavily exposed to the crisis in Ukraine.

The war has already created fresh impetus for the world to accelerate the transition away from fossil fuels to renewables — layering an economic crisis on top of the climate crisis, which could lead to a surge in demand for renewables.

Although fossil fuel interests have been quick to spin up a counter argument to try to block any rush toward green energy — lobbying for Western nations to increase their exploitation of oil and gas and, y’know, torch life on Earth even faster. So there’s no shortage of reasons for investors to cut checks for renewables like there’s no tomorrow.

Ecosia says it’s put up an initial $30M to fund startups and community energy initiatives — focusing its early investment on the supplier network of Berlin-based startup Zolar, a platform which links customers wanting to install solar systems with local planning and installation businesses to support the rollout of green energy to households across Germany.

Ecosia said it’s already invested $23M into small solar systems through Zolar’s local solar distribution network, alongside other renewable energy projects across the country.

“At the moment, we’re supporting renewable energy projects across Germany. Further investment into renewable energy will be likely as Ecosia evaluates community energy projects and pitches from founders and these may take place in other countries,” a spokesperson told us.

They added that Ecosia’s goal for the green energy investments is to encourage more businesses to invest in renewables and speed up the transition to renewables at a time when it has never been more pressing to leave fossil fuels in the ground.

“If you’re a company wanting to scale your investments into renewable energy beyond climate-neutral and need advice, or a founder or community project leader with a green energy idea that can make a difference in terms of reducing European reliance on fossil fuels, get in touch with our energy team,” it said, noting that chief operating officer, Wolfgang Oels, is heading up the initiative.

Ecosia suggested it’s looking to further diversify where it invests search ads profits to include regenerative agriculture in the future — although, for now, its focus remains on green energy projects.

Asked how the investments will be split between tree planting and renewable energy, Ecosia said there won’t be a formal split because it’ll depend on the calibre of applicants for the energy money — meaning the monthly split of profits will be determined on a case-by-case basis.

The spokesperson further noted that Ecosia will publish the divide of profit spent in its monthly financial report — “as and when” investments are made (and as it has always done with tree planting).

Startups with a broader climate tech focus hoping to score backing are encouraged to pitch the broader WorldFund, where Ecosia’s founder, Christian Kroll, is a venture partner. So far, WorldFund has made investments into plant-based steak startup Juicy Marbles; tree-planting fintech TreeCard; and cocoa-free chocolate alternative Qoa, among others.

Adobe today announced a number of new features for Customer Journey Analytics, its tool for tracking customers across platforms that is part of the company’s Experience Cloud portfolio.

As the pandemic sped up the move to online shopping for a lot of brands and their customers, the need to manage and personalize the user experience across channels (think web, mobile, in-store, etc.) also increased. But it’s one thing to track all of this data and plot it on a dashboard — it’s another to make it actionable.

To help businesses better understand how even small changes can affect a customer’s overall journey across their various properties, Adobe today launched a new experimentation feature in Journey Analytics that allows them to test real-world scenarios and analyze their results. A company may want to see if a change in their mobile app reduces call center interactions, for example, or see if a change to their website leads to more downloads of their mobile app. It’s basically A/B testing with a focus on the overall customer journey, with the additional benefit that businesses can then use this data to precisely personalize the user experience for individual users or larger segments of users.
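Underneath, this is classic experimentation math. As a rough illustration of the kind of significance test such a tool runs (a hypothetical sketch; Adobe hasn’t published its model internals), here’s a two-proportion z-test comparing a control and a variant journey:

```typescript
// Minimal two-proportion z-test: does the variant convert at a
// different rate than the control? Illustrative sketch only.
function abTestZScore(
  convA: number, totalA: number, // control: conversions / exposures
  convB: number, totalB: number  // variant: conversions / exposures
): number {
  const pA = convA / totalA;
  const pB = convB / totalB;
  // Pooled conversion rate under the null hypothesis (no difference).
  const pPool = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// e.g. did a mobile app change shift an outcome metric?
// |z| > 1.96 roughly corresponds to significance at the 5% level.
console.log(abTestZScore(480, 10_000, 560, 10_000).toFixed(2));
```

With the sample numbers above, z ≈ 2.5, so the variant’s lift would clear the conventional 5% significance bar; the interesting part of Adobe’s pitch is running this kind of comparison across channels rather than on a single web page.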

All of this is powered by machine learning models that make it easier to find these kinds of correlations across vast data sets. Adobe says this algorithm takes into account “historical data, comparable campaigns, ongoing benchmarks, and more.”

“Often, we’re seeing now people starting in digital — where they may not have before — but they’re still engaging in multiple channels through multiple devices for a singular outcome,” Nate Smith, Director of Product Marketing for Adobe Analytics, told me. “Ultimately, this has driven up the priority of omni-channel analysis for a lot of brands with that focus on lifetime value and retention. What has been a problem is the way that that type of analysis is done.”

Smith argues that the traditional approach, with a data pipeline into a data warehouse or data lake, with a SQL layer and a visualization tool on top, is too cumbersome in an environment where stakeholders need quick answers to questions that are often seemingly simple but hard to put into code.

“With Customer Journey Analytics, our customer journey analytics platform, we have a lot of purpose-built components that are powered by our Experience Platform, which has been a complete native build over the last several years for all of our acquisitions in the space to then tie into natively,” Smith said. “You’ve seen a lot of other vendors make massive acquisitions to build out these marketing cloud portfolios and we’re the ones that have actually developed a platform to actually do that, because, at some point, you run out of duct tape and baling wire to make all this work.”

It’s this custom platform that then allows the team to feed data into the new experimentation feature. As Smith noted, this also means that developers can work with any of their existing tools to build these tests, too.

Adobe also built a new integration between Customer Journey Analytics’ ability to discover customer segments and its Customer Data Platform (CDP). “You can actually share any audience that you discovered in Customer Journey Analytics to the CDP and then take action on that in any system,” Smith explained. “For us, this is a really exciting moment […] to see not just insight discovery but insight activation, ultimately.”

Ireland’s evasive response to a major security complaint filed against Google’s adtech the year the European Union’s General Data Protection Regulation (GDPR) came into application is the target of a new lawsuit — which accuses the Data Protection Commission (DPC) of years of inaction over what the complainants assert is “the largest data breach ever”.

Today local press in Ireland reported that the Irish High Court has agreed to hear the suit.

The litigation has been prepared by the Irish Council for Civil Liberties (ICCL) whose senior fellow, Johnny Ryan, is named as the plaintiff.

At issue is the DPC’s response to a long-running complaint about Google’s role in the high velocity trading of web users’ personal data to determine which ads get served — and, more specifically, the lack of attention the data-trading systems of the tracking-based targeted advertising industry pay to security. (Security, of course, being a key principle of the EU’s flagship data protection regime.)

The ICCL’s suit thus accuses the DPC of a failure to act on what it couches as a “massive Google data breach”.

Ryan will be familiar to anyone who’s been following adtech’s mounting legal woes in Europe — as the driving force behind a series of complaints and lawsuits, since 2018, targeting the high velocity trading of people’s data for real-time ad auctions (aka real-time bidding, or RTB).

A former adtech insider turned whistleblower, Ryan has amped up pressure on the industry for reform through a series of strategic GDPR complaints. But, more recently, his complaints have increasingly targeted the DPC itself.

In September 2020, for example, he published a dossier of evidence highlighting how the online ad-targeting industry profiles internet users’ intimate characteristics without their knowledge or consent — calling out the DPC for ongoing inaction over the RTB security complaint.

He has also lodged a complaint with the European Commission that’s led to an ombudsperson stepping in to look into the EU’s own high level monitoring of the (decentralized) application of the GDPR, which relies upon agencies in each Member State to do the graft of investigating and enforcing violations of the law.

On the 2018 Google adtech complaint, the DPC has — so far — announced some procedural steps.

Following Ryan’s original September 2018 complaint, which named both Google and the online ad industry body the IAB Europe (as two key players in the RTB system), Ireland opened a formal inquiry into Google’s adtech in May 2019 — as the regulator is the lead EU watchdog for Google.

However, Ireland did not open an inquiry based on the substance of Ryan’s complaint; rather it opened what’s known as an own-volition inquiry — saying it would seek to “establish whether processing of personal data carried out at each stage of an advertising transaction is in compliance with the relevant provisions of the GDPR, including the lawful basis for processing, the principles of transparency and data minimisation, as well as Google’s retention practices”, as it put it at the time.

Notably, the DPC did not say its inquiry would interrogate Google’s role in RTB through a security lens. That’s despite the core of Ryan’s complaint being that a system which ‘functions’ by broadcasting what can be highly sensitive data about people (browsing habits, device IDs, location etc.) right across the internet, so that it can be passed to scores of intermediaries, with no way for the users being tracked and profiled to control who receives their information or what gets done with it once it’s been fired out there, is literally the opposite of secure.
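To give a concrete sense of what gets broadcast, here’s a trimmed, hypothetical bid request in the style of the ad industry’s public OpenRTB spec (field names follow the spec; all values are invented for illustration). Every exchange partner that receives a request like this sees the user’s device details, location signals and identifiers, whether or not it goes on to bid:

```typescript
// Hypothetical, heavily trimmed OpenRTB-style bid request; field names
// follow the public spec, values are invented for illustration.
const bidRequest = {
  id: "auction-123",
  imp: [{ id: "1", banner: { w: 300, h: 250 } }],
  site: { page: "https://example.com/health/depression-support" },
  device: {
    ua: "Mozilla/5.0 (iPhone; CPU iPhone OS 15_4 like Mac OS X)",
    ip: "203.0.113.7", // user's IP (sometimes truncated), still broadcast
    geo: { lat: 52.52, lon: 13.405 }, // can be GPS-derived on mobile
    ifa: "6D92078A-8246-4BA4-AE5B-76104861E7DC", // resettable mobile ad ID
  },
  user: { id: "exchange-user-4f2a" }, // the exchange's persistent user ID
};
```

The page URL alone can reveal sensitive interests; combined with a persistent ID it becomes a profile the sender cannot recall once sent — which is the nub of the security argument.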

So that’s what Ryan, via the ICCL, is now pressing for: The lawsuit aims to force Ireland to investigate the security of RTB; an issue the regulator has so far seemed keen to avoid.

While RTB has faced a number of other GDPR complaints, in relation to issues like the legal basis for processing people’s data in the first place, Ryan’s complaint intentionally zeroed in on security — as it seemed to offer the clearest route to demonstrating that something was very rotten in the state of adtech, as he explained to TechCrunch back in 2018.

“I’m trying to be as efficient as possible with every bit of litigation that we launch,” Ryan tells us now. “For 3.5 years I have asked the Irish Data Protection Commission to investigate and act on the biggest data breach ever recorded. And it has not done so and as a result of that every European has been exposed to this.”

“The DPC is really good at muddying things,” he adds. “This is a really nice, crisp, clear example of the DPC having Europe-wide responsibility for a really big issue that affects everybody — everyone — and it’s not some small thing. And they haven’t done anything. So there isn’t really any thing that I could do — we have to sue them.”

“If they don’t act on this, they may as well not exist,” he concludes.

Commenting on the suit in a statement, Liam Herrick, executive director of ICCL, added: “We are concerned that the rights of individuals across the EU are in jeopardy, because the DPC has failed to investigate Google’s RTB system over three and half years since first notified by Johnny Ryan in 2018. The issue at stake here affects the rights of every European and we are going to court to see that digital rights are protected. Repeated attempts to get the DPC to take up this rights violation have failed.”

Last month, a flagship ad industry framework that was also targeted in complaints attached to RTB — aka the IAB Europe’s Transparency and Consent Framework (TCF), which is routinely served to web users in the form of a ‘privacy choices’ pop-up asking them to consent to their data being used for ad-targeting in real-time ad auctions — was found by Belgium’s data protection authority to be in breach of the GDPR. (As was the IAB itself.)

The IAB has been given a few months to find a fix for a very long list of violations — and some privacy experts argue this is likely an impossible task, given the systemic violations the TCF plugs into (and for which RTB is the core aim).

The Belgian authority was acting on other RTB complaints, similar to Ryan’s, which had been filed locally. (The IAB is overseen by Belgium’s regulator, so Ireland would not be expected to lead on that branch of his complaint. Albeit, Ryan also accuses Ireland of failing to pass on his original complaint to Belgium, as the GDPR’s one-stop-shop mechanism would surely intend.)

The laundry list of failures identified by the Belgian DPA with regard to the IAB’s TCF very much features security — with breaches of the security of processing, the integrity of personal data, and data protection by design and default among those listed in its final decision earlier this year.

Yet, despite security being clearly identified as a problem with a flagship industry framework that plugs into RTB (and, more than that, is intended to feed the system as a key strategic piece of adtech apparatus), the DPC’s still ongoing investigation of Google’s adtech — using its own terms of reference — does not mention the ‘S’ word.

In a timeline chronicling what the ICCL’s press release couches as “3 ½ years of inaction”, the civil liberties organization writes that on January 12 of this year the regulator finally said it had written a “statement of issues” of what it will investigate, vis-a-vis the Google complaint, but that statement “excludes data security — the critical issue of the complaint”.

It’s not clear why the DPC has chosen to carve out security from its probe of Google’s adtech.

Its plentiful critics would surely have thoughts on that. (Albeit Ryan says he has “no idea about their motives” when asked for a view, but he does suggest that, on a spectrum of ‘conspiracy to cock-up’, its “persistent inertia” looks iffy — hence “that’s why we need an independent review”.)

Reached for comment on the ICCL’s lawsuit, deputy commissioner Graham Doyle declined wider remarks — saying only that there’s “not much to say at this stage beyond the fact that our investigation is progressing”.

Ireland’s data protection regulator continues to attract trenchant criticism over its circuitous (some might say labyrinthine) approach to GDPR enforcement — especially in regards to cross-border complaints against major tech giants like Google and Facebook.

Civil society, consumer protection and digital and privacy rights groups, as well as individual experts, have for years blasted the regulator for dragging its feet on — or simply avoiding — properly investigating a string of major complaints and concerns, from systemic privacy and consent abuses to location tracking violations or indeed RTB’s massive security question: basically the sorts of systemic issues which, if confirmed by investigation, imply equally massive consumer harms that scale right across the bloc.

That also means these are the sorts of complaints that, were they to actually be enforced, could force wholesale reform of certain types of privacy-hostile data-mining business models.

It’s notable that the handful of final decisions the DPC has issued against tech giants to date, since the GDPR began being applied in May 2018, have had to go through an objection resolution process baked into the regulation — after other EU data protection agencies rejected Ireland’s preference for lesser penalties (see its 2020 security breach decision against Twitter; and its 2021 transparency decision against WhatsApp).

A draft DPC decision against Facebook which was made public by the complainant (against the DPC’s wishes) last fall also looks laughably lenient. (That complainant also filed a criminal complaint against the regulator in November — accusing the DPC of using “procedural blackmail” to try to gag it.)

It’s not clear how quickly the ICCL lawsuit against the DPC might progress and potentially accelerate Ireland’s GDPR enforcement of adtech. That may depend upon which of Ireland’s courts chooses to hear it.

The regulator has faced a number of other legal challenges to its processes in recent years — including a couple in relation to a very long running complaint against Facebook’s EU-US data transfers, one component of which it settled in January 2021 by agreeing to swiftly resolve the complaint. (Albeit, a final decision on that issue is still pending.)

The UK’s Information Commissioner’s Office, meanwhile, has also faced criticism over adtech inaction and litigation over RTB complaints (starting in late 2020) — after it closed a similar complaint without taking any enforcement action against the adtech industry (despite publicly acknowledging systemic lawlessness).

However in that case the legal action only went to a tribunal which ultimately decided it lacked the jurisdiction to assess the validity of the outcome the ICO had claimed (but which the plaintiffs had sought to challenge).

A suit against the DPC that’s heard in court should not face such powers-based uncertainties — so if the ICCL and Ryan prevail in their arguments the Irish regulator could face an order to investigate the security of Google’s adtech that it can’t simply ignore; and, essentially, be forced to enforce a security-minded clean up of adtech. Which is quite a thought.

Google was contacted for comment on the ICCL lawsuit.