
TikTok — the hugely popular mobile video app with more than 1 billion users — has been taking its first steps to break onto a new screen, the TV, launching a new app called TikTok TV first with Amazon Fire TV, and then with Google TV and other Android TV OS devices, LG Smart TVs and Samsung Smart TVs. Today comes news of another front in that strategy: TikTok has inked a partnership with Atmosphere, the startup that provides licensed, curated streamed video content for commercial venues like Westin, Taco Bell and Texas Roadhouse, as well as doctors’ offices, gyms and other venues where people spend dwell time.

Initially, the partnership will see Atmosphere develop a new channel on its platform dedicated to curated TikTok videos. It will be the first time that TikTok content is being used for an out-of-home video service.

“TikTok has become a destination for more than a billion people to be entertained, get inspired, and find community,” said Dan Page, head of global business development, new screens at TikTok, in a statement. “By partnering with Atmosphere, we’re excited to make it easy for people to experience TikTok together by bringing the joy and creativity of our platform to new screens, venues, and audiences.”

Atmosphere has been using the CES tech event to announce a series of milestones. Earlier this week, it revealed that it had raised $100 million on the back of very strong growth in the last year: it doubled the number of venues using its ad-supported streaming services to 19,000, covering some 20 million monthly unique users.

Although Atmosphere already repurposes content from platforms like YouTube in channels that it builds, this will be the company’s first channel dedicated to a single social media brand. Leo Resig, the co-founder and CEO of Atmosphere, said his company and TikTok had been working on the deal, and on how the channel would look, for eight months.

“They are the largest internet social media platform right now, and so they are very particular about how their brand and content are distributed,” he said. “But they see the power of our platform.”

To be clear, TikTok has confirmed to TechCrunch that the Atmosphere partnership is not another outlet for, or a repurposing of, TikTok TV, as TikTok’s consumer-focused TV app is called. Instead, it’s an example of how, as TikTok continues to mature, it’s diversifying how it reaches new audiences and building different revenue streams to complement advertising and other revenue models in its main app.

In this case, a team of people from Atmosphere will have access to a library of TikTok content, from which they will select videos they believe might work well on an Atmosphere channel. The team then connects with the individual creator to get the okay to use the video, and to work out how to credit said creator. Atmosphere then strips out all the audio, overlays it with its own optional audio (or none at all — many of the venues that are Atmosphere customers use the service on mute by default), adds its own captions, and collates all that into its own video stream. The commercial arrangement in this deal is between TikTok and Atmosphere, which will run ads in the channel. That is to say, creators themselves — at least for now — do not get paid.

The financial details are not being disclosed, but generally it sounds like content providers are paid and Atmosphere makes its revenues from the advertising it runs alongside the content. As for a basic guideline on payouts to providers, Resig said that currently the company pays “low millions of dollars per year” to content creators, but, he added, “We get most of our content for free,” instead giving creators attribution to help them grow their audiences and brands. TikTok is not an investor in Atmosphere, the startup pointed out (it said that avoiding confusion is why it chose to separate its funding news from the TikTok news).

From what we understand, Atmosphere will be able to access a selection of TikTok videos — tens of thousands of videos — rather than the much bigger catalogue.

For a first foray into partnering with a social media company to bring content from that platform to Atmosphere, TikTok is something of a bullseye for the startup. TikTok was already looking for more opportunities to expand to a wider variety of screens (and specifically TV screens), and this gives it an opportunity to repurpose and give new life to the long tail of its back catalogue of videos that might no longer get picked up for viewing via TikTok’s algorithms on its main app. On top of that, TikTok already had some parallels with Atmosphere in terms of how the two are being used, starting with audio consumption.

Atmosphere’s Resig told TechCrunch that more than 99% of its customers were already streaming its services with the sound off, which led the company to build out more content with the audio removed altogether. TikTok, as it happens, also has an audience that consumes videos on its platform with the sound off — so much so that TikTok has built its own captioning technology to let creators on the platform either add their own words as graphics or use TikTok’s AI to do this for them. Resig said that in the initial integration, the videos Atmosphere curates from TikTok will not necessarily be these captioned ones; Atmosphere will overlay its own captioning on them for TikTok’s Atmosphere channel.

TikTok scored a notable point during last year’s Olympics, where it carved out a place for itself not as a destination for official event streams, or even official clips (these could be found there from official accounts, but they were elsewhere, too), but as a place for user-generated content from people on the ground — athletes, audience members and others involved in the Games — that either complemented the official coverage or, sometimes (such as with the news of Simone Biles facing challenges in her events), got ahead of it altogether.

Atmosphere has also carved out a place for itself in providing sports content that is complementary to what a bar or other venue might already be showing. In fact, so far it has eschewed trying to provide its customers with a direct replacement for premium sports channels, which comprise some of the most popular content venues broadcast — bringing in customers and often being the reason a venue pays a premium for a commercial pay-TV subscription in the first place. These days, an Atmosphere customer — such as a bar — with TVs installed broadcasting sports will continue doing so even as it brings in Atmosphere on other screens.

“Right now, we coexist with sports,” Resig said, noting that on average, Atmosphere will be on 25% of a venue’s screens, with the other TVs broadcasting more traditional content. “We don’t want to be on every screen.”

That’s slowly changing. As Atmosphere scales, it is starting to have more conversations with sports media organizations — the companies that broadcast and manage the rights for teams, leagues and so on — and is now thinking more about what shape Atmosphere premium sports content might take. Interestingly, it seems like this is precisely where TikTok is with sports, too.

This is just a first, early move in the partnership between the two. If it clicks, down the line there may also be more questions about how it will evolve as a business relationship. Right now, Resig confirmed that the commercial relationship here is with TikTok, not with the creators on the platform, who are not being paid individually. Their compensation, such as it is, currently comes in the form of promotion and links to find them elsewhere.

One option down the line might be if creators one day start to build content specifically for TV rather than mobile formats. This would make some sense, considering how many of the videos now appear in awkward formats that need large areas of blurring on either side of the action to fill out the space on the horizontal TV screen.

“Will creators create content specifically for TikTok’s TV activities in the future? Absolutely,” predicted Resig. “Once it becomes ubiquitous those who create content will see value in the native app, and separately the TV format.” He said this is not something being discussed right now, but with any platform, “you know you’ve made it when people are creating content specific for that platform.”


The rise and impact of influencers has been one of the biggest forces in how the modern online social landscape has evolved in recent times. Now, a company that’s tapping into that influencer juggernaut, and specifically how it is playing out in the world of marketing, is announcing a huge round of funding to keep riding that wave.

Mavrck, which has built a platform for brands and media companies to source and engage with influencers for marketing campaigns, has raised $120 million in growth equity from a single backer, Summit Partners. Mavrck will use the capital to continue investing in its platform, and for business development. It is not disclosing valuation.

“We are going to use this investment to double down on our industry leading platform and double our team,” Lyle Stevens, the co-founder and CEO, said in an interview over email. “In doing so, we will become the most intelligent platform in the market with the petabytes of historical data we have… This data will power recommendation engines in the Mavrck platform that help connect enterprise marketers with the right creators, the right way, at the right cost.”

The company today already has some strong traction and momentum.

Stevens said that the company’s “Influencer Index” — as its directory is called — currently lists “millions of contactable macro- and micro-influencers around the globe and across all major social platforms.” Some 500 brands and 5,000 marketers are already using Mavrck to connect with those influencers. And in all, since being founded in 2014, Mavrck has connected brands with more than 3 million influencers and creators, reaching more than 240 million consumers, primarily through “native” sponsored content across various media, including videos and photos, blogs and podcasts.

Stevens said that the core of Mavrck’s technology is based around patented algorithms and first-party opt-in data, which brands use to find and connect with influencers that speak to the audiences the marketers are hoping to reach. It has built some 25 different search filters — covering areas like audience demographics, historical performance and fraud risk — to whittle the wider directory down to those that match what a brand needs. It then provides a platform for the two sides to engage with each other, work through projects and eventually handle payment. “We also have the technology for our customers to invite or import their existing influencer and consumer relationships to develop their own ambassador network,” he added.
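To make that filtering idea concrete, here is a minimal, purely illustrative sketch in Python of how a marketer might stack a few of those filter types — audience demographics, historical performance and fraud risk — to narrow a directory down to a shortlist. The field names, thresholds and example data are hypothetical, not Mavrck’s actual schema or code.

```python
from dataclasses import dataclass

@dataclass
class Influencer:
    handle: str
    platform: str
    audience_age_range: tuple    # dominant audience age bracket, e.g. (18, 24)
    audience_country: str
    avg_engagement_rate: float   # historical performance, as a fraction of followers
    fraud_risk_score: float      # 0.0 (clean) to 1.0 (high risk)

def matches_brief(inf, platform, country, age_range, min_engagement, max_fraud_risk):
    """Stack a handful of the kinds of filters a marketer might apply."""
    return (
        inf.platform == platform
        and inf.audience_country == country
        and inf.audience_age_range == age_range
        and inf.avg_engagement_rate >= min_engagement
        and inf.fraud_risk_score <= max_fraud_risk
    )

directory = [
    Influencer("@bake_with_ana", "tiktok", (18, 24), "US", 0.07, 0.05),
    Influencer("@gadget_greg", "instagram", (25, 34), "UK", 0.03, 0.40),
]

shortlist = [
    inf for inf in directory
    if matches_brief(inf, platform="tiktok", country="US",
                     age_range=(18, 24), min_engagement=0.05, max_fraud_risk=0.10)
]
print([inf.handle for inf in shortlist])  # ['@bake_with_ana']
```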

Influencer marketing spend has become a very big business in the wider area of marketing, rising 55% in the last year, with two-thirds of brands now deploying an influencer marketing strategy in some way. It’s now estimated to be a $100 billion industry.

Alongside this, Stevens notes that the creator economy is “expanding rapidly, as more and more people have turned to content creation as a secondary or primary source of income.” He cites figures from eMarketer that estimated that in 2020, 50 million people identified themselves as “creators”. And if you have a child, you might nod a little wearily in recognition at the results of a Harris Poll last year that found some 30% of children said their ambition was to become a “YouTuber” (only 11% said “astronaut”).

“We want to support this new creative class by connecting them to the world’s enterprise consumer brands, allowing them to turn their passion into a possibility to earn a living,” said Stevens.

The role of a company like Mavrck is not just to connect marketers with influencers, but also to take the pulse of where influencer content is making the greatest impact. Although it’s a very fragmented space — blogs, for example, can still be powerful, especially with certain kinds of products and consumers — when it comes to single platforms with the biggest scale, social apps are still at the top of the heap.

Stevens notes that Instagram remains the most popular platform today for influencer content, helped by the social app’s shift in 2016 to displaying content algorithmically instead of chronologically.  But while Instagram still dominates, that is starting to shift. “The Tiktok ‘For You’ Page algorithm appears to be taking market share from Instagram, as we have seen a 400% increase year over year in the number of creators being activated on TikTok by our brand customers,” he said. “If that trend continues, we anticipate TikTok to dominate influencer marketing over the next five years.”

Whichever platform ends up on top, Stevens believes that influencers and influencer marketing are not a flash in the pan, but are here to stay.

“If you think about the last purchase you made, odds are you were influenced or persuaded by what other people say and display online, versus what a brand said to you directly,” he said. “Those other people can be friends, like-minded strangers or creators you choose to follow online. This concept of social proof has become an essential brand building tool for consumer enterprises. In the wake of COVID and the corresponding changes in consumer buying behavior, consumer engagement with creators has increased 70%. As a result, brands are investing more in social proof to not only thrive, but stay relevant and survive.”

That’s enough to influence investors, too.

“At Summit, we have invested across the commerce segment – in both brands and the technologies that support their growth – and we believe strongly in the impact and potential of authentic brand relationships as a means to build value,” said Michael Medici, MD at Summit Partners, in a statement.

“Brands are increasingly seeking to harness the power of the very long tail of content creators to help drive brand awareness and influence purchase activity,” added Sophia Popova, principal at Summit Partners. “Mavrck’s platform is purpose-built to support brands in these efforts. We are delighted to partner with Lyle and the Mavrck team for this next phase of growth.” Both of them have joined the board with this round.

The European Commission has given its clearest signal yet that it’s prepared to intervene over weak enforcement of the EU’s data protection rules against big tech.

Today the bloc’s executive also had a warning for adtech giants Google and Facebook — accusing them of choosing “legal tricks” over true compliance with the EU’s standard of “privacy by design” — and emphasizing the imperative for them to take data protection “seriously”.

Speaking at a privacy conference this morning, Vera Jourová, the EU’s commissioner for values and transparency, said enforcement of the General Data Protection Regulation (GDPR) at a national level must buck up — and become “effective” — or else it “will have to change”, specifying that any “potential changes” will move toward centralized enforcement.

“When I was looking at existing enforcement decisions and pending cases, I also came to another conclusion,” she also said.  “So, we have penalties or decisions against Google, Facebook, WhatsApp.

“To me this means that clearly there is a problem with compliance culture among those companies that live off our personal data. Despite the fact that they have the best legal teams, presence in Brussels and spent countless hours discussing with us the GDPR. Sadly, I fear this is not privacy by design.

“I think it is high time for those companies to take protection of personal data seriously. I want to see full compliance, not legal tricks. It’s time not to hide behind small print, but tackle the challenges head on.”

In parallel, an influential advisor to the bloc’s top court has today published an opinion which states that EU law does not preclude consumer protection agencies from bringing representative actions at a national level — following a referral by a German court in a case against Facebook Ireland — which, if the CJEU’s judges agree, could open up a fresh wave of challenges to tech giants’ misuse of people’s data without the need to funnel complaints through the single point of failure of gatekeeper regulators like Ireland’s Data Protection Commission (DPC).

Towards centralized privacy oversight?

On paper, EU law provides people in the region with a suite of rights and protections attached to their data. And while the regulation has attracted huge international attention, as other regions grapple with how to protect people in an age of data-mining giants, the problem for many GDPR critics, as it stands, is that the law decentralizes oversight of these rules and rights to a patchwork of supervisory agencies at the EU Member State level.

While this can work well for cases involving locally bounded services, major problems arise where complaints span borders within the EU — as is always the case with tech giants’ (global) services. This is because a one-stop-shop (OSS) mechanism kicks in, ostensibly to reduce the administrative burden for businesses.

But it also enables a huge get-out clause for tech giants, allowing them to forum shop for a ‘friendly’ regulator through their choice of where to locate their regional HQ. And working from a local EU base, corporate giants can use investment and job creation in that Member State as a lever to work against and erode national political will to press for vigorous oversight of their European business at the local authority level.

“In my view, it does take too long to address the key questions around processing of personal data for big tech,” said Jourová giving a keynote speech to the Forum Europe data protection & privacy conference. “Yes, I understand the lack of resources. I understand there is no pan-European procedural law to help the cross-border cases. I understand that the first cases need to be rock-solid because they will be challenged in court.

“But I want to be honest — we are in the crunch time now. Either we will all collectively show that GDPR enforcement is effective or it will have to change. And there is no way back to decentralised model that was there before the GDPR. Any potential changes will go towards more centralisation, bigger role of the EDPB [European Data Protection Board] or Commission.”

Jourová added that the “pressure” to make enforcement effective “is already here” — pointing to debate around incoming legislation that will update the EU’s rules around ecommerce, and emphasizing that, on the Digital Services Act, Member States have been advocating for enforcement change — and “want to see more central role of the European Commission”.

Point being that if there’s political will for structural changes to centralize EU enforcement among Member States, the Commission has the powers to propose the necessary amendments — and will hardly turn its nose up at being asked to take on more responsibility itself.

Jourová’s remarks are a notable step up from her approach to the thorny issue of GDPR enforcement back in summer 2020 — when, at the two-year review mark of the regulation entering into application, she was still talking about the need to properly resource DPAs — in order that they could “step up their work” and deliver “vigorous but uniform enforcement”, as she put it then.

Now, in the dying days of 2021 — with a still massive backlog of decisions yet to be issued around cross-border cases, some of which are highly strategic, targeting adtech platforms’ core surveillance business model (Jourová’s speech, for example, noted that 809 procedures related to the OSS have been triggered but only 290 Final Decisions have been issued) — the Commission appears to be signalling that it’s finally running out of patience on enforcement.

And that it is already eyeing a Plan B to make the GDPR truly effective.

Criticism of weak enforcement against tech giants has been a rising chorus in Europe for years. Most recently frustration with regulatory inaction led privacy campaigner Max Schrems’ not-for-profit, noyb, to file a complaint of criminal corruption against the GDPR’s most infamous bottleneck: Ireland’s DPC, accusing the regulator of engaging in “procedural blackmail” which it suggested would help Facebook by keeping key developments out of the public eye, among other eye-raising charges.

The Irish regulator has faced the strongest criticism of all the EU DPAs over its role in hampering effective GDPR enforcement.

Although it’s not the only authority to be accused of creating a bottleneck by letting major complaints pile up on its desk and taking a painstaking ice-age to investigate complaints and issue decisions (assuming it opens an investigation at all).

The UK’s ICO — when the country was still in the EU — did nothing about complaints against real-time bidding’s abuse of people’s data, for example, despite sounding a public warning over behavioral ads’ unlawfulness as early as 2019. Belgium’s DPA, meanwhile, has been taking an inordinate amount of time to issue a final decision on IAB Europe’s TCF and its failure to comply with the GDPR. But Ireland’s central role in regulating most of big tech means it attracts the most flak.

The sheer number of tech giants that have converged on Ireland — wooed by low corporate tax rates (likely with the added cherry of business-friendly data oversight) — gives it an outsized role in overseeing what’s done with Europeans’ data.

Hence Ireland has open investigations into Apple, Google, Facebook and many others — yet has only issued two final decisions on cross-border cases so far (Twitter last year; and WhatsApp this year).

Both of those decisions went through a dispute mechanism that’s also baked into the GDPR — which kicks in when other EU DPAs don’t agree with a draft decision by the lead authority.

That mechanism further slowed down the DPC’s enforcement in those cases — but substantially cranked up the intervention the two companies ultimately faced. Ireland had wanted to be a lot more lenient vs the collective verdict once all of the bloc’s oversight bodies had had their say.

That too, critics say, demonstrates the DPC’s regulatory capture by platform power.

An opinion piece in yesterday’s Washington Post skewered the DPC as “the wrong privacy watchdog for Europe” — citing a study by the Irish Council for Civil Liberties that found it had only published decisions on about 2% of the 164 cross border cases it has taken on.

The number of complaints the DPC has chosen to entirely ignore — i.e. by not opening a formal investigation — or else to quietly shutter (“resolve”) without issuing a decision or taking any enforcement action is likely considerably higher. 

The agency is shielded by a very narrow application of Freedom of Information law, which applies only in relation to DPC records pertaining to the “general administration” of its office. So when TechCrunch asked the DPC, last December, how many times it had used GDPR powers such as the ability to order a ban on processing it declined to respond to our FOIs — arguing the information did not fall under Ireland’s implementation of the law.

Silence and stonewalling only go so far, though.

Calls for root and branch reform of the DPC specifically, and enforcement of the GDPR more generally, can now be heard from Ireland’s own parliament all the way up to the European Commission. And big tech’s game of tying EU regulators in knots looks as if it’s — gradually, gradually — getting toward the end of its rope.

What comes next is an interesting question. Last month the European Data Protection Supervisor (EDPS) announced a conference on the future of “effective” digital enforcement — which will take place in June 2022 — and which he said would discuss best practice and also “explore alternative models of enforcement for the digital future”.

“We are ambitious,” said Wojciech Wiewiorowski as he announced the conference. “There is much scope for discussion and much potential improvement on the way current governance models are implemented in practice. We envisage a dialogue across different fields of regulation — from data protection to competition, digital markets and services, and artificial intelligence as well — both in the EU, and Europe as a continent, but also on the global level.”

Discussion of “different” and “alternative” models of enforcement will be a focus of the event, per Wiewiorowski — who further specified that this will include discussion of “a more centralized approach”. So the EDPS and the Commission appear to be singing a similar tune on reforming GDPR enforcement.

As well as the Commission itself (potentially) taking on an enforcement role in the future — perhaps specifically on major, cross border cases related to big tech, in order to beef up GDPR’s application against the most powerful offenders (as is already proposed in the case of the DSA and enforcing those rules against ‘very large online platforms’; aka VLOPs) — the GDPR steering and advisory body, the EDPB, also looks set to play an increasingly strategic and important role.

Indeed, it already has a ‘last resort’ decision making power to resolve disputes over cross border GDPR enforcement — and Ireland’s intransigence has led to it exercising this power for the first time.

In the future, the Board’s role could expand further if EU lawmakers decide that more centralization is the only way to deliver effective enforcement against tech giants that have become experts in exhausting regulators with bad faith arguments and whack-a-mole procedures, in order to delay, defer and deny compliance with European law.

The EDPB’s chair, Andrea Jelinek, was also speaking at the Forum Europe conference today. Asked for her thoughts on how GDPR enforcement could improve, including problematic elements like the OSS, she cautioned that change will be a “long term project”, while simultaneously agreeing there are notable “challenges” at the point where national oversight intersects with the needs of cross border enforcement.

“Enforcing at a national level and at the same time resolving cross border cases is time and resource intensive,” she said. “Supervisory authorities need to carry out investigations, observe procedural rules, coordinate and share information with other supervisory authorities. For the current system to work properly it is of vital importance that supervisory authorities have enough resources and staff.

“The differences in national administrative procedures and the fact that in some Member States no deadlines are foreseen for handling a case also creates an obstacle to the efficient functioning of the OSS.”

Jelinek made a point of emphasizing that the EDPB has been taking action to try to remedy some of the issues identified — implementing what she described as “a series of practical solutions” to tackle problems around enforcement.

She said this has included developing (last year) a co-ordinated enforcement framework to facilitate joint actions (“in a flexible and coordinated manner”) — such as launching enforcement sweeps and joint investigations.

The EDPB is also establishing a pilot project to provide a pool of experts to support investigations and enforcement activities “of significant common interest”, she noted, predicting: “This will enhance the cooperation and solidarity between all the supervisory authorities by addressing their operational needs.”

“Finally we should not forget that the GDPR is a long term project and so is strengthening cooperation between supervisory authorities,” she added. “Any transformation of the GDPR will take years. I think the best solution is therefore to deploy the GDPR fully — it is likely that most of the issues identified by Member States and stakeholders will benefit from more experience in the application of the regulation in the coming years.”

However it is already well over three years since GDPR came into application. So many EU citizens may query the logic of waiting years more for regulators to figure out how to jointly work together to get the job of upholding people’s rights done. Not least because this enforcement impasse leaves data-mining tech giants free to direct their vast data-enabled wealth and engineering resource at developing new ‘innovations’ — to better evade legal restrictions on what they can do with people’s data.

One thing is clear: The next wave of big tech regulatory evasion will come dressed up in claims of privacy “innovation” from the get-go.

Indeed, that is already how adtech giants like Google are trying to re-channel regulators’ attention from enforcing against their core attention-manipulation, surveillance-based business model.

Google SVP Kent Walker also took to the (virtual) conference stage this morning for a keynote slot in which he argued that the novel ad-targeting technologies Google is developing under its “Privacy Sandbox” badge (such as FLoC, aka Federated Learning of Cohorts) will provide the answer to what big (ad)tech likes to claim is an inherent tension between European fundamental rights like privacy and economic growth.

The truth, as ever, is a lot more nuanced than that. For one thing, there are plenty of ways to target ads that don’t require processing people’s data. But as most of Europe’s regulators remain bogged down in a mire of corporate capture, under-resourcing, cultural cowardice/risk aversion, internecine squabbles and, at times, a sheer lack of national political will to enforce the law against the world’s wealthiest companies, the adtech duopoly is sounding cockily confident that it will be allowed to carry on and reset the terms of the game in its own interests once again.

(The added irony here is that Google is currently working under the oversight of the UK’s Competition and Markets Authority and ICO on shaping behavioral remedies attached to its Sandbox proposals — and has said that these commitments will be applied globally if the UK is minded to accept them; which does risk tarnishing the GDPR’s geopolitical shine, given the UK is no longer a member of the EU… )

For EU citizens, it could well mean that — once again — it’s up to the CJEU to come to the rescue of their fundamental rights — assuming the court ends up concurring with advocate general Richard de la Tour’s opinion today that the GDPR:

“… does not preclude national legislation which allows consumer protection associations to bring legal proceedings against the person alleged to be responsible for an infringement of the protection of personal data, on the basis of the prohibition of unfair commercial practices, the infringement of a law relating to consumer protection or the prohibition of the use of invalid general terms and conditions, provided that the objective of the representative action in question is to ensure observance of the rights which the persons affected by the contested processing derive directly from that regulation.”

Consumer protection agencies being able to pursue representative legal actions to defend fundamental rights against tech giants’ self interest — at the Member State level, and therefore, all across the EU — could actually unblock GDPR enforcement via a genuinely decentralized wave of enforcement that’s able to route around the damage of captured gatekeepers and call out big adtech’s manipulative tricks in court.

In a significant push against big tech’s ability to maintain market dominance through sheer buying power, the UK’s competition watchdog has ordered Facebook (now Meta) to reverse its acquisition of animated GIF platform, Giphy — confirming the Financial Times‘ earlier reporting.

The Competition and Markets Authority (CMA) said its phase 2 investigation cemented earlier competition concerns about the impact of Meta owning and operating Giphy.

In a statement, Stuart McIntosh, chair of the independent inquiry group heading the CMA probe, said: “The tie-up between Facebook and Giphy has already removed a potential challenger in the display advertising market. Without action, it will also allow Facebook to increase its significant market power in social media even further, through controlling competitors’ access to Giphy GIFs.”

“By requiring Facebook to sell Giphy, we are protecting millions of social media users and promoting competition and innovation in digital advertising,” he added.


The watchdog’s intervention follows an extended investigation of the acquisition that Facebook announced (and completed) in May 2020, with the CMA taking an initial look in summer 2020 — and dialling up its scrutiny over the following months.

It also, in June 2020, ordered a halt to further integration of Giphy by Facebook while the oversight continued.

In another first last month, the regulator fined Facebook almost $70 million for deliberately withholding information related to ongoing oversight of the acquisition — billing the infringement a “major” breach.

The CMA’s preliminary report on the acquisition, this August, concluded that Facebook’s takeover of Giphy raised a number of competition concerns — including that it would harm competition between social media platforms, given the lack of choice in the supply of animated GIFs.

The regulator’s concern was not only that Facebook might simply deny rivals access to Giphy content for their users to reshare but that the data-mining giant might change the terms of access — and could, for example, require rivals like TikTok, Twitter and Snapchat to provide it with more user data in order to access Giphy GIFs.

The CMA appears to have held to its concern on the risk of competitive harm through data extraction from other services, as well as from other more obvious risks — such as Facebook shutting off rivals’ access to the platform — hence rejecting all the tech giant’s proposed alternative ‘remedies’ to selling the unit as insufficient.

“After consulting with interested businesses and organisations — and assessing alternative solutions (known as ‘remedies’) put forward by Facebook — the CMA has concluded that its competition concerns can only be addressed by Facebook selling Giphy in its entirety to an approved buyer,” the CMA writes in a press release.

In the summer the watchdog had also said it was concerned about the impact on digital ‘display’ advertising — as Giphy had, pre-merger, been offering paid advertising services in the US (and considering expanding to other countries including the UK) with the potential to compete with Facebook’s ad services. An ambition that terminated with Facebook’s takeover.

“The CMA found that Giphy’s advertising services had the potential to compete with Facebook’s own display advertising services. They would have also encouraged greater innovation from others in the market, including social media sites and advertisers. Facebook terminated Giphy’s advertising services at the time of the merger, removing an important source of potential competition. The CMA considers this particularly concerning given that Facebook controls nearly half of the £7 billion display advertising market in the UK,” the regulator writes now.

A summary of the CMA’s final report can be found here.

The regulator’s merger assessment hinges on whether — on a “balance of probabilities” standard — there will be a “substantial lessening of competition” (SLC) should the takeover go ahead. And in the case of Facebook-Giphy, that is what it has concluded — finding very limited choice of alternatives to Giphy for (other) social media platforms, and that Facebook of course has significant market power in display advertising in the UK, among other contributing factors.

The CMA was particularly interested in Giphy’s potential to develop its paid advertising model, including a potential UK launch, noting feedback from advertisers had been positive about the GIF-based ad format and also that “due to its GIF format, the Paid Alignment model of advertising is subtle and intrinsic to the message, rather than interrupting it” — something its report notes was “reflected in Giphy’s internal documents” and “Facebook’s internal documents also discuss the importance of monetising messaging”.

It also concluded that Facebook would have an incentive to foreclose rivals’ access to Giphy — which led to another conclusion that the merger “will result in an SLC in social media as a result of vertical effects, in the form of input foreclosure”.

Meta/Facebook has been contacted for its response to the CMA’s order to undo the Giphy acquisition.

The company responded aggressively to the CMA’s provisional findings this summer — denouncing the analysis and questioning the UK regulator’s jurisdiction over its business.

However concern over so-called ‘killer acquisitions’ — aka the ability of tech giants to flex their financial muscle to protect market power by buying budding competition to defuse the risk posed by startups and new services (sometimes literally by closing them down post-purchase) — has been a major topic among industry watchers for years.

The critique centers on how competition regulators have failed to evolve theories of harm to keep pace with digital market dynamics. Failing, for example, to consider how data itself can be used as a tool against competition. Dominant platforms can also easily leverage their market power in one channel to rapidly scale into a new segment, via tactics like self-preferencing. And services that are ‘free’ at the point of use may still entail significant harms for consumers — such as abuse of their privacy.

In recent years, legislators and regulators have started to respond to such concerns — including by updating rules, such as in Germany which passed an update to its regime to cover digital platforms at the start of this year. (The country now has a number of open procedures against tech giants (including Facebook) to confirm its ability to impose preemptive measures.)

In the US, the Biden administration’s elevation of Lina Khan to chair the FTC, earlier this year, marks a key moment of change on US soil — signalling lawmakers’ support for a reformist approach toward regulating tech.

It follows Khan’s landmark paper (on Amazon) which examined how the government’s outdated ways of identifying monopolies have failed to keep up with modern business realities. What was initially dismissed by some — as ‘hipster antitrust’ — is now setting the establishment regulatory agenda. Although Khan still faces huge opposition on home soil from the tech lobby working through channels like the US Chamber of Commerce.

Over in the EU, the European Commission has also been working to address the lag between tech and antitrust.

Since December it’s had a draft proposal on the table for a set of ex ante rules to apply to intermediating platform giants (aka, those classified as ‘gatekeepers’ under the Digital Markets Act). Although whether the DMA goes far enough to actually help reboot competition remains to be seen.

The UK, now outside the bloc, has its own update to domestic competition law incoming, also aimed at tackling platform power — with a new regime of bespoke rules for platforms deemed to have ‘strategic market status’.

All this comes too late to undo plenty of baked-in tech consolidation, however. But not too late to undo Facebook-Giphy.

Outdated approaches to the regulation of digital markets have allowed thousands of tech acquisitions to be waved through over the past decades — including Facebook’s purchases of photo-sharing site Instagram, messaging platform WhatsApp and VR headset maker Oculus, to name three strategic takeovers that span the core social networking arena Facebook/Meta owns and wants to keep owning for decades to come (in an even more immersive/invasive form; aka “the metaverse”).

Earlier this year, the Commission failed to block Google’s acquisition of health wearable maker Fitbit — despite a huge outcry from civil society warning against letting the adtech giant gobble up such sensitive data, for example.

More recently the CMA also cleared Facebook’s acquisition of CRM maker Kustomer — again using a fairly narrow assessment of potential competition risks — and entirely ignoring privacy advocates who were raising concerns over what the adtech giant would do with Kustomer users’ data.

The CMA’s decision now to order Facebook to reverse its acquisition of Giphy is a significant development — albeit, it’s still just one decision that hasn’t gone big tech’s way.

Discussing the move in response to questions from TechCrunch, professor Tommaso Valletti, a former chief competition economist within the Commission — who worked under current EVP Margrethe Vestager — described the CMA’s move as a “highly symbolic decision”. But he cautioned against reading too much into one ‘no’.

“I’ve been repeating the figures “1000 and 0”: mergers done by GAFAM and mergers blocked in past 20 years. So having finally a 1 does not change the overall picture but it’s a signal,” he told us.

Earlier this year the Commission made it possible for Member States to refer cases for merger review when they may fall between the cracks of national antitrust policy, with the risk of an innovative tech or business being acquired (on the cheap) by a more established rival in order to kill budding competition.

Valletti also pointed out that Vestager has finally signalled an intention to discuss big tech acquisitions with US lawmakers — which he dubbed “another good sign”, saying the EU “was (and still is) lagging on this”.

Major reworking of how antitrust gets applied in the US will clearly be essential to rein in what remain (mostly) US tech giants — however innovative the actions of individual regulators (such as the CMA) elsewhere.

“As for ‘new’ theories of harm, I think it’s just that the CMA has good economists that are aware of what economics has been saying and finding in the past 10 years: Data are part of the business model, so they must be part of the competitive assessment too,” Valletti added of its decision on Facebook-Giphy. “It’s not ‘just’ a privacy issue dealt with by someone else.

“Good economics, openness of mind, and a higher risk appetite by their leadership, means the CMA is trying to move the bar in a typically extremely conservative field with shy regulators. Let’s be hopeful!”

As noted above, the UK is working on a reform of competition law that’s specifically targeted at platform giants — those with so-called ‘strategic market status’ — which will be regulated under an ex ante regime of bespoke rules in the future. Although the necessary legislation to empower the dedicated Digital Markets Unit that’s been set up to focus on this area is still pending.

Still, the CMA hasn’t been sitting on its hands in the meantime, with a number of open investigations into various aspects of big tech’s business and ongoing scrutiny of acquisitions.

The UK’s regulatory regime has a free hand to go its own way on big tech decisions — given the country is no longer a member of the EU. Although UK regulators have said they continue to consult with international counterparts on issues of common concern.

While the bloc is seeking to harmonize digital regulations under the DMA and Digital Services Act, there has been some concern that EU lawmakers’ push to reduce ‘fragmentation’ may end up benefiting tech giants — i.e. if it removes the ability of individual Member States to pass more ambitious legislation.

UK regulators could, therefore, end up addressing shortfalls in the bloc’s one-size-fits-all plan for a list of ‘dos and don’ts’ for platform giants — by applying a more tightly tailored regime to tech giants. Having creative thinking at the CMA therefore looks vital.

As part of an ongoing antitrust investigation into Google’s Privacy Sandbox by the UK’s competition regulator, the adtech giant has agreed to an expanded set of commitments related to oversight of its planned migration away from tracking cookies, the regulator announced today.

Google has also put out its own blog post on the revisions — which it says are intended to “underline our commitment to ensuring that the changes we make in Chrome will apply in the same way to Google’s ad tech products as to any third party, and that the Privacy Sandbox APIs will be designed, developed and implemented with regulatory oversight and input from the CMA [Competition and Markets Authority] and the ICO [Information Commissioner’s Office]”.

Google announced its intention to deprecate support for the third party tracking cookies that are used for targeting ads at individuals in its Chrome browser all the way back in 2019 — and has been working on a stack of what it claims are less intrusive alternative ad-targeting technologies (aka, the “Privacy Sandbox”) since then.

The basic idea is to shift away from ads being targeted at individuals (which is horrible for Internet users’ privacy) to targeting methods that put Internet users in interest-based buckets and serve ads to so-called “cohorts” of users (the approach behind FLoC), which may be less individually intrusive — however it’s important to note that Google’s proposed alternative still has plenty of critics (the EFF, for example, has suggested it could even amplify problems like discrimination and predatory ad targeting).
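For a sense of how cohort-based bucketing differs from targeting individuals, here is a toy Python sketch of the general approach: reduce a user’s browsing history, locally, to a coarse fingerprint (FLoC used a SimHash over browsing history) whose leading bits act as a cohort ID shared with many other users. This is a simplified illustration of the concept, not Google’s actual algorithm or API; the function names, parameters and example domains are hypothetical.

```python
import hashlib

def domain_features(domains, dims=64):
    """Map each visited domain to a pseudo-random vector of +1/-1 entries."""
    vectors = []
    for d in domains:
        digest = hashlib.sha256(d.encode()).digest()
        bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(dims)]
        vectors.append([1 if b else -1 for b in bits])
    return vectors

def simhash_cohort(domains, cohort_bits=16, dims=64):
    """Collapse a browsing history into a coarse cohort ID (toy SimHash).

    Users with similar histories tend to land in the same cohort, so ads can
    be targeted at the group rather than at the individual.
    """
    totals = [0] * dims
    for vec in domain_features(domains, dims):
        totals = [t + v for t, v in zip(totals, vec)]
    fingerprint = 0
    for i, t in enumerate(totals[:cohort_bits]):  # keep only the leading bits
        if t > 0:
            fingerprint |= 1 << i
    return fingerprint

# Users with overlapping interests will often share a cohort; dissimilar ones won't.
print(simhash_cohort(["cooking.example", "recipes.example", "food.example"]))
print(simhash_cohort(["cooking.example", "recipes.example", "baking.example"]))
print(simhash_cohort(["cars.example", "racing.example", "engines.example"]))
```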

And many privacy advocates would argue that pure-play contextual targeting poses the least risk to Internet users’ rights while still offering advertisers the ability to reach relevant audiences and publishers to monetize their content.

Google’s Sandbox plan has attracted the loudest blow-back from advertisers and publishers, who will be directly affected by the changes. Some of whom have raised concerns that the shift away from tracking cookies will simply increase Google’s market power — hence the Competition and Markets Authority (CMA) opening an antitrust investigation into the plan in January.

As part of that probe, the CMA had already secured one set of commitments from Google around how it would go about the switch, including that it would agree to halt any move to deprecate cookies if the regulator was not satisfied the transition could take place in a way that respects both competition and privacy; and agreements on self-preferencing, among others.

A market consultation on the early set of commitments drew responses from more than 40 third parties — including, TechCrunch understands, input from international regulators (some of whom are also investigating Google’s Sandbox, such as the European Commission, which opened its own probe of Google’s adtech in June).

Following that, the first set of proposed commitments has been expanded and beefed up with additional requirements (see below for a summary; and here for fuller detail from the CMA’s “Notice of intent to accept the modified commitments”).

The CMA will now consult on the expanded set — with a deadline of 5pm on December 17, 2021, to take fresh feedback.

It will then make a call on whether the beefed up bundle bakes in enough checks-and-balances to ensure that Google carries out the move away from tracking cookies with the least impact on competition and the least harm to user privacy (although it will be the UK’s ICO that’s ultimately responsible for oversight of the latter piece).

If the CMA is happy with responses to the revised commitments, it would then close the investigation and move to a new phase of active oversight, as set out in the detail of what it’s proposing to agree with Google.

A potential timeline for this to happen is early 2022 — but nothing is confirmed as yet.

Commenting in a statement, CMA CEO Andrea Coscelli said:

“We have always been clear that Google’s efforts to protect user’s privacy cannot come at the cost of reduced competition.

That’s why we have worked with the Information Commissioner’s Office, the CMA’s international counterparts and parties across this sector throughout this process to secure an outcome that works for everyone.

We welcome Google’s co-operation and are grateful to all the interested parties who engaged with us during the consultation.

If accepted, the commitments we have obtained from Google become legally binding, promoting competition in digital markets, helping to protect the ability of online publishers to raise money through advertising and safeguarding users’ privacy.”

More market reassurance

In general, the expanded commitments look intended to offer a greater level of reassurance to the market that Google will not be able to exploit loopholes in regulatory oversight of the Sandbox to undo the intended effect of addressing competition risks and privacy concerns.

Notably, Google has agreed to appoint a CMA approved monitoring trustee — as one of the additional measures it’s suggesting to improve the provisions around reporting and compliance.

It will also dial up reporting requirements, agreeing to ensure that the CMA’s role and the regulator’s ongoing process — which the CMA now suggests should continue for a period of six years — are mentioned in its “key public announcements”; and to regular (quarterly) reporting to the CMA on how it is taking account of third party views as it continues building out the tech bundle.

Transparency around testing is also being beefed up.

On that, there have been instances in recent months where Google staffers have not been exactly forthcoming in articulating to the market the details of feedback related to the Origin Trial of its FLoC technology, for example. So it’s notable that another highlighted change requires Google to instruct its staff not to make claims to customers which contradict the commitments.

Another concern reflected in the revisions is market participants’ worry that Google could remove functionality or information before the full Privacy Sandbox changes are implemented — hence it has offered to delay enforcement of its Privacy Budget proposal and made commitments around the introduction of measures to reduce access to IP addresses.

We understand that concerns from market participants also covered Google removing other functionality — such as the user agent string — and that strengthened commitments are intended to address those wider worries too.

Self-preferencing requirements have also been dialled up. And the revised commitments include clarifications on the internal limits on the data that Google can use — and monitoring those elements will be a key focus for the trustee.

The period of active oversight by the CMA has also been extended vs the earlier plan — to six years from the date of any decision to accept Google’s modified commitments (up from around five).

This means that if the CMA agrees to the commitments next year they could be in place until 2028. And by then the UK expects to have reformed competition rules wrapping tech giants — via the bespoke regime planned for platforms with ‘strategic market status’.

In its own blog post, Google condenses the revised commitments thus:

  1. Monitoring and reporting. We have offered to appoint an independent Monitoring Trustee who will have the access and technical expertise needed to ensure compliance.
  2. Testing and consultation. We have offered the CMA more extensive testing commitments, along with a more transparent process to take market feedback on the Privacy Sandbox proposals.
  3. Further clarity on our use of data. We are underscoring our commitment not to use Google first-party personal data to track users for targeting and measurement of ads shown on non-Google websites. Our commitments would also restrict the use of Chrome browsing history and Analytics data to do this on Google or non-Google websites.

As with the earlier set of pledges, it has agreed to apply the additional commitments globally — assuming the package gets accepted by the UK regulator.

So the UK regulator continues playing a key role in shaping how key web infrastructure evolves.

Google’s blog post also makes reference to an opinion published yesterday by the UK’s information commissioner — which warned the adtech industry of the need to move away from its current tracking and profiling methods of ad targeting.

“We also support the objectives set out yesterday in the ICO’s Opinion on Data protection and privacy expectations for online advertising proposals, including the importance of supporting and developing privacy-safe advertising tools that protect people’s privacy and prevent covert tracking,” Google noted.

This summer Google announced a delay to its earlier timeline for the deprecation of tracking cookies — saying support wouldn’t start being phased out in Chrome until the second half of 2023.

There is no suggestion from the tech giant at this point of any additional delay to that timeline — assuming it gets the regulatory greenlight to go ahead.

It’s been well over two years since the UK’s data protection watchdog warned the behavioural advertising industry it’s wildly out of control.

The ICO hasn’t done anything to stop the systematic unlawfulness of the tracking and targeting industry abusing Internet users’ personal data to try to manipulate their attention — not in terms of actually enforcing the law against offenders and stopping what digital rights campaigners have described as the biggest data breach in history.

Indeed, it’s being sued over inaction against real-time-bidding’s misuse of personal data by complainants who filed a petition on the issue all the way back in September 2018.

But today the UK’s (outgoing) information commissioner, Elizabeth Denham, published an opinion — in which she warns the industry that its old unlawful tricks simply won’t do in the future.

New methods of advertising must be compliant with a set of what she describes as “clear data protection standards” in order to safeguard people’s privacy online, she writes.

Among the data protection and privacy “expectations” Denham suggests she wants to see from the next wave of online ad technologies are:

• engineer data protection requirements by default into the design of the initiative;

• offer users the choice of receiving adverts without tracking, profiling or targeting based on personal data;

• be transparent about how and why personal data is processed across the ecosystem and who is responsible for that processing;

• articulate the specific purposes for processing personal data and demonstrate how this is fair, lawful and transparent;

• address existing privacy risks and mitigate any new privacy risks that their proposal introduces.

Denham says the goal of the opinion is to provide “further regulatory clarity” as new ad technologies are developed, further specifying that she welcomes efforts that propose to:

• move away from the current methods of online tracking and profiling practices;

• improve transparency for individuals and organisations;

• reduce existing frictions in the online experience;

• provide individuals with meaningful control and choice over the processing of device information and personal data;

• ensure valid consent is obtained where required;

• ensure there is demonstrable accountability across the supply chain.

The timing of the opinion is interesting — given an impending decision by Belgium’s data protection agency on a flagship ad industry consent gathering tool. (And current UK data protection rules share the same foundation as the rest of the EU, as the country transposed the General Data Protection Regulation into national law prior to Brexit.)

Earlier this month the IAB Europe warned that it expects to be found in breach of the EU’s General Data Protection Regulation, and that its so-called ‘transparency and consent’ framework (TCF) hasn’t managed to achieve either of the things claimed on the tin.

But this is also just the latest ‘reform’ missive from the ICO to rule-breaking adtech.

And Denham is merely restating requirements that are derived from standards that already exist in UK law — and wouldn’t need reiterating had her office actually enforced the law against adtech breache(r)s. But this is the regulatory dance she has preferred.

This latest ICO salvo looks more like an attempt by the outgoing commissioner to claim credit for wider industry shifts as she prepares to leave office — such as Google’s slow-mo shift toward phasing out support for third party cookies (aka its ‘Privacy Sandbox’ proposal, which is actually a response to evolving web standards, such as competing browsers baking in privacy protections; rising consumer concern about online tracking and data breaches; and a big rise in attention on digital matters from lawmakers) — than it is about actually moving the needle on unlawful tracking.

If Denham wanted to do that she could have taken actual enforcement action long ago.

Instead the ICO has opted for — at best — a partial commentary on embedded adtech’s systematic compliance problem, essentially standing by as the breach continues while waiting and hoping for future compliance.

 

Change may be coming regardless of regulatory inaction, however.

And, notably, Google’s ‘Privacy Sandbox’ proposal (which claims ‘privacy safe’ ad targeting of cohorts of users, rather than microtargeting of individual web users) gets a significant call-out in the ICO’s remarks — with Denham’s office writing in a press release: “Currently, one of the most significant proposals in the online advertising space is the Google Privacy Sandbox, which aims to replace the use of third party cookies with alternative technologies that still enable targeted digital advertising.”

“The ICO has been working with the Competition and Markets Authority (CMA) to review how Google’s plans will safeguard people’s personal data while, at the same time, supporting the CMA’s mission of ensuring competition in digital markets,” the ICO goes on, giving a nod to ongoing regulatory oversight led by the UK’s competition watchdog. The CMA has the power to prevent Google’s Privacy Sandbox ever being implemented — and therefore to stop Google phasing out support for tracking cookies in Chrome — if it decides the tech giant can’t do it in a way that meets competition and privacy criteria.

So this reference is also a nod to a dilution of the ICO’s own regulatory influence in a core adtech-related arena — one that’s of market-reforming scale and import.

The backstory here is that the UK government has been working on a competition reform that will bring in bespoke rules for platform giants considered to have ‘strategic market status’ (and therefore the power to damage digital competition), with a dedicated Digital Markets Unit already established and up and running within the CMA to lead the work — though the unit still awaits empowerment by incoming UK legislation.

So the question of what happens to ‘old school’ regulatory silos (and narrowly-focused regulatory specialisms) is a key one for our data-driven digital era.

Increased cooperation between regulators like the ICO and the CMA may evolve into oversight that’s more converged, or even merged — to ensure powerful digital technologies don’t fall between regulatory cracks, and that the ball isn’t so spectacularly dropped on vital issues like ad tracking in the future.

Intersectional digital oversight FTW?

As for the ICO itself, there is a further sizeable caveat in that Denham is not only on the way out (ergo her “opinion” naturally has a short shelf life) but the UK government is busy consulting on ‘reforms’ to the UK’s data protection rules.

Said reforms could see a major downgrading of domestic privacy and data protections; and even legitimize abusive ad tracking — if ministers, who seem more interested in vacuous soundbites (about removing barriers to “innovation”), end up ditching legal requirements to ask Internet users for consent to do stuff like track and profile them in the first place, per some of the proposals.

So the UK’s next information commissioner, John Edwards, may have a very different set of ‘data rules’ to apply.

And — if that’s the case — Denham will, in her roundabout way, have helped make sliding standards happen.

 

It’s been almost a year since the EU’s executive announced it would propose rules for political ads transparency in response to concern about online microtargeting and big data techniques making mincemeat of democratic integrity and accountability.

Today it’s come out with its proposal. But frankly it doesn’t look like the wait was worth it.

The Commission’s PR claims the proposal will introduce “strict conditions for targeting and amplifying” political advertising using digital tools — including what it describes as a ban on targeting and amplification that use or infer “sensitive personal data, such as ethnic origin, religious beliefs or sexual orientation”.

However the claimed ‘ban’ does not apply if “explicit consent” is obtained from the person whose sensitive data is to be exploited to better target them with propaganda — and online ‘consents’ to ad targeting are already a total trashfire of non-compliance in the region.

So it’s not clear why the Commission believes politically vested interests hell-bent on influencing elections are going to play by a privacy rule-book that almost no online advertisers operating in the region currently do, even the ones that are only trying to get people to buy useless plastic trinkets or ‘detox’ teas.

In a Q&A offering further detail on the proposal, the Commission lists a set of requirements that it says anyone making use of political targeting and amplification will need to comply with, which includes having an internal policy on the use of such techniques; maintaining records of the targeting and use of personal data; and recording the source of said personal data — so at best it seems to be hoping to burden propagandists with the need to create and maintain a plausible paper trail.

Because it is also making a further carve-out for political targeting — writing: “Targeting could also be allowed in the context of legitimate activities of foundations, associations or not-for-profit bodies with a political, philosophical, religious or trade union aim, when it targets their own members.”

This is incredibly vague. A “foundation” or an “association” with a political “aim” sounds like something any campaign group or vested interest could set up — i.e. to carry on the “legitimate” activity of (behaviorally?) targeting propaganda at voters.

In short, the scope for loopholes for political microtargeting — including via the dissemination of disinformation — looks massive.

On scope, the Commission says it wants the incoming rules to apply to “ads by, for or on behalf of a political actor” as well as “so called” issue-based ads — aka politically charged issues that can be a potent proxy to sway voters — which it notes are “liable to influence the outcome of an election or referendum, a legislative or regulatory process or voting behaviour”.

But how exactly the regulation will define ads that fall in and out of scope remains to be seen.

Perhaps the most substantial measure of a very thin proposal is around transparency — where the Commission has proposed “transparency labels” for paid political ads.

It says these must be “clearly labelled” and provide “a set of key information” — including the name of the sponsor, “prominently displayed”, and “an easily retrievable transparency notice” setting out the amount spent on the political advertisement; the sources of the funds used; and a link between the advertisement and the relevant elections or referenda.
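To make those disclosure requirements concrete, here is a minimal, purely illustrative sketch of the information a transparency notice would need to carry. The field names are our own shorthand for the items listed above — they are not terms defined in the Commission’s draft:

```python
# Illustrative only: a record holding the disclosures the Commission says must
# accompany a paid political ad. Field names are hypothetical shorthand for the
# items listed above, not anything defined in the draft regulation.
from dataclasses import dataclass

@dataclass
class PoliticalAdTransparencyNotice:
    sponsor_name: str           # sponsor, to be "prominently displayed"
    amount_spent_eur: float     # amount spent on the advertisement
    funding_sources: list[str]  # sources of the funds used
    linked_vote: str            # the election or referendum the ad relates to
    notice_url: str             # where the "easily retrievable" notice lives

notice = PoliticalAdTransparencyNotice(
    sponsor_name="Example Campaign Group",
    amount_spent_eur=25_000.0,
    funding_sources=["membership fees"],
    linked_vote="European Parliament elections",
    notice_url="https://example.org/transparency/ad-123",
)
```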

However, again, the Commission appears to be hoping that a few transparency requirements will enforce a sea change on an infamously opaque and fraud-filled industry — one that has been fuelled by rampant misuse and unlawful exploitation of people’s data — rather than cutting off the head of the hydra by actually curbing targeting, such as by limiting political targeting to broad-brush contextual buckets.

Hence it writes: “All political advertising services, from adtech that intermediate the placement of ads, to consultancies and advertising agencies producing the advertising campaigns, will have to retain the information they have access to through the provision of their service about the ad, the sponsor and the dissemination of the ad. They will have to transfer this information to the publisher of the political ad — this can be the website or app where the ad is seen by an individual, a newspaper, a TV broadcaster, a radio station, etc. The publisher will need to make the information available to the individual who sees the ad.”

“Transparency of political advertising will help people understand when they see a paid political advertisement,” the Commission further suggests, adding: “With the proposed rules, every political advertisement – whether on Twitter, Facebook or any other online platform – will have to be clearly marked as political advertisement as well as include the identity of the sponsor and a transparency notice with the wider context of the political advertisement and its aims, or a clear indication of where it can be easily retrieved.”

It’s a nice theory but, for one thing, plenty of election interference originates from outside the region where the election itself is taking place.

On that the Commission says it will require organisations that provide political advertising services in the EU but do not have a physical presence there to designate a legal representative in one of the Member States where the services are offered, suggesting: “This will ensure more transparency and accountability of services providers acting from outside the Union.”

How exactly it will require (and enforce) that stipulation isn’t clear.

Another problem is that all these transparency obligations will only apply to “political advertising services”.

Propaganda that gets uploaded to online platforms like Facebook by a mere “user” — aka an entity that does not self-identify as a political advertising service — will apparently escape the need for any transparency or accountability at all.

Even if they’re — y’know — working out of a Russian trollfarm that’s actively trying to destabilize the European Union… Just so long as they claim to be ‘Hans, 32, Berliner, loves cats, hates the CSU’.

Now if platforms like Facebook were perfectly great at identifying, reporting and purging inauthentic activity, fake accounts and shady influence ops in their own backyards it might not be such a problem to leave the door open for “a user” to post unaccountable political propaganda. But a whole clutch of whistleblowers have pointed out, in excruciating detail, that Facebook at least is very much not that.

So that looks like another massive loophole — one which underlines why the only genuine way to fix the problem of online disinformation and election interference is to put an end to behavioral targeting, period, rather than just fiddling around the edges. Not least because tinkering with tepid measures that offer only a flawed, partial transparency risks lulling people into a false sense of security — as well as further normalizing exploitative manipulation (just so long as you have a ‘policy’ in place).

Once online ads and content can be targeted at individuals based on tracking their digital activity and harvesting their personal data for profiling, it’s open season for opaque influence ops and malicious interests to work around whatever political ads transparency rules you try to layer on top of the cheap, highly scalable tools offered by advertising giants like Facebook to keep spreading their propaganda — at the expense of your free and fair elections.

Really what this regulation proposes is to create a large admin burden for advertisers who intend to run genuinely public/above board political campaigns — leaving the underbelly of paid mud slingers, hate spreaders and disinformation peddlers to exploit its plentiful loopholes to run mass manipulation campaigns right through it.

So it will be interesting to see whether the European Parliament takes steps to school the Commission by adding some choice amendments to its draft — as MEPs have been taking a stronger line against microtargeting in recent months.

On penalties, for now, under the Commission proposal, ‘official’ advertising services could be fined for breaking things like the transparency and record-keeping requirements but how much will be determined locally, by Member States — at a level the Commission says should be “effective, proportionate and dissuasive”.

What might that mean? Well under the proposal, national Data Protection Authorities (DPAs) will be responsible for monitoring the use of personal data in political targeting and for imposing fines — so, ultimately, for determining the level of fines that domestic rule-breaking political operators might face.

Which does not exactly inspire a whole lot of confidence. DPAs are, after all, resourced by the same set of political entities — or whichever flavor happens to be in government.

The UK’s ICO carried out an extensive audit of political parties’ data processing activities following the 2018 Cambridge Analytica Facebook data misuse scandal — and in 2020 it reported finding a laundry list of failures across the political spectrum.

So what did the EU’s (at the time) best resourced DPA do about all these flagrant breaches by UK political parties?

The ICO’s enforcement action at that point consisted of — checks notes — issuing a series of recommendations.

There was also a warning that it might take further action in the future. And this summer the ICO did issue one fine: Slapping the Conservative Party with a £10,000 penalty for spamming voters. Which doesn’t really sound very dissuasive tbh.

Earlier this month another of these UK political data offenders, the Labour Party, was forced to fess up to what it dubbed a “data incident” — involving an unnamed third party data processor. It remains to be seen what sanction it may face for failing to protect supporters’ information in that (post-ICO-audit) instance.

Adtech generally has also faced very little enforcement from EU DPAs — despite scores of complaints against its privacy-eviscerating targeting methods — and despite the ICO saying back in 2019 that its methods are rampantly unlawful under existing data protection law.

Vested interests in Europe have been incredibly successful at stymieing regulatory enforcement against invasive ad targeting.

And, apparently, also derailing progress by defanging incoming EU rules — so they won’t do anything much to stop the big-data ‘sausage-factory’ of (in this case) political microtargeting from keeping on slicing ‘n’ dicing up the eyeballs of the citizenry.

In what looks like bad news for adtech giants like Facebook and Google, MEPs in the European Parliament have voted for tougher restrictions on how Internet users’ data can be combined for ad targeting purposes — backing a series of amendments to draft legislation that’s set to apply to the most powerful platforms on the web.

The Internal Market and Consumer Protection Committee (IMCO) today voted overwhelmingly to support beefed up consent requirements on the use of personal data for ad targeting within the Digital Markets Act (DMA); and for a complete prohibition on the biggest platforms being able to process the personal data of minors for commercial purposes — such as marketing, profiling or behaviorally targeted ads — to be added to the draft legislation.

The original Commission proposal for the DMA was notably weak in the area of surveillance business models — with the EU’s executive targeting the package of measures at other types of digital market abuse, such as self-preferencing and unfair T&Cs for platform developers, which its central competition authority was more familiar with.

“The text says that a gatekeeper shall, ‘for its own commercial purposes, and the placement of third-party advertising in its own services, refrain from combining personal data for the purpose of delivering targeted or micro-targeted advertising’, except if there is a ‘clear, explicit, renewed, informed consent’, in line with the General Data Protection Regulation,” IMCO writes in a press release. “In particular, personal data of minors shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.”

It’s fair to say that adtech giants are masters of manipulating user consent at scale — through the use of techniques like A/B testing and dark pattern design — so beefed up consent requirements (for adults) aren’t likely to offer as much of a barrier against ad-targeting abuse as the committee seems to think they might.

Although if Facebook was finally forced to offer an actual opt-out of tracking ads that would still be a major win (as it doesn’t currently give users any choice over being surveilled and profiled for ads).

However the stipulation that children should be totally protected from commercial stuff like profiling and behavioral ads is potentially a lot more problematic for the likes of Facebook and Google — given the general lack of robust age assurance across the entire Internet.

It suggests that if this partial prohibition makes it into EU law, adtech platforms may end up deciding it’s less legally risky to turn off tracking-based ads altogether (in favor of using alternatives that don’t require processing users’ personal data, such as contextual targeting) vs trying to correctly age verify their entire user base in order to firewall only minors’ eyeballs from behavioral ads.

At the very least, such a ban could present big (ad)tech with a compliance headache — and more work for their armies of in-house lawyers — though MEPs have not proposed to torpedo their entire surveillance business model at this juncture.

In recent months a number of parliamentarians have been pushing for just that: an outright ban on tracking-based advertising, period, to be included, as an amendment, in another pan-EU digital regulation that’s yet to be voted on by the committee (aka the Digital Services Act; DSA).

However IMCO does not look likely to go so far in amending either legislative package — despite a call this week by the European Data Protection Board for the bloc to move towards a total ban on behavioral ads, given the risks posed to citizens’ fundamental rights.

Digital Markets Act

The European Parliament is in the process of finalizing its negotiating mandate on one of the aforementioned digital reforms — aka, the DMA — which is set to apply to Internet platforms that have amassed market power by occupying a so-called ‘gatekeeping’ role as online intermediaries, typically giving them a high degree of market leverage over consumers and other digital businesses.

Critics argue this can lead to abusive behaviors that negatively impact consumers (in areas like privacy) — while also chilling fair competition and impeding genuine innovation (including in business models).

For this subset of powerful platforms, the DMA — which was presented as a legislative proposal at the end of last year — will apply a list of pre-emptive ‘dos and don’ts’ in an attempt to rebalance digital markets that have become dominated by a handful of (largely) US-based giants.

EU lawmakers argue the regulation is necessary to respond to evidence that digital markets are prone to tipping and unfair practices as a result of asymmetrical dynamics such as network effects, big data and ‘winner takes all’ investor strategies.

Under the EU’s co-legislative process, once the Commission proposes legislation the European Parliament (consisting of directly elected MEPs) and the Council (the body that represents Member States’ governments) must adopt their own negotiating mandates — and then attempt to reach consensus — meaning there’s always scope for changes to the original draft, as well as a long period where lobbying pressure can be brought to bear to try to influence the final shape of the law.

The IMCO committee vote this morning will be followed by a plenary vote in the European Parliament next month to confirm MEPs’ negotiating mandate — before the baton passes to the Council next year. There trilogue negotiations, between the Parliament, Commission and Member States’ governments, are slated to start under the French presidency in the first semester of 2022. Which means more jockeying, horse-trading and opportunities for corporate lobbying lie ahead. And (likely) many months before any vote to approve a final DMA text.

Still, MEPs’ push to strengthen the tech giant-targeting package is notable nonetheless.

A second flagship digital update, the DSA, which will apply more broadly to digital services — dealing with issues like illegal content and algorithmic recommendations — is still being debated by MEPs and committee votes like IMCO’s remain outstanding.

So the DMA has passed through parliamentary debate relatively quickly (vs the DSA), suggesting there’s political consensus (and appetite) to rein in tech giants.

In its press release summarizing the DMA amendments, rapporteur Andreas Schwab (EPP, Germany) made this point, loud and clear, writing: “The EU stands for competition on the merits, but we do not want bigger companies getting bigger and bigger without getting any better and at the expense of consumers and the European economy. Today, it is clear that competition rules alone cannot address all the problems we are facing with tech giants and their ability to set the rules by engaging in unfair business practices. The Digital Markets Act will rule out these practices, sending a strong signal to all consumers and businesses in the Single Market: rules are set by the co-legislators, not private companies!”

In other interesting tweaks, the committee has voted to expand the scope of the DMA — to cover not just online intermediation services, social networks, search engines, operating systems, online advertising services, cloud computing, and video-sharing services (i.e. where those platforms meet the relevant criteria to be designated “gatekeepers”) — but also add in web browsers (hi Google Chrome!), virtual assistants (Ok Google; hey Siri!) and connected TV (hi, Android TV) too.

On gatekeeper criteria, MEPs backed an increase in the quantitative thresholds for a company to fall under scope — to €8 billion in annual turnover in the European Economic Area; and a market capitalisation of €80 billion.

The sorts of tech giants who would qualify — based on that turnover and market cap alone (NB: other criteria would also apply) — include the usual suspects of Apple, Amazon, Meta (Facebook), Google, Microsoft etc but also — potentially — the European booking platform, Booking.com.

Although the raised threshold may keep another European gatekeeper, music streaming giant Spotify, out of scope.

MEPs also supported additional criteria for a platform to qualify as a gatekeeper and fall under the scope of the DMA: namely, providing a “core platform service” in at least three EU countries, and having at least 45 million monthly end users and more than 10,000 business users. The committee also noted that these thresholds do not prevent the Commission from designating other companies as gatekeepers “when they meet certain conditions”.
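Purely for illustration, here is a minimal sketch of how those quantitative thresholds could be checked in aggregate. The function and field names are hypothetical, and the sketch assumes the turnover and market cap thresholds are alternative routes in (the committee text summarized above doesn’t spell out whether they are cumulative):

```python
# Illustrative sketch of the quantitative gatekeeper thresholds backed by IMCO.
# The numbers come from the amendments summarized above; the function and field
# names are hypothetical, not anything defined in the DMA. Assumption: clearing
# either the EEA turnover bar or the market cap bar is treated as sufficient.

def meets_quantitative_thresholds(
    eea_turnover_eur: float,         # annual turnover in the EEA, in euros
    market_cap_eur: float,           # market capitalisation, in euros
    eu_countries_with_service: int,  # EU countries where a "core platform service" is offered
    monthly_end_users: int,
    business_users: int,
) -> bool:
    return (
        (eea_turnover_eur >= 8e9 or market_cap_eur >= 80e9)
        and eu_countries_with_service >= 3
        and monthly_end_users >= 45_000_000
        and business_users >= 10_000
    )

# Hypothetical platform: €10BN EEA turnover, €90BN market cap, services in
# five EU countries, 60M monthly end users, 25,000 business users.
print(meets_quantitative_thresholds(10e9, 90e9, 5, 60_000_000, 25_000))  # True
```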

In other changes, the committee backed adding new provisions around the interoperability of services, such as for number-independent interpersonal communication services and social network services.

And — making an intervention on so-called ‘killer acquisitions’ — MEPs voted for the Commission to have powers to impose “structural or behavioural remedies” where gatekeepers have engaged in systematic non-compliance.

“The approved text foresees in particular the possibility for the Commission to restrict gatekeepers from making acquisitions in areas relevant to the DMA in order to remedy or prevent further damage to the internal market. Gatekeepers would also be obliged to inform the Commission of any intended concentration,” they note on that.

The committee backed a centralized enforcement role for the Commission — while adding some clarifications around the role of national competition authorities.

Failures of enforcement have been a major bone of contention around the EU’s flagship data protection regime, the GDPR, which allows for enforcement to be devolved to Member States but also for forum shopping and gaming of the system — as a couple of EU countries have outsized concentrations of tech giants on their soil and have been criticized as bottlenecks to effective GDPR enforcement.

(Only today, for example, Ireland’s Data Protection Commission has been hit with a criminal complaint accusing it of procedural blackmail in an attempt to gag complainants in a way that benefits tech giants like Facebook… )

On sanctions for gatekeepers which break the DMA rules, MEPs want the Commission to impose fines of “not less than 4% and not exceeding 20%” of total worldwide turnover in the preceding financial year — which, in the case of adtech giants Facebook’s and Google’s full year 2020 revenue would allow for theoretical sanctions in the $3.4BN-$17.2BN and $7.2BN-$36.3BN range, respectively.
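As a quick sanity check on those ranges, the arithmetic is simply 4% and 20% of the prior financial year’s worldwide turnover. Here is a minimal sketch using approximate full-year 2020 revenue figures — the exact revenue basis behind the figures quoted above isn’t stated, so treat the inputs (and outputs) as rough:

```python
# Rough arithmetic behind the theoretical DMA fine ranges quoted above.
# Revenue inputs are approximate full-year 2020 figures (in billions of USD)
# and are illustrative only; the article's exact revenue basis isn't stated.

def dma_fine_range(worldwide_turnover_bn: float) -> tuple[float, float]:
    """Return the (minimum, maximum) fine under the 'not less than 4% and
    not exceeding 20%' of worldwide turnover formulation backed by MEPs."""
    return 0.04 * worldwide_turnover_bn, 0.20 * worldwide_turnover_bn

for company, revenue_bn in [("Facebook/Meta", 86.0), ("Google/Alphabet", 182.0)]:
    low, high = dma_fine_range(revenue_bn)
    print(f"{company}: ${low:.1f}BN - ${high:.1f}BN")
# Facebook/Meta: $3.4BN - $17.2BN
# Google/Alphabet: $7.3BN - $36.4BN
```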

Which would be a significant step up on the sorts of regulatory sanctions tech giants have faced to date in the EU.

Facebook has yet to face any fines under GDPR, for example — over three years since it came into application, despite facing numerous complaints. (Although Facebook-owned WhatsApp was recently fined $267M for transparency failures.)

While Google received an early $57M GDPR fine from France before it moved users to fall under Ireland’s legal jurisdiction — where its adtech has been under formal investigation since 2019 (without any decisions/sanctions as yet).

Mountain View has also faced a number of penalties elsewhere in Europe, though — with France again leading the charge and slapping Google with a $120M fine for dropping tracking cookies without consent (under the EU ePrivacy Directive) last year.

France’s competition watchdog has also gone after Google — issuing a $268M penalty this summer for adtech abuses and a $592M sanction (also this summer) related to requirements to negotiate licensing fees with news publishers over content reuse.

It’s interesting to imagine such stings as a mere amuse-bouche compared to the sanctions EU lawmakers want to be able to hand out under the DMA.

Facebook’s problems with European privacy law could be about to get a whole lot worse. But ahead of what may soon be a major (and long overdue) regulatory showdown over the legality of its surveillance-based business model, Ireland’s Data Protection Commission (DPC) is facing a Facebook-shaped problem of its own: It’s now the subject of a criminal complaint alleging corruption and even bribery in the service of covering its own backside (we paraphrase) and shrinking public understanding of the regulatory problems facing Facebook’s business.

European privacy campaign group noyb has filed the criminal complaint against the Irish DPC, which is Facebook’s lead regulator in the EU for data protection.

noyb is making the complaint under Austrian law — reporting the Irish regulator to the Austrian Office for the Prosecution of Corruption (aka WKStA) after the DPC sought to use what noyb terms “procedural blackmail” to try to gag it and prevent it from publishing documents related to General Data Protection Regulation (GDPR) complaints against Facebook.

The not-for-profit alleges that the Irish regulator sought to pressure it to sign an “illegal” non-disclosure agreement (NDA) in relation to a public procedure — its complaint argues there is no legal basis for such a requirement — accusing the DPC of seeking to coerce it into silence, as Facebook would surely wish, by threatening not to comply with its regulatory duty to hear the complainant unless noyb signed the NDA. Which is quite the (alleged) quid-pro-quo.

“The DPC acknowledges that it has a legal duty to hear us but it now engaged in a form of ‘procedural coercion’,” said noyb chair, Max Schrems, in a statement. “The right to be heard was made conditional on us signing an agreement, to the benefit of the DPC and Facebook. It is nothing but an authority demanding to give up the freedom of speech in exchange for procedural rights.”

The regulator has also demanded noyb remove documents it has previously made public — related to the DPC’s draft decision of a GDPR complaint against Facebook — again without clarifying what legal basis it has to make such a demand.

As noyb points out, it is based in Austria, not Ireland — so is subject to Austrian law, not Irish law. But, regardless, even under Irish law it argues there’s no legal duty for parties to keep documents confidential — pointing out that Section 26 of the Irish Data Protection Act, which was cited by the DPC in this matter, only applies to DPC staff (“relevant person”), not to parties.

“Generally we have very good and professional relationships with authorities. We have not taken this step lightly, but the conduct of the DPC has finally crossed all red lines. The basically deny us all our rights to a fair procedure unless we agree to shut up,” added Schrems.

He went on to warn that “Austrian corruption laws are far reaching” — and to further emphasize: “When an official requests the slightest benefit to conduct a legal duty, the corruption provisions may be triggered. Legally there is no difference between demanding an unlawful agreement or a bottle of wine.”

All of which looks exceptionally awkward for the Irish regulator. Which already, let’s not forget — at the literal start of this year — agreed to “swiftly” finalize another fractious complaint made by Schrems, this one relating to Facebook’s EU-US data transfers, and which dates all the way back to 2013, following noyb bringing a legal procedure.

(But of course there’s still no sign of a DPC resolution of that Facebook complaint either… So, uhhh, ‘Siri: Show me regulatory capture’… )

Last month noyb published a draft decision by the DPC in relation to another (slightly less vintage) complaint against Facebook — which suggested the tech giant’s lead EU data regulator intended not to challenge Facebook’s attempt to use an opaque legal switch to bypass EU rules (by claiming that users are actually in a contract with it to receive targeted ads, ergo GDPR consent requirements do not apply).

The DPC had furthermore suggested a wrist-slap penalty of $36M — for Facebook failing transparency requirements over the aforementioned ‘ad contract’.

That decision remains to be finalized because — under the GDPR’s one-stop-shop mechanism for deciding cross-border complaints — other EU DPAs have a right to object to a lead supervisor’s preliminary decision and can push for a different outcome. Which is what noyb is suggesting may be about to happen vis-a-vis this particular Facebook complaint saga.

Winding back slightly, despite the EU’s GDPR being well over three years old (in technical application terms), the DPC has yet to make a single final finding against Facebook proper.

So far it’s only managed one decision against Facebook-owned WhatsApp — which resulted in an inflated financial penalty for transparency failures by the messaging platform after other EU DPAs intervened to object to a (similarly) low-ball draft sanction Ireland had initially suggested. In the end WhatsApp was hit with a fine of $267M — also for breaching GDPR transparency obligations. A notable increase on the DPC’s offer of a fine of up to $56M.

The tech giant is appealing that penalty — but has also said it will be tweaking its privacy policy in Europe in the meanwhile. So it’s a (hard won) win for European privacy advocates — for now.

The WhatsApp GDPR complaint is just the tip, of course. The DPC has been sitting, hen-like, on a raft of data protection complaints against Facebook and other Facebook-owned platforms — including several filed by noyb on the very day the regulation came into technical application, all the way back in May 2018.

These ‘forced consent’ complaints by noyb strike at the heart of the headlock Facebook applies to users by not offering them an opt-out from tracking-based advertising. Instead the ‘deal’ Facebook (now known as Meta) offers is a take-it-or-leave-it ‘choice’ — either accept ads or delete your account — despite the GDPR setting a robust standard for what can legally constitute consent, which states it must be specific, informed and freely given.

Arm twisting is not allowed. Yet Facebook has been twisting Europeans’ arms before and since the GDPR, all the same.

So the ‘forced consent’ complaints — if they do ever actually get enforced — have the potential to purge the tech giant’s surveillance-based business model once and for all. As, perhaps, does the vintage EU-US data transfers issue. (Certainly it would crank up Facebook’s operational costs if it had to federate its service so that Europeans’ data was stored and processed within the EU to fix the risk of US government mass surveillance.)

However, per the draft DPC decision on the forced consent issue, published (by noyb) last month, the Irish regulator appeared to be preparing to (at best) sidestep the crux question of the legality of Facebook’s data mining, writing in a summary: “There is no obligation on Facebook to seek to rely solely on consent for the purposes of legitimising personal data processing where it is offering a contract to a user which some users might assess as one that primarily concerns the processing of personal data. Nor has Facebook purported to rely on consent under the GDPR.”

noyb has previously accused the DPC of holding secret meetings with Facebook around the time it came up with the claimed consent bypass and just as the GDPR was about to come into application — implying the regulator was seeking to support Facebook in finding a workaround for EU law.

The not-for-profit also warned last month that if Facebook’s relabelling “trick” (i.e. switching a claim of ‘consent’ to a claim of ‘contract’) were to be accepted by EU regulators it would undermine the whole of the GDPR — making the much lauded data protection regime trivially easy for data-mining giants to bypass.

Likewise, noyb argues, had it signed the DPC’s demanded NDA it would have “greatly benefited Facebook”.

It would also have helped the DPC by keeping a lid on the awkward detail of lengthy and labyrinthine proceedings — at a time when the regulator is facing rising heat over its inaction against big tech, including from lawmakers on home soil. (Some of whom are now pushing for reform of the DPC — including the suggestion that more commissioners should be recruited to remove sole decision-making power from the current incumbent, Helen Dixon.)

“The DPC is continuously under fire by other DPAs, in public inquiries and the media. If an NDA would hinder noyb’s freedom of speech, the DPC’s reputational damage could be limited,” noyb suggests in a press release, before going on to note that had it been granted a benefit by signing an NDA (“in direct exchange for the DPC to conduct its legal duties”) its own staff could have potentially committed a crime under the Austrian Criminal Act.

The not-for-profit instead opted to dial up publicity — and threaten a little disinfecting sunlight — by filing a criminal complaint with the Austrian Office for the Prosecution of Corruption.

It’s essentially telling the DPC to put up a legal defence of its procedural gagging attempts — or, well, shut up.

Here’s Schrems again: “We very much hope that Facebook or the DPC will file legal proceedings against us, to finally clarify that freedom of speech prevails over the scare tactics of a multinational and its taxpayer-funded minion. Unfortunately we must expect that they know themselves that they have no legal basis to take any action, which is why they reverted to procedural blackmail in the first place.”

Nor is noyb alone in receiving correspondence from the DPC that’s seeking to apply swingeing confidentiality clauses to complainants. TechCrunch has reviewed correspondence sent to the regulator earlier this fall by another complainant who writes to query its legal basis for a request to gag disclosure of correspondence and draft reports.

Despite repeated requests for clarification, the DPC appears to have entirely failed — over the course of more than a month — to reply to the request for its legal basis for making such a request.

This suggests noyb’s experience of scare tactics without legal substance is not unique and backs up its claim that the DPC has questions to answer about how it conducts its office.

We’ll be reaching out to the DPC for comment on the allegations it’s facing.

But what about Facebook? noyb’s press release goes on to predict a “tremendous commercial problem” looming for the data-mining giant — as it says DPC correspondence “shows that other European DPAs have submitted ‘relevant and reasoned objections’ and oppose the DPC’s view” [i.e. in the consent bypass complaint against Facebook].

“If the other DPAs have a majority and ultimately overturn the DPC’s draft decision, Facebook could face a legal disaster, as most commercial use of personal data in the EU since 2018 would be retroactively declared illegal,” noyb suggests, adding: “Given that the other DPAs passed Guidelines in 2019 that are very unfavourable to Facebook’s position, such a scenario is highly likely.”

The not-for-profit has more awkward revelations for the DPC and Facebook in the pipe, too.

It says it’s preparing fresh document releases in the coming weeks — related to correspondence from the DPC and/or Facebook — as a “protest” against attempts to gag it and to silence democratic debate about public procedures.

“On each Sunday in advent, noyb will publish another document, together with a video explaining the documents and an analysis why the use of these documents is fully compliant with all applicable laws,” it notes, adding that what it’s billing as the “advent reading” will be published on noyb.eu — “so tune in!”.

So it looks like the next batch of ‘Facebook Papers’ that Meta would really rather you didn’t see will be dropping soon…


 

The European Data Protection Board (EDPB), an expert steering body which advises EU lawmakers on how to interpret rules wrapping citizens’ personal data, has warned the bloc’s legislators that a package of incoming digital regulations risks damaging people’s fundamental rights — without “decisive action” to amend the suite of proposals.

The reference is to draft rules covering digital platform governance and accountability (the Digital Services Act; DSA); proposals for ex ante rules for Internet gatekeepers (the Digital Markets Act; DMA), the Data Governance Act (DGA), which aims to encourage data reuse as an engine for innovation and AI; and the Regulation on a European approach for Artificial Intelligence (AIR), which sets out a risk-based framework for regulating applications of AI. 

The EDPB’s analysis further suggests that the package of pan-EU digital rules updates will be hampered by fragmented oversight and legal inconsistencies — potentially conflicting with existing EU data protection law unless clarified to avoid harmfully inconsistent interpretations.

Most notably, in a statement published today following a plenary meeting yesterday, the EDPB makes a direct call for EU legislators to implement stricter regulations on targeted advertising in favor of alternatives that do not require the tracking and profiling of Internet users — going on to call for lawmakers to consider “a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking”.

Furthermore, the EDPB statement urges that the profiling of children for ad targeting should “overall be prohibited”.

As it happens, the European Parliament’s internal market and consumer protection (IMCO) committee was today holding a hearing to discuss targeted advertising, as MEPs consider amendments to a wider digital regulation package known as the Digital Services Act (DSA).

There has been a push by a number of MEPs for an outright ban on tracking-based ads to be added to the DSA package — given rising concern about the myriad harms flowing from surveillance-based ads, from ad fraud to individual manipulation and democratic erosion (to name a few).

However MEPs speaking during the IMCO committee hearing today suggested there would not be overall support in the Parliament to ban tracking ads — despite compelling testimony from a range of speakers articulating the harms of surveillance-based advertising and calling out the adtech industry for misleading lobbying on the issue by seeking to conflate targeting and tracking.

Although retail lobbyist Ilya Bruggeman did speak up for tracking and profiling — parroting the big adtech platforms’ line that SMEs rely on privacy-invasive ads — even as other speakers at the committee session aligned with civil society in challenging the claim.

Johnny Ryan, a former adtech industry insider (now a fellow at the ICCL) — who has filed numerous GDPR complaints against real-time bidding (RTB)’s rampant misuse of personal data, dubbing it the biggest security breach in history — kicked off his presentation with a pointed debunking of industry spin, telling MEPs that the issue isn’t, as the title of the session had it, “targeted ads”; rather the problem boils down to “tracking-based ads”.

“You can have targeting, without having tracking,” he told MEPs, warning: “The industry that makes money from tracking wants you to think otherwise. So let’s correct that.”

The direction of travel of the European Parliament on behavioral ads (i.e. tracking-based targeting) in relation to another key digital package, the gatekeeper-targeting DMA, also looks like it will eschew a ban for general users in favor of beefing up consent requirements. Which sounds like great news for purveyors of dark pattern design.

That said, MEPs do appear to be considering a prohibition on tracking and profiling of minors for ad targeting — which raises questions of how that could be implemented without robust age verification also being implemented across all Internet services… Which, er, is not at all the case currently — nor in most people’s favored versions of the Internet. (The UK government might like it though.)

So, if that ends up making it into the final version of the DMA, one way for services to comply/shrink their risk (i.e. of being accused of ad-targeting minors) could be for them to switch off tracking ads for all users by default — unless they really have robustly age-verified a specific user is an adult. (So maybe adtech platforms like Facebook would start requiring users to upload a national ID to use their ‘free’ services… )

In light of MEPs’ tentativeness, the EDPB’s intervention looks significant — although the body does not have lawmaking powers itself.

But by urging EU co-legislators to take “decisive action” it’s firing a clear shot across the Council, Parliament and Commission’s bows: screw their courage to the sticking place, avoid the bear-pit of lobbying self-interest, and remember that alternative forms of online advertising are available. And profitable.

“Our concerns consist of three categories: (1) lack of protection of individuals’ fundamental rights and freedoms; (2) fragmented supervision; and (3) risks of inconsistencies,” the Board writes in the statement, going on to warn that it “considers that, without further amendments, the proposals will negatively impact the fundamental rights and freedoms of individuals and lead to significant legal uncertainty that would undermine both the existing and future legal framework”.

“As such, the proposals may fail to create the conditions for innovation and economic growth envisaged by the proposals themselves,” it also warns.

The EDPB’s concerns for citizens’ fundamental rights also encompass the Commission’s proposal to regulate high risk applications of artificial intelligence, with the body saying the draft AI Regulation does not go far enough to prevent the development of AI systems that are intended to categorize individuals — such as by using their biometrics (e.g. facial recognition) and according to ethnicity, gender, and/or political or sexual orientation, or other prohibited grounds of discrimination.

“The EDPB considers that such systems should be prohibited in the EU and calls on the co-legislators to include such a ban in the AIR,” it writes. “Furthermore, the EDPB considers that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited, except for certain well-specified use-cases, namely for health or research purposes, subject to appropriate safeguards, conditions and limits.”

The Board has also reiterated its earlier call for a ban on the use of AI for remote biometric surveillance in public places — following a joint statement with the European Data Protection Supervisor back in June.

MEPs have also previously voted for a ban on remote biometric surveillance.

The Commission proposal offered a very tepid, caveated restriction which has been widely criticized as insufficient.

“[G]iven the significant adverse effect for individuals’ fundamental rights and freedoms, the EDPB reiterates that the AIR should include a ban on any use of AI for an automated recognition of human features in publicly accessible spaces — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — in any context,” the Board writes in the statement.

“The proposed AIR currently allows for the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement in certain cases. The EDPB welcomes the recently adopted EP Resolution where the significant risks are highlighted.”

On oversight, the EDPB sounds concerned about data protection bodies being bypassed by the bloc’s ambitious flotilla of digital regulation updates — urging “complementarity in oversight” to enhance legal certainty, as well as emphasizing the need for DPAs to be provided with “sufficient resources to perform these additional tasks”. (A perennial problem in an age of ever bigger data.)

Legal certainty would also be improved by including explicit references to existing data protection legislation (such as the GDPR and ePrivacy Directive), it argues, to avoid the risk of incoming data packages weakening core concepts of the GDPR such as consent to data processing.

“It also creates the risk that certain provisions could be read as deviating from the GDPR or the ePrivacy Directive. Consequently, certain provisions could easily be interpreted in a manner that is inconsistent with the existing legal framework and subsequently lead to legal uncertainty,” the Board warns.

So, far from the EU’s much-vaunted digital regulation reboot strengthening protections for citizens and boosting their trust in data-driven services, there is a risk of death by a thousand cuts — and/or regulatory complexity — to foundational fundamental rights, with potentially ruinous consequences for the bloc’s much-proclaimed ‘European values’.

 

Conductor — a marketing technology company that was snapped up by WeWork at the height of the latter company’s expansion ambitions, only then to buy itself out in the wake of WeWork’s collapse — has raised its first round of funding as a once-again independent startup. It has picked up $150 million, money that it will be using to continue investing in its technology and building out its business: an organic marketing platform aimed at SEO, content and web marketing teams, leveraging insights from search traffic to help build more accurate marketing strategies.

Conductor’s CEO and co-founder Seth Besmertnik has confirmed to us that the deal — led by Bregal Sagemount, with other investors not being disclosed — was made at a $525 million post-money valuation. Relatively speaking, that is a big leap considering that the management buyout he led for himself and others at the company was made at a price of $3.5 million, according to data from PitchBook.

About half of the investment, he said, would be in the form of secondary shares that are going to the employee-owners of the business; and half is new equity to put into the business.

For those who might not be familiar with Conductor’s backstory, here’s a brief summary, since it’s relevant to what the startup is doing today:

Conductor’s appeal to WeWork back in 2018 was based in part around WeWork already being one of Conductor’s big customers. The highly capitalised WeWork was using Conductor’s marketing technology to figure out what businesses might be looking for when it came to office space, and it made sure that its marketing was aligned with this to drum up more business for WeWork itself. That proved to be a very successful partnership — enough to give WeWork the idea that if it owned Conductor itself, it could leverage its technology for its existing business customers to help them grow their “virtual” real estate presence as much as their physical one, just as WeWork itself had done.

For Conductor, the deal also made sense, Besmertnik said, because Conductor already counted a number of enterprises among its customers, and WeWork potentially could provide another route to it reaching more of them. 

Those developments, as we now know, never quite came to fruition as the two companies thought they might. Luckily, the acquisition never touched Conductor’s pre-existing business. So when it started to become apparent that employees at the division might get laid off as part of WeWork’s drastic cost cuts, Besmertnik and the existing team hatched a plan to buy out the business to give it a shot at survival. When it went independent again in 2019, Conductor left with its customer list intact, and it has built on it since then. Between then and now, clients that it has added to its list include Microsoft, GlaxoSmithKline, and AT&T, with other customers including Visa, Twitter, Comcast and LG — some 450 big names in all.

At its heart, Conductor’s technology would typically be one of many tools — maybe dozens these days — that a marketer would use to both gather and analyse data to formulate and run campaigns. To that end, the company already integrates with dozens of other sources and platforms to make that management and use easier for customers. (The list includes insights tools like Dragon Metrics, Google Trends, TalkWalker, DeepCrawl, and SEMrush; project management platforms like Jira, Asana, Trello and WorkFront; and measurement tech from Adobe, Google, Webtrends, and more.)

But what is also interesting is that its approach of focusing on search is possibly more relevant today than ever before: not only are there an increasing number of controls being put in place to safeguard data (whether those are regulatory measures like GDPR, or choices being made by platform providers like Apple), but we are also seeing rapid growth of walled gardens in specific apps: audiences spend a lot of time in gaming environments or social media platforms like TikTok and Instagram, and that gives marketers a lot less visibility when it comes to understanding what users want, or what they are doing.

Search essentially breaks through that, and Conductor’s belief is that it’s a big enough area that it provides a window into those activities and needs.

“No one ever lies to a search engine,” said Besmertnik. “Whenever someone needs to buy something or look something up, they are searching on Google. Big social media companies may be taking eyeballs away from other forms of media but when you want to act or seek something, search is ingrained in our blood. They still go to search engines.”

The traffic going to search is higher than ever, he points out, in part because of Covid-19 and the shift that it brought around more people carrying out their lives online — which has served as a big fillip to Conductor’s business, and specifically to how companies are leveraging search.

“Covid has hit the accelerator on digital for many companies that were not digital-first before,” he said. “You can’t just have a great digital experience now. It has to be discoverable. It has to be able to be found.”

There are a lot of other martech companies out there that have also discovered the power of search — specifically search-engine optimization specialists like SEMrush, Botify, BrightEdge, DeepCrawl and many more. Conductor’s customer list is one way that it has stepped out from the crowd to appear at the top of the results, so to speak.

“At Bregal Sagemount, we pride ourselves on working with market leaders. The feedback we heard from Conductor’s customers and the market was definitive — Conductor is the leader in organic marketing,” said partner Michael Kosty, in a statement. “We are excited to partner with Seth and the whole Conductor team to advance their technology and their mission to empower brands to transform their wisdom into marketing that helps people.” Kosty is joining the board with this round.

Google’s challenge to a 2017 EU antitrust finding against its shopping comparison service (Google Shopping) has been largely dismissed by the General Court of the European Union.

It’s an important win for the Commission’s antitrust division, which — in recent years — has brought a string of enforcements against big tech, including multiple decisions against Google, but which this summer lost a major case over Apple’s back taxes.

Today the General Court of the EU has upheld the €2.42BN penalty imposed on Google and its parent company Alphabet over four years ago for competition abuses related to Google Shopping, a product comparison search service.

It’s not clear whether Google will seek to pursue further appeals vis-a-vis Google Shopping. A spokesperson declined to comment when asked.

Back in 2017 the Commission found that Google had abused its dominance in search by giving prominent placement to its eponymous shopping comparison service — while simultaneously demoting rivals in organic search results.

Google and its parent entity Alphabet appealed the decision but the General Court has dismissed most of their claims — agreeing that the sanctioned actions were anti-competitive and that Google had favored its own comparison shopping service over competing services rather than serving a better result.

In a press release on the judgement, the Court also calls out another problematic tactic, writing: “While Google did subsequently enable competing comparison shopping services to enhance the quality of the display of their results by appearing in its ‘boxes’ in return for payment, the General Court notes that that service depended on the comparison shopping services changing their business model and ceasing to be Google’s direct competitors, becoming its customers instead.”

In additional findings, the Court agreed there were harmful effects for Google’s competitors of its anti-competitive behavior. And it rejected an argument by Google — which had sought to claim that competition for comparison shopping services remains strong on account of the presence of merchant platforms on that market — agreeing with the Commission’s assessment that those platforms are not on the same market.

In one bright spot for Google, the Court found that the Commission did not establish that the tech giant’s conduct had had (even potential) anticompetitive effects on the market for general search services. So it annulled the finding of an infringement in respect of that market alone.

But, again, it upheld the Commission’s analysis of the market for specialised search services for comparison shopping (and Google’s anti-competitive activity within it).

The Court also rejected Google’s claim that its conduct was objectively justified because it ‘improved the quality of its search service’.

And it dismissed another claim by Google of technical constraints preventing it from providing equal treatment.

“Google has not demonstrated efficiency gains linked to that practice that would counteract its negative effects on competition,” it added in the press release.

In upholding the level of financial penalty imposed by the Commission, the Court noted that the portion of the decision it annulled had no impact on the amount of the fine (“since the Commission did not take the value of sales on that market into consideration in order to determine the basic amount of the fine”); and emphasized what is described as the “particularly serious nature of the infringement” — also taking into account the fact the conduct was intentional, rather than negligent.

The Commission said the judgement “delivers the clear message that Google’s conduct was unlawful and it provides the necessary legal clarity for the market”.

“Comparison shopping delivers an important service to consumers, at a time when ecommerce has become more and more important for retailers and consumers. As digital services have become omnipresent in our society nowadays, consumers should be able to rely on them in order to make informed and unbiased choices,” the Commission said in the statement.

It added that it will continue to use “all tools at its disposal to address the role of big digital platforms on which businesses and users depend to, respectively, access end users and access digital services” — pointing to its Digital Markets Act regulatory proposal, that’s currently being debated by the European Parliament and Council, and which it said aims to ensure “fairness and contestability”.

Responding to the ruling in its own statement, a Google spokesperson sought to downplay any significance, writing:

“Shopping ads have always helped people find the products they are looking for quickly and easily, and helped merchants to reach potential customers. This judgement relates to a very specific set of facts and while we will review it closely, we made changes back in 2017 to comply with the European Commission’s decision. Our approach has worked successfully for more than three years, generating billions of clicks for more than 700 comparison shopping services.”

However one rival to Google in the local search space — Yelp — seized on the judgement, saying it establishes a framework for “the swift assessment of the illegality of this type of conduct in other verticals”, and calling on the Commission to take action against Google over local search.

“Yelp welcomes today’s ruling of the European General Court, which has found Google guilty of abusing its dominance in general search to extinguish competitors in vertical search services without any efficiency justification,” said Luther Lowe, SVP, public policy, in a statement. “While the decision focuses on comparison shopping, it establishes a framework for the swift assessment of the illegality of this type of conduct in other verticals, namely local search. The local search market has not yet tipped, and European consumers conduct local searches on Google billions of times per week.

“Rather than accept a Pyrrhic victory, the European Commission must now take this favorable precedent and prosecute Google for its parallel abuses in the local search market and allow services like Yelp to compete on the merits,” he went on. “It may be hard to remember now, but Google and its Big Tech ilk were not always this unpopular. In 2015, Vice-President Vestager exhibited incredible courage by showing the world that Google’s abuses were not acceptable. She should be commended for this courage.

“But history’s verdict on this period will be based on whether this odyssey ultimately produced a tangible impact for European consumers, which is why it is vital that these tools be utilized in markets where competition remains salvageable.”

In the meanwhile, Google has a pipeline of appeals against other EU antitrust findings (Android and AdSense), as well as an open EU investigation of its adtech — not to mention a raft of antitrust cases on home soil.

So its lawyers will be kept very busy regardless of the ramifications of today’s loss.