Australia’s competition watchdog is the latest to push for legal powers to curb Google’s dominance in the adtech sector.

It made the call as it published the final report of its inquiry into competition concerns in the digital advertising sector. In the report, the Australian Competition and Consumer Commission (ACCC) concludes that new regulatory solutions are needed to address Google’s dominance and restore competition to the adtech sector — “for the benefit of businesses and consumers”.

The tech giant’s grip on first-party data is a particular focus of the report, with the regulator floating the idea that special measures may be needed to tackle Google’s dominance — such as data separation powers or data access requirements.

“We have identified systemic competition concerns relating to conduct over many years and multiple adtech services, including conduct that harms rivals. Investigation and enforcement proceedings under general competition laws are not well suited to deal with these sorts of broad concerns, and can take too long if anti-competitive harm is to be prevented,” said ACCC chair Rod Sims in a statement.

“We are concerned that the lack of competition has likely led to higher adtech fees. An inefficient adtech industry means higher costs for both publishers and advertisers, which is likely to reduce the quality or quantity of online content and ultimately results in consumers paying more for advertised goods,” he added.

In a specific finding against Google, the ACCC found the tech giant has used its position to preference its own services (aka self-preferencing) and shield them from competition — with the watchdog giving the example of how Google prevents rival adtech services from accessing ads on YouTube (which it said gives Google’s adtech services an “important advantage”).

More generally, it found Google has a dominant position in key parts of the adtech supply chain — estimating that more than 90% of ad impressions traded via the adtech supply chain passed through at least one Google service last year. 

Google’s dominance is underpinned by multiple factors, per the ACCC’s analysis — including access to consumer and other data; access to exclusive inventory; and integration across its adtech services.

The report also highlights key acquisitions by Google — YouTube in 2006, DoubleClick in 2007 and AdMob in 2009 — which the regulator said helped Google entrench its position in adtech.

The lack of transparency in the sector is another target, with the report highlighting opaque pricing and operations that it said compound the complexity of the market, making it difficult for advertisers and publishers to understand how the supply chain is functioning and to detect misconduct.

The UK’s competition watchdog highlighted similar concerns in its own adtech sector report last year. And UK lawmakers are now working on a digitally focused reform of domestic competition law.

As well as calling for new legal powers to curb Google’s dominance in the advertising sector, the ACCC recommends that the industry establish standards — such as requiring providers to publish average fees and take rates to enable their customers to easily compare fees across different providers and services.

It also recommends an industry standard to enable “full and independent verification of the services advertisers use in the supply chain”. And, in another critical observation, it flags Google’s refusal to participate in the publisher-led ‘header bidding’ push — an industry initiative, developed around 2014/15, that tried to boost competition for publishers’ inventory but was stymied by Google’s lack of support — noting that Google previously allowed its own services a ‘last look’ opportunity to outbid rivals.

“Google has used its vertically integrated position to operate its adtech services in a way that has, over time, led to a less competitive adtech industry. This conduct has helped Google to establish and entrench its dominant position in the ad tech supply chain,” said Sims.

“Google’s activities across the supply chain also mean that, in a single transaction, Google can act on behalf of both the advertiser (the buyer) and the publisher (the seller) and operate the ad exchange connecting these two parties. As the interests of these parties do not align, this creates conflicts of interest for Google which can harm both advertisers and publishers.”

Perhaps the really striking point here is that none of the ACCC’s findings feel especially new. Rather, these are problems that regulators and lawmakers all over the world have been fixating on — and considering how best to fix.

The Australian watchdog’s report follows a major penalty levied against Mountain View in France this summer, for instance, in a case also relating to self-preferencing in the adtech sector.

France’s competition authority also extracted a number of commitments from Google on interoperability in the adtech market.

The ACCC is recommending that the government Down Under create rules to manage conflicts of interest; prevent anti-competitive self-preferencing; and ensure rival adtech providers “can compete on their merits” — echoing many of the concerns European Union legislators have similarly identified in a set of proposed ex ante rules aimed at tech giants like Google (aka, so-called “gatekeeper” platforms).

And, as mentioned above, the UK is also planning to update competition rules to give regulators bespoke powers to tackle platform giants. In Germany, meanwhile, legislators have already updated competition rules to target digital giants, passing an update to the law at the start of this year that gives the antitrust regulator powers to intervene against Internet giants by, for example, banning self-preferencing.

The ACCC notes that it’s considering specific allegations against Google under existing competition laws. But the report emphasizes that new regulatory mechanisms are essential to tackle its dominance.

Simultaneously, Australia is considering broader regulations for the digital sector — with a report on that due in a year’s time.

The ACCC said that report should also consider how to implement sector-specific rules for adtech — and whether they need to form part of a broader regulatory scheme to address “common competition and consumer concerns” the watchdog said it has identified in digital platform markets.

“Many of the concerns we identified in the adtech supply chain are similar to concerns in other digital platform markets, such as online search, social media and app marketplaces,” added Sims. “These markets are also dominated by one or two key providers, which benefit from vertical integration, leading to significant competition concerns. In many cases these are compounded by a lack of transparency.”

Consultation on that piece of work will kick off in the first quarter of 2022 — with the ACCC saying it will “take into account” overseas legislative proposals to deal with these issues.

The EU presented its plan for grappling with Big Tech in the Digital Markets Act proposal at the end of last year, along with a broader set of rules for digital platforms (the Digital Services Act) that aims to dial up accountability more generally across Internet services, targeting areas like illegal content or the sale of dangerous goods online.

While in Germany — which is pushing ahead of any pan-EU measures — the FCO has opened a raft of procedures against tech giants (including Amazon, Apple, Facebook and Google), looking at whether their market power is significant enough for their businesses to fall under the scope of its new law. So the competition authority there could soon step in to curb market abuses.

Asia has also been taking an increasingly active stance on regulating tech giants. Earlier this month, for example, South Korea fined Google $177M for market abuse related to how it operates its smartphone operating system, Android. In China, meanwhile, the regime is turning its guns on all big tech — even homegrown companies.

And even on home turf, US tech giants — including Google and Facebook — are facing regulatory challenges on a number of fronts, including over how they operate app stores, and on issues like self-preferencing and predatory market consolidation.

The tl;dr is there is now a global consensus that big tech must be cut down to size. The only questions are over how that happens — see, for example, Australia already pushing ahead with legislation for a news media bargaining code that targets Facebook and Google — and how quickly digital markets can be rebooted.

Responding to the ACCC’s report, a Google spokesperson offered this statement:

“Google’s digital advertising technology services are delivering benefits for businesses and consumers — helping publishers fund their content, enabling small businesses to reach customers and grow, and protecting people online from bad ad practices.

Analysis by PwC Australia for Google Australia found that three quarters of Google’s adtech customers are Australian small and medium businesses — and three in four businesses surveyed observed important benefits from using Google’s services including cost savings, time savings and business growth, compared to other services.

PwC also estimated that the existence and use of Google’s advertising technology directly supports more than 15,000 full-time equivalent jobs and contributes $2.45 billion to the Australian economy annually.

As one of the many advertising technology providers in Australia, we will continue to work collaboratively with industry and regulators to support a healthy ads ecosystem.”

Google today announced a change to its online ads, which will now display new disclosures that allow web searchers to see not just who the advertiser is and why the ad was served to them, but also what other ads the advertiser has run with Google, starting with the most recent. The changes are part of Google’s broader revamp of its ads business in the face of increased regulatory scrutiny and a broader shift across the tech industry toward technologies that promote transparency and consumer privacy.

In this case, Google is building on last year’s launch of its advertiser identity verification program, which requires advertisers to disclose their personal information — including documents that prove they are who they say they are and those that confirm which country they operate from — as well as details about what they’re selling. These disclosures began rolling out last year to advertisers who buy ads from Google’s network. So far, Google says it’s begun verifying advertisers in over 90 countries worldwide.

Now it’s including expanded disclosures in its “About this Ad” product, too.

Within these new advertiser pages, anyone will be able to click to learn more about the advertiser and access a menu where they can view all the ads a specific advertiser has run over the past 30 days.

Google presents this as a useful tool from a consumer perspective, noting how a consumer who saw a product for sale, like a coat, could use the tool to learn more about a brand and its other products. But it’s clearly also useful as a means of identifying possible bad actors in the advertising ecosystem, as it would showcase a history of an advertiser’s ads in one public-facing destination.

Image Credits: Google

From here, users will also more easily be able to report an ad for violating Google policies around things like prohibited or restricted content, such as counterfeit goods, dangerous products, inappropriate content, abuse, violations of interest-based ad policies, ads that deceive the user, noncompliance with local election laws and regulations, and more.

The changes come at a time when Google’s approach to online advertising has been shifting. Google hints at its broader strategy today, saying the new ad disclosures “build on our efforts to create a clear and intuitive experience for users who engage with ads on Google products.” It also noted that over 30 million users interact with its ads transparency and control menus on a daily basis. For a feature that’s relatively buried in the product — you have to click the tiny “i” icon to access these menus — that speaks to Google’s massive global scale.

To date, Google has announced a number of significant moves in the ads space, including integrated ad-blocking in Chrome, new limits on political ad targeting and plans to move away from third-party cookies — though the cookie phase-out has since been delayed.

Facebook today provided an update on how Apple’s privacy changes have impacted its ad business. The company had already warned investors during its second quarter earnings that it expected to feel an even more significant impact in its ad targeting business by Q3. This morning, it reiterated that point, but also noted that it had been underreporting iOS web conversions by approximately 15%, which had led advertisers to believe the impact was even worse than they had expected.

According to Facebook’s announcement published to its business blog, this exact percentage could vary broadly among individual advertisers. But it said the real-world conversions, including things like sales and app installs, are likely higher than what advertisers are seeing when using Facebook’s analytics.
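To put the correction in concrete terms, here is a minimal sketch of the arithmetic, assuming “underreporting by approximately 15%” means the reported figures capture roughly 85% of real conversions (Facebook itself cautions the exact percentage varies by advertiser):

```typescript
// Illustrative back-of-envelope correction only — not Facebook's methodology.
// If reporting captures ~85% of real conversions, dividing the reported
// figure by 0.85 approximates the true count.
function estimateActualConversions(reported: number, underreportingRate = 0.15): number {
  return reported / (1 - underreportingRate);
}

// e.g. an advertiser shown 850 iOS web conversions may really have ~1,000
console.log(Math.round(estimateActualConversions(850))); // 1000
```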

Facebook’s stock has dropped by nearly 4% on this news, as of the time of writing.

This is not the first time Facebook has shared misleading metrics. In the past, though, the error ran the other way: it inflated its video ad metrics and didn’t act quickly to correct the problem, leading to a class-action lawsuit. In this case, the faulty metrics make Facebook look worse than it is, not better. The company noted it’s been hearing from its advertising community that advertisers are seeing a larger-than-planned impact on their ad investments on the network, raising concerns.

Facebook offered advertisers a few tips to help them better understand a campaign’s impact and performance in this new era. It suggested waiting a minimum of 72 hours, or the full length of the optimization window, before evaluating performance, rather than making assessments on a daily basis as before. It also said advertisers should analyze reporting at the campaign level when possible, as some estimated conversions are reported with a delay. And it suggested advertisers choose web events (like a purchase or sign-up) that are most aligned with their core business, among other things.

To improve its measurement, Facebook said it’s working on its conversion modeling, accelerating investments to address reporting gaps, launching new capabilities to track web conversions, and extending its ability to measure in-app conversions in apps that have already been installed. The company said it would work quickly to fix bugs, including one, previously disclosed to advertisers, that had recently led to underreporting of approximately 10%.

The company in August explained how it’s been working to adapt its personalized ads business in light of both Apple and Google’s privacy changes and the new regulatory landscape, but those efforts will take time, it said.

Outside of the ad tech updates themselves, Facebook has also been working on new products that would allow advertisers to better position themselves in front of consumers browsing Facebook’s apps. Just last week, for instance, it revamped its business tool lineup with the introduction of new features and expansions of smaller tests that would offer businesses more ways to be discovered. One such test in the U.S. would direct consumers to other businesses and topics directly underneath news feed posts. It also now allows businesses to add WhatsApp buttons to their Instagram profiles and create ads that send Instagram users to WhatsApp business chats.

Facebook has been warning advertisers for some time that Apple’s new privacy features, which allow mobile users to opt out of being tracked across their iOS apps, would cause issues for the way its ad targeting business typically operated. And it repeatedly argued that Apple’s changes would impact small businesses that relied on Facebook ads to reach their customers. When the changes went into effect, Facebook’s concerns were validated as studies found very few consumers are opting into tracking on iOS.

Facebook today is announcing the launch of new products and features for business owners, following the threat to its ad targeting business driven by Apple’s new privacy features, which now allow mobile users to opt out of being tracked across their iOS apps. The social networking giant has repeatedly argued that Apple’s changes would impact small businesses that relied on Facebook ads to reach their customers. But it was not successful in getting any of Apple’s changes halted. Instead, the market is shifting to a new era focused more on user privacy, where personalization and targeting are more of an opt-in experience. That’s required Facebook to address its business advertiser base in new ways.

As the ability to track consumers declines — very few consumers are opting into tracking, studies find — Facebook is rolling out new features that will allow businesses to better position themselves in front of relevant audiences. This includes updates that will let them reach customers, advertise to customers, chat with customers across Facebook apps, generate leads, acquire customers and more.

The company earlier this year began testing a way for customers to explore businesses from underneath News Feed posts by tapping on topics they’re interested in — like beauty, fitness and clothing — and explore content from other related businesses. The feature lets people come across new businesses they may also like, and would allow Facebook to build its own data set of users who like certain types of content. Over time, it could even turn the feature into an ad unit, where businesses could pay for higher placement.

But for the time being, Facebook will expand this feature to more users across the U.S., and launch it in Australia, Canada, Ireland, Malaysia, New Zealand, the Philippines, Singapore, South Africa and the U.K.

Image Credits: Facebook

Facebook is also making it easier for businesses to chat with customers. They’re already able to buy ads that encourage people to message them on Facebook’s various chat platforms — Messenger, Instagram Direct, or WhatsApp. Now, they’ll be able to choose all the messaging platforms where they’re available, and Facebook will default the chat app showcased in the ad based on where the conversation is most likely to happen.

Image Credits: Facebook

The company will tie WhatsApp to Instagram, as well, as part of this effort. Facebook explains that many businesses market themselves or run shops across Instagram, but rely on WhatsApp to communicate with customers and answer questions. So, Facebook will now allow businesses to add a WhatsApp click-to-chat button to their Instagram profiles.

This change, in particular, represents another move that ties Facebook’s separate apps more closely together, at a time when regulators are considering breaking up Facebook over antitrust concerns. Facebook has already interconnected Messenger and Instagram’s messaging services, which would make such a disassembly more complicated. And more recently, it’s begun integrating Messenger directly into Facebook’s platform itself.

Image Credits: Facebook

In a related change, soon businesses will be able to create ads that send users directly to WhatsApp from the Instagram app. (Facebook also already offers ads like this.)

Separately from this news, Facebook announced the launch of a new business directory on WhatsApp, allowing consumers to find shops and services on the chat platform, as well.

Another set of changes involves an update to Facebook Business Suite. Businesses will be able to manage emails through Inbox and send remarketing emails; use a new File Manager for creating, managing and posting content; and access a feature that will allow them to test different versions of a post to see which one is most effective.

Image Credits: Facebook

Other new products include tests of paid and organic lead generation tools on Instagram; quote requests on Messenger, where customers answer a few questions prior to their conversations; and a way for small businesses to access a bundle of tools to get started with Facebook ads, which includes a Facebook ad coupon along with free access to QuickBooks for 3 months or free access to Canva Pro for 3 months.

Image Credits: Facebook

Facebook will also begin testing something called “Work Accounts,” which will allow business owners to access their business products, like Business Manager, separately from their personal Facebook account. They’ll be able to manage these accounts on behalf of employees and use single-sign-on integrations.

Work Accounts will be tested through the remainder of the year with a small group of businesses, and Facebook says it expects to expand availability in 2022.

Other efforts it has in store include plans to incorporate more content from creators and local businesses and new features that let users control the content they see, but these changes were not detailed at this time.

Most of the products being announced are either rolling out today or will begin to show up soon.

In the latest quasi-throwback toward ‘do not track‘, the UK’s data protection chief has come out in favor of a browser- and/or device-level setting to allow Internet users to set “lasting” cookie preferences — suggesting this as a fix for the barrage of consent pop-ups that continues to infest websites in the region.

European web users digesting this development in an otherwise monotonously unchanging regulatory saga, should be forgiven — not only for any sense of déjà vu they may experience — but also for wondering if they haven’t been mocked/gaslit quite enough already where cookie consent is concerned.

Last month, UK digital minister Oliver Dowden took aim at what he dubbed an “endless” parade of cookie pop-ups — suggesting the government is eyeing watering down consent requirements around web tracking as ministers consider how to diverge from European Union data protection standards, post-Brexit. (He’s slated to present the full sweep of the government’s data ‘reform’ plans later this month so watch this space.)

Today the UK’s outgoing information commissioner, Elizabeth Denham, stepped into the fray to urge her counterparts in G7 countries to knock heads together and coalesce around the idea of letting web users express generic privacy preferences at the browser/app/device level, rather than having to do it through pop-ups every time they visit a website.

In a statement announcing “an idea” she will present this week during a virtual meeting of fellow G7 data protection and privacy authorities — less pithily described in the press release as being “on how to improve the current cookie consent mechanism, making web browsing smoother and more business friendly while better protecting personal data” — Denham said: “I often hear people say they are tired of having to engage with so many cookie pop-ups. That fatigue is leading to people giving more personal data than they would like.

“The cookie mechanism is also far from ideal for businesses and other organisations running websites, as it is costly and it can lead to poor user experience. While I expect businesses to comply with current laws, my office is encouraging international collaboration to bring practical solutions in this area.”

“There are nearly two billion websites out there taking account of the world’s privacy preferences. No single country can tackle this issue alone. That is why I am calling on my G7 colleagues to use our convening power. Together we can engage with technology firms and standards organisations to develop a coordinated approach to this challenge,” she added.

Contacted for more on this “idea”, an ICO spokeswoman reshuffled the words thusly: “Instead of trying to effect change through nearly 2 billion websites, the idea is that legislators and regulators could shift their attention to the browsers, applications and devices through which users access the web.

“In place of click-through consent at a website level, users could express lasting, generic privacy preferences through browsers, software applications and device settings – enabling them to set and update preferences at a frequency of their choosing rather than on each website they visit.”

Of course a browser-baked ‘Do not track’ (DNT) signal is not a new idea. It’s around a decade old at this point. Indeed, it could be called the idea that can’t die because it’s never truly lived — as earlier attempts at embedding user privacy preferences into browser settings were scuppered by lack of industry support.

However the approach Denham is advocating, vis-a-vis “lasting” preferences, may in fact be rather different to DNT — given her call for fellow regulators to engage with the tech industry, and its “standards organizations”, and come up with “practical” and “business friendly” solutions to the regional Internet’s cookie pop-up problem.

It’s not clear what consensus — practical or, er, simply pro-industry — might result from this call. If anything.

Indeed, today’s press release may be nothing more than Denham trying to raise her own profile since she’s on the cusp of stepping out of the information commissioner’s chair. (Never waste a good international networking opportunity and all that — her counterparts in the US, Canada, Japan, France, Germany and Italy are scheduled for a virtual natter today and tomorrow where she implies she’ll try to engage them with her big idea).

Her UK replacement, meanwhile, is already lined up. So anything Denham personally champions right now, at the end of her ICO chapter, may have a very brief shelf life — unless she’s set to parachute into a comparable role at another G7 caliber data protection authority.

Nor is Denham the first person to make a revived pitch for a rethink on cookie consent mechanisms — even in recent years.

Last October, for example, a US-centric tech-publisher coalition came out with what they called Global Privacy Control (GPC) — aiming to build momentum for a browser-level pro-privacy signal to stop the sale of personal data, geared toward California’s Consumer Privacy Act (CCPA), though pitched as something that could have wider utility for Internet users.

By January this year they announced 40M+ users were making use of a browser or extension that supports GPC — along with a clutch of big name publishers signed up to honor it. But it’s fair to say its global impact so far remains limited. 
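For a sense of what honoring such a signal involves on a website’s side: GPC is exposed to pages as a navigator.globalPrivacyControl property and sent over HTTP as a “Sec-GPC: 1” request header, while the older DNT signal lives on as navigator.doNotTrack. Here is a minimal client-side sketch (neither property is in TypeScript’s default DOM typings, hence the widened type):

```typescript
// Sketch: detecting browser-level privacy signals on the client.
// navigator.globalPrivacyControl comes from the GPC proposal;
// navigator.doNotTrack is the legacy (and widely ignored) DNT signal.
function userHasOptedOut(): boolean {
  const nav = navigator as Navigator & {
    globalPrivacyControl?: boolean;
    doNotTrack?: string | null;
  };
  const gpc = nav.globalPrivacyControl === true; // "do not sell or share"
  const dnt = nav.doNotTrack === "1";            // legacy Do Not Track
  return gpc || dnt;
}

if (userHasOptedOut()) {
  // e.g. skip loading third-party trackers, suppress data-sale flows
  console.log("Privacy signal detected; disabling tracking.");
}
```

Servers can perform the equivalent check on the Sec-GPC request header before wiring up any server-side data sharing.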

More recently, European privacy group noyb published a technical proposal for a European-centric automated browser-level signal that would let regional users configure advanced consent choices — enabling the more granular controls it said would be needed to fully mesh with the EU’s more comprehensive (vs CCPA) legal framework around data protection.

The proposal, for which noyb worked with the Sustainable Computing Lab at the Vienna University of Economics and Business, is called Advanced Data Protection Control (ADPC). And noyb has called on the EU to legislate for such a mechanism — suggesting there’s a window of opportunity as lawmakers there are also keen to find ways to reduce cookie fatigue (a stated aim for the still-in-train reform of the ePrivacy rules, for example).

So there are some concrete examples of what practical, less fatiguing yet still pro-privacy consent mechanisms might look like to lend a little more color to Denham’s ‘idea’ — although her remarks today don’t reference any such existing mechanisms or proposals.

(When we asked the ICO for more details on what she’s advocating for, its spokeswoman didn’t cite any specific technical proposals or implementations, historical or contemporary, either, saying only: “By working together, the G7 data protection authorities could have an outsized impact in stimulating the development of technological solutions to the cookie consent problem.”)

So Denham’s call to the G7 does seem rather low on substance vs profile-raising noise.

In any case, the really big elephant in the room here is the lack of enforcement around cookie consent breaches — including by the ICO.

Add to that, there’s the now very pressing question of how exactly the UK will ‘reform’ domestic law in this area (post-Brexit) — which makes the timing of Denham’s call look, well, interestingly opportune. (And difficult to interpret as anything other than opportunistically opaque at this point.)

The adtech industry will of course be watching developments in the UK with interest — and would surely be cheering from the rooftops if domestic data protection ‘reform’ results in amendments to UK rules that allow the vast majority of websites to avoid having to ask Brits for permission to process their personal data, say by opting them into tracking by default (under the guise of ‘fixing’ cookie friction and cookie fatigue for them).

That would certainly be mission accomplished after all these years of cookie-fatigue-generating-cookie-consent-non-compliance by surveillance capitalism’s industrial data complex.

It’s not yet clear which way the UK government will jump — but eyebrows should be raised at the ICO writing today that it expects compliance with (current) UK law, when it has so roundly failed to tackle the adtech industry’s role in cynically sicking up said cookie fatigue, having taken no action against such systemic breaches.

The bald fact is that the ICO has — for years — avoided tackling adtech abuse of data protection, despite acknowledging publicly that the sector is wildly out of control.

Instead, it has opted for a cringing ‘process of engagement’ (read: appeasement) that has condemned UK Internet users to cookie pop-up hell.

This is why the regulator is being sued for inaction — after it closed a long-standing complaint against the security abuse of people’s data in real-time bidding ad auctions with nothing to show for it… So, yes, you can be forgiven for feeling gaslit by Denham’s call for action on cookie fatigue following the ICO’s repeat inaction on the causes of cookie fatigue…

Not that the ICO is alone on that front, however.

There has been a fairly widespread failure by EU regulators to tackle systematic abuse of the bloc’s data protection rules by the adtech sector — with a number of complaints (such as this one against IAB Europe’s self-styled ‘transparency and consent framework’) still working, painstakingly, through the various labyrinthine regulatory processes.

France’s CNIL has probably been the most active in this area — last year slapping Amazon and Google with fines of $42M and $120M for dropping tracking cookies without consent, for example. (And before you accuse CNIL of being ‘anti-American’, it has also gone after domestic adtech.)

But elsewhere — notably Ireland, where many adtech giants are regionally headquartered — the lack of enforcement against the sector has allowed for cynical, manipulative and/or meaningless consent pop-ups to proliferate as the dysfunctional ‘norm’, while investigations have failed to progress and EU citizens have been forced to become accustomed, not to regulatory closure (or indeed rapture), but to an existentially endless consent experience that’s now being (re)branded as ‘cookie fatigue’.

Yes, even with the EU’s General Data Protection Regulation (GDPR) coming into application in 2018 and beefing up (in theory) consent standards.

This is why the privacy campaign group noyb is now lodging scores of complaints against cookie consent breaches — to try to force EU regulators to actually enforce the law in this area, even as it also finds time to put up a practical technical proposal that could help shrink cookie fatigue without undermining data protection standards. 

It’s a shining example of action that has yet to inspire the lion’s share of the EU’s actual regulators to act on cookies. The tl;dr is that EU citizens are still waiting for the cookie consent reckoning — even if there is now a bit of high level talk about the need for ‘something to be done’ about all these tedious pop-ups.

The problem is that while GDPR certainly cranked up the legal risk on paper, without proper enforcement it’s just a paper tiger. And the pushing around of lots of paper is very tedious, clearly. 

Most cookie pop-ups you’ll see in the EU are thus essentially privacy theatre; at the very least they’re unnecessarily irritating because they create ongoing friction for web users who must constantly respond to nags for their data (typically to repeatedly try to deny access if they can actually find a ‘reject all’ setting).

But — even worse — many of these pervasive pop-ups are actively undermining the law (as a number of studies have shown) because the vast majority do not meet the legal standard for consent.

So the cookie consent/fatigue narrative is actually a story of faux compliance, enabled by an enforcement vacuum that’s now also encouraging the watering down of privacy standards as a result of so much unpunished flouting of the law.

There is a lesson here, surely.

‘Faux consent’ pop-ups that you can easily stumble across when surfing the ‘ad-supported’ Internet in Europe include those failing to provide users with clear information about how their data will be used; or not offering people a free choice to reject tracking without being penalized (such as with no/limited access to the content they’re trying to access), or at least giving the impression that accepting is a requirement to access said content (dark pattern!); and/or otherwise manipulating a person’s choice by making it super simple to accept tracking and far, far, far more tedious to deny.

You can also still sometimes find cookie notices that don’t offer users any choice at all — and just pop up to inform that ‘by continuing to browse you consent to your data being processed’ — which, unless the cookies in question are literally essential for provision of the webpage, is basically illegal. (Europe’s top court made it abundantly clear in 2019 that active consent is a requirement for non-essential cookies.)

Nonetheless, to the untrained eye — and sadly there are a lot of them where cookie consent notices are concerned — it can look like it’s Europe’s data protection law that’s the ass because it seemingly demands all these meaningless ‘consent’ pop-ups, which just gloss over an ongoing background data grab anyway.

The truth is regulators should have slapped down these manipulative dark patterns years ago.

The problem now is that regulatory failure is encouraging political posturing — and, in a twisting double-back throw by the ICO, regulatory thrusting around the idea that some newfangled mechanism is what’s really needed to remove all this universally inconvenient ‘friction’.

An idea like noyb’s ADPC does indeed look very useful in ironing out the widespread operational wrinkles wrapping the EU’s cookie consent rules. But when it’s the ICO suggesting a quick fix after the regulatory authority has failed so spectacularly over the long duration of complaints around this issue you’ll have to forgive us for being sceptical.

In such a context the notion of ‘cookie fatigue’ looks like it’s being suspiciously trumped up; fixed on as a convenient scapegoat to rechannel consumer frustration with hated online tracking toward high privacy standards — and away from the commercial data-pipes that demand all these intrusive, tedious cookie pop-ups in the first place — whilst neatly aligning with the UK government’s post-Brexit political priorities on ‘data’.

Worse still: The whole farcical consent pantomime — which the adtech industry has aggressively engaged in to try to sustain a privacy-hostile business model in spite of beefed up European privacy laws — could be set to end in genuine tragedy for user rights if standards end up being slashed to appease the law mockers.

The target of regulatory ire and political anger should really be the systematic law-breaking that’s held back privacy-respecting innovation and non-tracking business models — by making it harder for businesses that don’t abuse people’s data to compete.

Governments and regulators should not be trying to dismantle the principle of consent itself. Yet — at least in the UK — that does now look horribly possible.

Laws like GDPR set high standards for consent which — if they were but robustly enforced — could lead to reform of highly problematic practices like behavioral advertising combined with the out-of-control scale of programmatic advertising.

Indeed, we should already be seeing privacy-respecting forms of advertising being the norm, not the alternative — free to scale.

Instead, thanks to widespread inaction against systematic adtech breaches, there has been little incentive for publishers to reform bad practices and end the irritating ‘consent charade’ — which keeps cookie pop-ups mushrooming forth, oftentimes with ridiculously lengthy lists of data-sharing ‘partners’ (i.e. if you do actually click through the dark patterns to try to understand what is this claimed ‘choice’ you’re being offered).

As well as wasting web users’ time on a criminal scale, this now raises the prospect of attention-seeking, politically charged regulators deciding that all this ‘friction’ justifies giving data-mining giants carte blanche to torch user rights — if the intention is to fire up the G7 to send a collective invitation to the tech industry to come up with “practical” alternatives to asking people for their consent to track them — and all because authorities like the ICO have been too risk-averse to actually defend users’ rights in the first place.

Dowden’s remarks last month suggest the UK government may be preparing to use cookie consent fatigue as convenient cover for watering down domestic data protection standards — at least if it can get away with the switcheroo.

Nothing in the ICO’s statement today suggests it would stand in the way of such a move.

Now that the UK is outside the EU, the UK government has said it believes it has an opportunity to deregulate domestic data protection — although it may find there are legal consequences for domestic businesses if it diverges too far from EU standards.

Denham’s call to the G7 naturally includes a few EU countries (the biggest economies in the bloc) but by targeting this group she’s also seeking to engage regulators further afield — in jurisdictions that currently lack a comprehensive data protection framework. So if the UK moves, cloaked in rhetoric of ‘Global Britain’, to water down its (EU-based) high domestic data protection standards it will be placing downward pressure on international aspirations in this area — as a counterweight to the EU’s geopolitical ambitions to drive global standards up to its level.

The risk, then, is a race to the bottom on privacy standards among Western democracies — at a time when awareness about the importance of online privacy, data protection and information security has actually never been higher.

Furthermore, any UK move to weaken data protection also risks putting pressure on the EU’s own high standards in this area — as the regional trajectory would be down not up. And that could, ultimately, give succour to forces inside the EU that lobby against its commitment to a charter of fundamental rights — by arguing such standards undermine the global competitiveness of European businesses.

So while cookies themselves — or indeed ‘cookie fatigue’ — may seem an irritatingly small concern, the stakes attached to this tug of war around people’s rights over what can happen to their personal data are very high indeed.

TikTok is making it easier for brands and agencies to work with the influencers using its service. The company is rolling out a new “TikTok Creator Marketplace API,” which allows marketing companies to integrate more directly with TikTok’s Creator Marketplace, the video app’s in-house influencer marketing platform.

On the Creator Marketplace website, launched in late 2019, marketers have been able to discover top TikTok personalities for their brand campaigns, then create and manage those campaigns and track their performance.

The new API, meanwhile, allows partnered marketing companies to access TikTok’s first-party data about audience demographics, growth trends, best-performing videos, and real-time campaign reporting (e.g. views, likes, shares, comments, engagement, etc.) for the first time.

They can then bring this data back into their own platforms, to augment the insights they’re already providing to their own customer base.
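Neither TikTok’s endpoints nor its field names are public in this piece, so the sketch below is purely illustrative: the base URL, path and response shape are all invented stand-ins, and the real Creator Marketplace API is gated to approved partners. It shows the general shape of a partner-side integration that pulls campaign metrics into a marketer’s own dashboard:

```typescript
// Hypothetical sketch of a partner-side integration. The base URL, endpoint
// path and response fields below are invented for illustration; the real
// Creator Marketplace API is only available to TikTok-approved partners.
const BASE_URL = "https://partner-api.example.com"; // placeholder, not a real TikTok URL

interface CampaignMetrics {
  views: number;
  likes: number;
  shares: number;
  comments: number;
  engagementRate: number;
}

async function fetchCampaignMetrics(campaignId: string, token: string): Promise<CampaignMetrics> {
  const res = await fetch(`${BASE_URL}/campaigns/${encodeURIComponent(campaignId)}/metrics`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Metrics request failed: ${res.status}`);
  return (await res.json()) as CampaignMetrics;
}

// A marketing platform could poll this periodically and merge the numbers into
// the campaign dashboards it already shows its own customers.
```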

TikTok is not officially announcing the API until later in September, but it is allowing its alpha partners to discuss their early work.

One such partner is Captiv8, which tested the API with an NRF top 50 retailer on one of its first TikTok campaigns. The retailer wanted to discover a diverse and inclusive group of TikTok creators to partner with on a new collaboration, and wanted help with launching its own TikTok channel. Captiv8 says the branded content received nearly 10 million views, and the campaign resulted in a “significant increase” in several key metrics, which performed above the Nielsen average. This included familiarity (+4% above average), affinity (+6%), purchase intent (+7%) and recommendation intent (+9%).

Image Credits: TikTok Creator Marketplace website

Captiv8 is now working with TikTok’s API to pull in audience demographics, to centralize influencer offers and activations, and to provide tools to boost branded content and monitor campaign performance. On that last front, the API allows the company to pull in real-time metrics from the TikTok Creator Marketplace — which means Captiv8 is now one of only a handful of third-party companies with access to TikTok first-party data.

Another early alpha partner is Influential, which said it’s also leveraging the API to access first-party insights on audience demographics, growth trends, best-performing videos and more, to help its customer base of Fortune 1000 brands identify the right creators for both native and paid advertising campaigns.

One partner it worked with was DoorDash, which launched multiple campaigns on TikTok with Influential’s help. It’s also planning to work with McDonald’s USA on several new campaigns that will run this year, including those focused on the chain’s new Crispy Chicken Sandwich and the return of Spicy McNuggets.

Other early alpha partners include Whalar and INCA. The latter is currently only available in the U.K., and its integration stems from the larger TikTok global partnership with WPP, announced in February. That deal provided WPP agencies with early access to new advertising products, marketing API integrations and new AR offerings, among other things.

Creator marketplaces are now common on social media platforms with large influencer communities, as this has become a standard way to advertise to online consumers, particularly the younger generation. Facebook today offers its Brand Collabs Manager, for both Facebook and Instagram; YouTube has BrandConnect; and Snapchat recently announced a marketplace to connect brands with Lens creators. These types of in-house platforms make it easier for marketers to work with the wider influencer community by offering trusted data on metrics that matter to brands’ own ROI, rather than relying on self-reported data from influencers or on data they have to manually collect themselves. And as campaigns run, marketers can compare how well their partnered creators drive results to inform future collaborations.

TikTok isn’t making a formal announcement about its new API at this time, telling TechCrunch the technology is still in pilot testing phases for the time being.

“Creators are the lifeblood of our platform, and we’re constantly thinking of new ways to make it easy for them to connect and collaborate with brands. We’re thrilled to be integrating with an elite group of trusted partners to help brands discover and work with diverse creators who can share their message in an authentic way,” said Melissa Yang, TikTok’s Head of Ecosystem Partnerships, in a statement provided to select marketing company partners.

As it gears up to expand access to younger users, Instagram this morning announced a series of updates designed to make its app a safer place for online teens. The company says it will now default users to private accounts at sign-up if they’re under the age of 16  — or under 18 in certain locales, including in the E.U. It will also push existing users under 16 to switch their account to private, if they have not already done so. In addition, Instagram will roll out new technology aimed at reducing unwanted contact from adults — like those who have already been blocked or reported by other teens — and it will change how advertisers can reach its teenage audience.

The most visible change for younger users will be the shift to private accounts.

Historically, when users signed up for a new Instagram account, they were asked to choose between a public or private account. But Instagram says its research found that 8 out of 10 young people selected the “private” option during setup, so it will now make this the default for those under the age of 16.

Image Credits: Instagram

It won’t, however, force teens to remain private. They can switch to public accounts at any time, including during signup. Those with existing public accounts will be alerted to the benefits of going private and be instructed on how to make the change through an in-app notification, but Instagram will not force them to go private, it says.

This change follows a similar move by rival platform TikTok, which this January announced it would update the private settings and defaults for users under the age of 18. In TikTok’s case, it changed the accounts for users ages 13 to 15 to private by default but also tightened other controls related to how young teens use the app — including with comments, video downloads, and other TikTok features, like Duets and Stitches.

Instagram isn’t going so far as to restrict other settings beyond suggesting teens’ default account type, but it is taking action to address some of the problems that result from having adults participate on the same app that minors use.

The company says it will use new technology to identify accounts that have shown “potentially suspicious behavior,” including those that have been recently blocked or reported by other young teens. This is only one of many signals Instagram uses to identify suspicious behavior, but the company says it won’t publicize the others, as it doesn’t want people to be able to game its system.

Once identified as “potentially suspicious,” Instagram will then restrict these adults’ accounts from being able to interact with young people’s accounts.

For starters, Instagram will no longer show young people’s accounts in Explore, Reels or the “Accounts Suggested For You” feature to these potentially suspicious adults. If such an adult instead locates a young person’s account by way of a search, they won’t be able to follow them. And they won’t be able to see comments from young people on other people’s posts or be able to leave comments of their own on young people’s posts.

(Any teens planning to report and block their parents probably won’t trigger the algorithm, Instagram tells us, as it uses a combination of signals to trigger its restrictions.)

These new restrictions build on the technology Instagram introduced earlier this year, which restricted the ability for adults to contact teens who didn’t already follow them. This made it possible for teens to still interact with their family and family friends, while limiting unwanted contact from adults they didn’t know.

Cutting off problematic adults from young teens’ content like this actually goes further than what’s available on other social networks, like TikTok or YouTube, where there are often disturbing comments left on videos of young people — in many cases, girls who are being sexualized and harassed by adult men. YouTube’s comments section was even once home to a pedophile ring, which pushed YouTube to entirely disable comments on videos featuring minor children.

Instagram isn’t blocking the comments section in full; rather, it’s selectively seeking out the bad actors, then making content created by minors much harder for them to find in the first place.

The other major change rolling out in the next few weeks impacts advertisers looking to target ads to teens under 18 (or older in certain countries).

Image Credits: Instagram

Previously available targeting options — like those based on teens’ interests or activity on other apps or websites — will no longer be available to advertisers. Instead, advertisers will only be able to target based on age, gender and location. This will go into effect across Instagram, Facebook and Messenger.

The company says the decision was influenced by recommendations from youth advocates, who argued that younger people may not be as well-equipped to make decisions about opting out of interest-based advertising.

In reality, however, Facebook’s billion-dollar interest-based ad network has been under attack by regulators and competitors alike, and the company has been working to diversify its revenue beyond ads to include things like e-commerce with the expectation that potential changes to its business are around the corner.

In a recent iOS update, for example, Apple restricted the ability for Facebook to collect data from third-party apps by asking users if they wanted to opt out of being tracked. Most people said “no” to tracking. Meanwhile, attacks on the personalized ad industry have included those from advocacy groups who have argued that tech companies should turn off personalized ads for those under 18 — not just the under-13 crowd, who are already protected under current children’s privacy laws.

At the same time, Instagram has been toying with the idea of opening its app up to kids under the age of 13, and today’s series of changes could help to demonstrate to regulators that it’s moving forward with the safety of young people in mind, or so the company hopes.

On this front, Instagram says it has expanded its “Youth Advisors” group to include new experts like Jutta Croll at Stiftung Digitale Chancen, Pattie Gonsalves at Sangath and It’s Okay To Talk, Vicki Shotbolt at ParentZone UK, Alfiee M. Breland-Noble at AAKOMA Project, Rachel Rodgers at Northeastern University, Janis Whitlock at Cornell University, and Amelia Vance at the Future of Privacy Forum.

The group also includes the Family Online Safety Institute, Digital Wellness Lab, MediaSmarts, Project Rockit and the Cyberbullying Research Center.

It’s also working with lawmakers on age verification and parental consent standards that it expects to talk more about in the months to come. In a related announcement, Instagram said it’s using A.I. technology that estimates people’s ages. It can look for signals like people wishing someone a “happy birthday” or “happy quinceañera,” which can help narrow down someone’s age, for instance. This technology is already being used to stop some adults from interacting with young people’s accounts, including the new restrictions announced today.
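As a toy illustration of that kind of signal extraction (emphatically not Instagram’s actual model, which it has not detailed), scanning comment text for age-revealing phrases might look like this:

```typescript
// Toy example only: Instagram has not published how its age-estimation AI works.
// This simply scans comments for phrases that hint at a milestone birthday.
const AGE_SIGNALS: Array<[RegExp, string]> = [
  [/happy\s+quincea[ñn]era/i, "likely turning 15"],
  [/happy\s+sweet\s+sixteen/i, "likely turning 16"],
  [/happy\s+18th(\s+birthday)?/i, "likely turning 18"],
];

function ageHintsFromComments(comments: string[]): string[] {
  return comments.flatMap((comment) =>
    AGE_SIGNALS.filter(([pattern]) => pattern.test(comment)).map(([, hint]) => hint)
  );
}

console.log(ageHintsFromComments(["Happy quinceañera, Maria!"]));
// -> ["likely turning 15"]
```

A production system would obviously combine many such weak signals rather than rely on any single phrase.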

Outbrain, an adtech company that provides clickbait ads below news articles, has raised $200 million in funding; it didn’t disclose the valuation for the deal. The investor is The Baupost Group, a Boston-based hedge fund. Outbrain filed for an initial public offering just last week, and today’s funding round should be the last traditional private investment round before the company goes public.

If you’re not familiar with Outbrain, you may have seen its content recommendation widgets on popular news websites, such as CNN, Le Monde and The Washington Post. They mostly feature sponsored links that lead to third-party websites.

“We are excited to announce this investment from The Baupost Group, who share our vision and commitment for our business, our team and our future prospects,” co-CEO David Kostman said in a statement.

Outbrain is often compared with its rival Taboola. The two companies planned to merge at one point but had to call off the deal. Taboola has already gone public after merging with a SPAC — a special purpose acquisition company — and its shares started trading last week.

In its IPO filing, Outbrain reported $767 million in revenue for 2020 and $228 million in revenue for the first quarter of 2021 alone. In 2020, Outbrain managed to generate $4.4 million in net income. During Q1 2021, the company reported $10.7 million in net income.

“We proudly lead the recommendation space we created. We have bold plans for the future to continue delivering critical innovation to our premium media partners worldwide and expanding our powerful open web global advertising platform,” Outbrain co-CEO Yaron Galai said in a statement.

The advertising market has recovered from the global health pandemic, and there have been plenty of initial public offerings during the first half of 2021. Everything seems to be lining up for Taboola and Outbrain, which means it’s time to reach the next level and become public companies.

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab-bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory.

And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted.

The suspicion remains that it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads.

That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform.

The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of ‘commercial secrecy’.

But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.

To fix YouTube’s algorithm, Mozilla is calling for “common sense transparency laws, better oversight, and consumer pressure” — suggesting a combination of laws that mandate transparency into AI systems; protect independent researchers so they can interrogate algorithmic impacts; and empower platform users with robust controls (such as the ability to opt out of “personalized” recommendations) are what’s needed to rein in the worst excesses of the YouTube AI.

Regrets, YouTube users have had a few…

To gather data on the specific recommendations being made to YouTube users — information that Google does not routinely make available to external researchers — Mozilla took a crowdsourced approach, via a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching.

The tool can generate a report which includes details of the videos the user had been recommended, as well as earlier video views, to help build up a picture of how YouTube’s recommender system was functioning. (Or, well, ‘dysfunctioning’ as the case may be.)

The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of ‘regrets’, including videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons, per the report — with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams.

A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk into people’s eyeballs.

The research also found that recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves.

Mozilla even found “several” instances when the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched. So a clear fail.

A very notable finding was that regrettable content appears to be a greater problem for YouTube users in non-English speaking countries: Mozilla found YouTube regrets were 60% higher in countries without English as a primary language — with Brazil, Germany and France generating what the report said were “particularly high” levels of regretful YouTubing. (And none of the three can be classed as minor international markets.)

Pandemic-related regrets were also especially prevalent in non-English speaking countries, per the report — a worrying detail to read in the middle of an ongoing global health crisis.

The crowdsourced study — which Mozilla bills as the largest-ever into YouTube’s recommender algorithm — drew on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers — from 91 countries — who submitted the reports flagging 3,362 regrettable videos that the report draws on directly.

These reports were generated between July 2020 and May 2021.

What exactly does Mozilla mean by a YouTube “regret”? It says this is a crowdsourced concept based on users self-reporting bad experiences on YouTube, so it’s a subjective measure. But Mozilla argues that taking this “people-powered” approach centres the lived experiences of Internet users and is therefore helpful in foregrounding the experiences of marginalised and/or vulnerable people and communities (vs, for example, applying only a narrower, legal definition of ‘harm’).

“We wanted to interrogate and explore further [people’s experiences of falling down the YouTube ‘rabbit hole’] and frankly confirm some of these stories — but then also just understand further what are some of the trends that emerged in that,” explained Brandi Geurkink, Mozilla’s senior manager of advocacy and the lead researcher for the project, discussing the aims of the research.

“My main feeling in doing this work was being — I guess — shocked that some of what we had expected to be the case was confirmed… It’s still a limited study in terms of the number of people involved and the methodology that we used but — even with that — it was quite simple; the data just showed that some of what we thought was confirmed.

“Things like the algorithm recommending content essentially accidentally, that it later is like ‘oops, this actually violates our policies; we shouldn’t have actively suggested that to people’… And things like the non-English-speaking user base having worse experiences — these are things you hear discussed a lot anecdotally and activists have raised these issues. But I was just like — oh wow, it’s actually coming out really clearly in our data.”

Mozilla says the crowdsourced research uncovered “numerous examples” of reported content that would likely or actually breach YouTube’s community guidelines — such as hate speech or debunked political and scientific misinformation.

But it also says the reports flagged a lot of what YouTube “may” consider ‘borderline content’. Aka, stuff that’s harder to categorize — junk/low quality videos that perhaps toe the acceptability line and may therefore be trickier for the platform’s algorithmic moderation systems to respond to (and thus content that may also evade a takedown for longer).

However, a related issue the report flags is that YouTube doesn’t provide a definition for borderline content, despite discussing the category in its own guidelines. That, says Mozilla, makes it impossible to verify the researchers’ assumption that much of what the volunteers were reporting as ‘regretful’ would likely fall into YouTube’s own ‘borderline content’ category.

The challenge of independently studying the societal effects of Google’s tech and processes is a running theme underlying the research. But Mozilla’s report also accuses the tech giant of meeting YouTube criticism with “inertia and opacity”.

It’s not alone there either. Critics have long accused YouTube’s ad giant parent of profiting off of engagement generated by hateful outrage and harmful disinformation — allowing “AI-generated bubbles of hate” to surface ever more baleful (and thus stickily engaging) stuff, exposing unsuspecting YouTube users to increasingly unpleasant and extremist views, even as Google gets to shield its low grade content business under a user-generated content umbrella.

Indeed, ‘falling down the YouTube rabbit hole’ has become a well-trodden metaphor for discussing the process of unsuspecting Internet users being dragged into the darkest and nastiest corners of the web. This user reprogramming takes place in broad daylight, via AI-generated suggestions that yell at people to follow the conspiracy breadcrumb trail right from inside a mainstream web platform.

Back in 2017 — when concern was riding high about online terrorism and the proliferation of ISIS content on social media — politicians in Europe were accusing YouTube’s algorithm of exactly this: Automating radicalization.

However it’s remained difficult to get hard data to back up anecdotal reports of individual YouTube users being ‘radicalized’ after viewing hours of extremist content or conspiracy theory junk on Google’s platform.

Ex-YouTube insider — Guillaume Chaslot — is one notable critic who’s sought to pull back the curtain shielding the proprietary tech from deeper scrutiny, via his algotransparency project.

Mozilla’s crowdsourced research adds to those efforts by sketching a broad — and broadly problematic — picture of the YouTube AI by collating reports of bad experiences from users themselves.

Of course externally sampling platform-level data that only Google holds in full (at its true depth and dimension) can’t be the whole picture — and self-reporting, in particular, may introduce its own set of biases into Mozilla’s data-set. But the problem of effectively studying big tech’s blackboxes is a key point accompanying the research, as Mozilla advocates for proper oversight of platform power.

In a series of recommendations the report calls for “robust transparency, scrutiny, and giving people control of recommendation algorithms” — arguing that without proper oversight of the platform, YouTube will continue to be harmful by mindlessly exposing people to damaging and braindead content.

The problematic lack of transparency around so much of how YouTube functions can be picked up from other details in the report. For example, Mozilla found that around 9% of recommended regrets (or almost 200 videos) had since been taken down — for a variety of not always clear reasons (sometimes, presumably, after the content was reported and judged by YouTube to have violated its guidelines).

Collectively, just this subset of videos had had a total of 160M views prior to being removed for whatever reason.

In other findings, the research showed that regrettable videos tend to perform well on the platform.

A particularly stark metric is that reported regrets acquired a full 70% more views per day than other videos watched by the volunteers on the platform — lending weight to the argument that YouTube’s engagement-optimising algorithms disproportionately select for triggering/misinforming content over quality (thoughtful/informing) stuff simply because it brings in the clicks.
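To illustrate the underlying arithmetic (the numbers here are invented — Mozilla’s report doesn’t publish per-video figures), a views-per-day comparison simply normalises each video’s view count by its time on the platform:

```python
def views_per_day(total_views: int, days_online: int) -> float:
    # Normalise a raw view count by how long the video has been live
    return total_views / max(days_online, 1)

# Hypothetical figures: a reported regret vs. a typical video watched by volunteers
regret_rate = views_per_day(850_000, 100)   # 8,500 views/day
typical_rate = views_per_day(500_000, 100)  # 5,000 views/day
print(f"{(regret_rate / typical_rate - 1) * 100:.0f}% more views per day")  # -> 70%
```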

While that might be great for Google’s ad business, it’s clearly a net negative for democratic societies which value truthful information over nonsense; genuine public debate over artificial/amplified binaries; and constructive civic cohesion over divisive tribalism.

But without legally-enforced transparency requirements on ad platforms — and, most likely, regulatory oversight and enforcement that features audit powers — these tech giants are going to continue to be incentivized to turn a blind eye and cash in at society’s expense.

Mozilla’s report also underlines instances where YouTube’s algorithms are clearly driven by a logic that’s unrelated to the content itself — with a finding that in 43.6% of the cases where the researchers had data about the videos a participant had watched before a reported regret, the recommendation was completely unrelated to the previous video.

The report gives examples of some of these logic-defying AI content pivots/leaps/pitfalls — such as a person watching videos about the U.S. military and then being recommended a misogynistic video entitled ‘Man humiliates feminist in viral video.’

In another instance, a person watched a video about software rights and was then recommended a video about gun rights. So two rights make yet another wrong YouTube recommendation right there.

In a third example, a person watched an Art Garfunkel music video and was then recommended a political video entitled ‘Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.’

To which the only sane response is, umm what???

YouTube’s output in such instances seems — at best — some sort of ‘AI brain fart’.

A generous interpretation might be that the algorithm got stupidly confused. Albeit, in a number of the examples cited in the report, the confusion is leading YouTube users toward content with a right-leaning political bias. Which seems, well, curious.

Asked what she views as the most concerning findings, Mozilla’s Geurkink told TechCrunch: “One is how clearly misinformation emerged as a dominant problem on the platform. I think that’s something, based on our work talking to Mozilla supporters and people from all around the world, that is a really obvious thing that people are concerned about online. So to see that that is what is emerging as the biggest problem with the YouTube algorithm is really concerning to me.”

She also highlighted the problem of the recommendations being worse for non-English-speaking users as another major concern, suggesting that the global inequality in users’ experiences of platform impacts “doesn’t get enough attention” — even when such issues do get discussed.

Responding to Mozilla’s report, a Google spokesperson sent us this statement:

“The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone. Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%.”

Google also claimed it welcomes research into YouTube — and suggested it’s exploring options to bring in external researchers to study the platform, without offering anything concrete on that front.

At the same time, its response queried how Mozilla’s study defines ‘regrettable’ content — and went on to claim that its own user surveys generally show users are satisfied with the content that YouTube recommends.

In further non-quotable remarks, Google noted that earlier this year it started disclosing a ‘violative view rate‘ (VVR) metric for YouTube — revealing for the first time the percentage of views on YouTube that come from content which violates its policies.

The most recent VVR stands at 0.16-0.18% — which Google says means that out of every 10,000 views on YouTube, 16-18 come from violative content. It said that figure is down by more than 70% when compared to the same quarter of 2017 — crediting its investments in machine learning as largely being responsible for the drop.
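The arithmetic behind that headline figure is straightforward to sanity-check — a back-of-the-envelope version (ours, not Google’s methodology) looks like this:

```python
def violative_view_rate(violative_views: int, total_views: int) -> float:
    # VVR: views of policy-violating content as a percentage of all views
    return violative_views / total_views * 100

# Google's stated range: 16-18 violative views per 10,000 total views
print(violative_view_rate(16, 10_000))  # 0.16
print(violative_view_rate(18, 10_000))  # 0.18
```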

However, as Geurkink noted, the VVR is of limited use without Google releasing more data to contextualize and quantify how far its AI was involved in accelerating views of content its own rules state shouldn’t be viewed on its platform. Without that key data the suspicion must be that the VVR is a nice bit of misdirection.

“What would be going further than [VVR] — and what would be really, really helpful — is understanding what’s the role that the recommendation algorithm plays in this?” Geurkink told us on that, adding: “That’s what is a complete blackbox still. In the absence of greater transparency [Google’s] claims of progress have to be taken with a grain of salt.”

Google also flagged a 2019 change it made to how YouTube’s recommender algorithm handles ‘borderline content’ — aka, content that doesn’t violate policies but falls into a problematic grey area — saying that that tweak had also resulted in a 70% drop in watchtime for this type of content.

Although the company confirmed this borderline category is a moveable feast — saying it factors in changing trends as well as context and also works with experts to determine what gets classed as borderline — which makes the aforementioned percentage drop pretty meaningless since there’s no fixed baseline to measure against.

It’s notable that Google’s response to Mozilla’s report makes no mention of the poor experience reported by survey participants in non-English-speaking markets. And Geurkink suggested that, in general, many of the claimed mitigating measures YouTube applies are geographically limited — i.e. to English-speaking markets like the US and UK. (Or at least arrive in those markets first, before a slower rollout to other places.) 

A January 2019 tweak to reduce amplification of conspiracy theory content in the US was only expanded to the UK market months later — in August — for example.

“YouTube, for the past few years, have only been reporting on their progress of recommendations of harmful or borderline content in the US and in English-speaking markets,” she also said. “And there are very few people questioning that — what about the rest of the world? To me that is something that really deserves more attention and more scrutiny.”

We asked Google to confirm whether it had since applied the 2019 conspiracy theory related changes globally — and a spokeswoman told us that it had. But the much higher rate of reports made to Mozilla of (admittedly more broadly defined) ‘regrettable’ content in non-English-speaking markets remains notable.

And while there could be other factors at play, which might explain some of the disproportionately higher reporting, the finding may also suggest that, where YouTube’s negative impacts are concerned, Google directs the greatest resource at markets and languages where its reputational risk and the capacity of its machine learning tech to automate content categorization are strongest.

Yet any such unequal response to AI risk obviously means leaving some users at greater risk of harm than others — adding another harmful dimension and layer of unfairness to what is already a multi-faceted, many-headed-hydra of a problem.

It’s yet another reason why leaving it up to powerful platforms to rate their own AIs, mark their own homework and counter genuine concerns with self-serving PR is for the birds.

(In additional filler background remarks it sent us, Google described itself as the first company in the industry to incorporate “authoritativeness” into its search and discovery algorithms — without explaining when exactly it claims to have done that or how it imagined it would be able to deliver on its stated mission of ‘organizing the world’s information and making it universally accessible and useful’ without considering the relative value of information sources… So color us baffled at that claim. Most likely it’s a clumsy attempt to throw disinformation shade at rivals.)

Returning to the regulation point, an EU proposal — the Digital Services Act — is set to introduce some transparency requirements on large digital platforms, as part of a wider package of accountability measures. And asked about this Geurkink described the DSA as “a promising avenue for greater transparency”.

But she suggested the legislation needs to go further to tackle recommender systems like the YouTube AI.

“I think that transparency around recommender systems specifically and also people having control over the input of their own data and then the output of recommendations is really important — and is a place where the DSA is currently a bit sparse, so I think that’s where we really need to dig in,” she told us.

One idea she voiced support for is having a “data access framework” baked into the law — to enable vetted researchers to get more of the information they need to study powerful AI technologies — i.e. rather than the law trying to come up with “a laundry list of all of the different pieces of transparency and information that should be applicable”, as she put it.

The EU also now has a draft AI regulation on the table. The legislative plan takes a risk-based approach to regulating certain applications of artificial intelligence. However it’s not clear whether YouTube’s recommender system would fall under one of the more closely regulated categories — or, as seems more likely (at least with the initial Commission proposal), fall entirely outside the scope of the planned law.

“An earlier draft of the proposal talked about systems that manipulate human behavior which is essentially what recommender systems are. And one could also argue that’s the goal of advertising at large, in some sense. So it was sort of difficult to understand exactly where recommender systems would fall into that,” noted Geurkink.

“There might be a nice harmony between some of the robust data access provisions in the DSA and the new AI regulation,” she added. “I think transparency is what it comes down to, so anything that can provide that kind of greater transparency is a good thing.

“YouTube could also just provide a lot of this… We’ve been working on this for years now and we haven’t seen them take any meaningful action on this front but it’s also, I think, something that we want to keep in mind — legislation can obviously take years. So even if a few of our recommendations were taken up [by Google] that would be a really big step in the right direction.”

Germany’s federal information commissioner has run out of patience with Facebook.

Last month, Ulrich Kelber wrote to government agencies “strongly recommend[ing]” they close down their official Facebook Pages because of ongoing data protection compliance problems and the tech giant’s failure to fix the issue.

In the letter, Kelber warns the government bodies that he intends to start taking enforcement action from January 2022 — essentially giving them a deadline of next year to pull their pages from Facebook.

So expect not to see official Facebook Pages of German government bodies in the coming months.

While Kelber’s own agency, the BfDi, does not appear to have a Facebook Page (although Facebook’s algorithms appear to generate an artificial stub if you try searching for one) plenty of other German federal bodies do — such as the Ministry of Health, whose public page has more than 760,000 followers.

The only alternative to such pages vanishing from Facebook’s platform by Christmas — or else being ordered to be taken down early next year by Kelber — seems to be for the tech giant to make more substantial changes to how its platform operates than it has offered so far, allowing the Pages to be run in Germany in a way that complies with EU law.

However Facebook has a long history of ignoring privacy expectations and data protection laws.

It has also, very recently, shown itself more than willing to reduce the quality of information available to users — if doing so furthers its business interests (such as to lobby against a media code law, as users in Australia can attest).

So it looks rather more likely that German government agencies will be the ones having to quietly bow off the platform soon…

Kelber says he’s avoided taking action over the ministries’ Facebook Pages until now on account of the public bodies arguing that their Facebook Pages are an important way for them to reach citizens.

However his letter points out that government bodies must be “role models” in matters of legal compliance — and therefore have “a particular duty” to comply with data protection law. (The EDPS is taking a similar tack by reviewing EU institutions’ use of US cloud services giants.)

Per his assessment, an “addendum” provided by Facebook in 2019 does not rectify the compliance problem and he concludes that Facebook has made no changes to its data processing operations to enable Page operators to comply with requirements set out in the EU’s General Data Protection Regulation.

A ruling by Europe’s top court, back in June 2018, is especially relevant here — as it held that the administrator of a fan page on Facebook is jointly responsible with Facebook for the processing of the data of visitors to the page.

That means that the operators of such pages also face data protection compliance obligations, and cannot simply assume that Facebook’s T&Cs provide them with legal cover for the data processing the tech giant undertakes.

The problem, in a nutshell, is that Facebook does not provide Page operators with enough information or assurances about how it processes users’ data — meaning they’re unable to comply with GDPR principles of accountability and transparency because, for example, they’re unable to adequately inform followers of their Facebook Page what is being done with their data.

There is also no way for Facebook Page operators to switch off (or otherwise block) Facebook’s wider processing of their Page followers’ data — even if they don’t make use of any of the analytics features Facebook provides to Page operators.

The processing still happens.

This is because Facebook operates a take-it-or-leave it ‘data maximizing’ model — to feed its ad-targeting engines.

But it’s an approach that could backfire if it ends up permanently reducing the quality of the information available on its network because there’s a mass migration of key services off its platform — such as if, for example, every government agency in the EU deleted its Facebook Page.

A related blog post on the BfDi’s website also holds out the hope that “data protection-compliant social networks” might develop in the Facebook compliance vacuum.

Certainly there could be a competitive opportunity for alternative platforms that seek to sell services based on respecting users’ rights.

The German Federal Ministry of Health’s verified Facebook Page (Screengrab: TechCrunch/Natasha Lomas)

Discussing the BfDi’s intervention, Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law, told TechCrunch: “This development is strictly connected to recent CJEU case law on joint controllership. In particular, it takes into account the Wirtschaftsakademie ruling, which found that the administrator of a Facebook page should be considered a joint controller with Facebook in respect of processing the personal data of the visitors of the page.

“This does not mean that the page administrator and Facebook share equal responsibility for all stages of the data processing activities linked to the use of the Facebook page. However, they must have an agreement in place with a clear allocation of roles and responsibilities. According to the German Federal Commissioner for Data Protection and Freedom of Information, Facebook’s current data protection ‘Addendum’ would not seem to be sufficient to meet the latter requirement.”

“It is worth noting that, in its Fashion ID ruling, the CJEU has taken the view that the GDPR’s obligations for joint controllers are commensurate with those data processing stages in which they actually exercise control,” Tosoni added. “This means that the data protection obligations of a Facebook page administrator would normally tend to be quite limited.”

Warnings for other social media services

This particular compliance issue affects Facebook in Germany — and potentially any other EU market. But other social media services may face similar problems too.

For example, Kelber’s letter flags an ongoing audit of Instagram, TikTok and Clubhouse — warning of “deficits” in the level of data protection they offer too.

He goes on to recommend that agencies avoid using the three apps on business devices.  

In an earlier, 2019 assessment of government bodies’ use of social media services, the BfDi suggested usage of Twitter could — by contrast — be compliant with data protection rules. At least if privacy settings were fully enabled and analytics disabled, for example.

At the time the BfDi also warned that Facebook-owned Instagram faced similar compliance problems to Facebook, being subject to the same “abusive” approach to consent he said was taken by the whole group.

Reached for comment on Kelber’s latest recommendations to government agencies, Facebook did not engage with our specific questions — sending us this generic statement instead:

“At the end of 2019, we updated the Page Insights addendum and clarified the responsibilities of Facebook and Page administrators, for which we took questions regarding transparency of data processing into account. It is important to us that also federal agencies can use Facebook Pages to communicate with people on our platform in a privacy-compliant manner.”

An additional complication for Facebook has arisen in the wake of the legal uncertainty following last summer’s Schrems II ruling by the CJEU.

Europe’s top court invalidated the EU-US Privacy Shield arrangement, which had allowed companies to self-certify an adequate level of data protection, removing the easiest route for transferring EU users’ personal data over to the US. And while the court did not outlaw international transfers of EU users’ personal data altogether, it made it clear that data protection agencies must intervene and suspend data flows if they suspect information is being moved to a place, and in such a way, that it’s put at risk.

Following Schrems II, transfers to the US are clearly problematic where the data is being processed by a US company that’s subject to FISA 702, as is the case with Facebook.

Indeed, Facebook’s EU-to-US data transfers were the original target of the complaint in the Schrems II case (filed by the eponymous Max Schrems). And a decision remains pending on whether the tech giant’s lead EU data supervisor will follow through on a preliminary order, issued last year, that it should suspend its EU data flows — with that decision due in the coming months.

Even ahead of that long-anticipated reckoning in Ireland, other EU DPAs are now stepping in to take action — and Kelber’s letter references the Schrems II ruling as another issue of concern.

Tosoni agrees that GDPR enforcement is finally stepping up a gear. But he also suggested that compliance with the Schrems II ruling comes with plenty of nuance, given that each data flow must be assessed on a case by case basis — with a range of supplementary measures that controllers may be able to apply.

“This development also shows that European data protection authorities are getting serious about enforcing the GDPR data transfer requirements as interpreted by the CJEU in Schrems II, as the German Federal Commissioner for Data Protection and Freedom flagged this as another pain point,” he said.

“However, the German Federal Commissioner sent out his letter on the use of Facebook pages a few days before the EDPB adopted the final version of its recommendations on supplementary measures for international data transfers following the CJEU Schrems II ruling. Therefore, it remains to be seen how German data protection authorities will take these new recommendations into account in the context of their future assessment of the GDPR compliance of the use of Facebook pages by German public authorities.

“Such recommendations do not establish a blanket ban on data transfers to the US but impose the adoption of stringent safeguards, which will need to be followed to keep on transferring the data of German visitors of Facebook pages to the US.”

Another recent judgment by the CJEU reaffirmed that EU data protection agencies can, in certain circumstances, take action when they are not the lead data supervisor for a specific company under the GDPR’s one-stop-shop mechanism — expanding the possibility for litigation by watchdogs in Member States if a local agency believes there’s an urgent need to act.

Although, in the case of the German government bodies’ use of Facebook Pages, the earlier CJEU ruling on joint controllership means the BfDi already has clear jurisdiction to target these agencies’ Facebook Pages itself.


The UK’s more expansive, post-Brexit role in digital regulation continues to be felt today via a policy change by Google, which has announced that it will, in the near future, only run ads for financial products and services when the advertiser in question has been verified by the financial watchdog, the FCA.

The Google Ads Financial Products and Services policy will be updated from August 30, per Google, which specifies that it will start enforcing the new policy from September 6 — meaning that purveyors of online financial scams who’ve been relying on its ad network to net their next victim still have more than two months to harvest unsuspecting clicks before the party is over (well, in the UK, anyway).

Google’s decision to allow only regulator authorized financial entities to run ads for financial products & services follows warnings from the Financial Conduct Authority that it may take legal action if Google continued to accept unscreened financial ads, as the Guardian reported earlier.

The FCA told a parliamentary committee this month that it’s able to contemplate taking such action as a result of no longer being bound by European Union rules on financial adverts, which do not extend to online platforms, per the newspaper’s report.

Until gaining the power to go after Google itself, the FCA appears to have been trying to combat the scourge of online financial fraud by paying Google large amounts of UK taxpayer money to fight scams with anti-scam warnings.

According to the Register, the FCA paid Google more than £600,000 (~$830k) in 2020 and 2021 to run ‘anti-scam’ ads — with the regulator essentially engaged in a bidding war with scammers to pour enough money into Google’s coffers so that regulator warnings about financial scams might appear higher than the scams themselves.

The full-facepalm situation was presumably highly lucrative for Google. But the threat of legal action appears to have triggered a policy rethink.

Writing in its blog post, Ronan Harris, a VP and MD for Google UK & Ireland, said: “Financial services advertisers will be required to demonstrate that they are authorised by the UK Financial Conduct Authority or qualify for one of the limited exemptions described in the UK Financial Services verification page.”

“This new update builds on significant work in partnership with the FCA over the last 18 months to help tackle this issue,” he added. “Today’s announcement reflects significant progress in delivering a safer experience for users, publishers and advertisers. While we understand that this policy update will impact a range of advertisers in the financial services space, our utmost priority is to keep users safe on our platforms — particularly in an area so disproportionately targeted by fraudsters.”

The company’s blog also claims that it has pledged $5M in advertising credits to support financial fraud public awareness campaigns in the UK. So not $5M in actual money then.

Per the Register, Google did offer to refund the FCA’s anti-scam ad spend — but, again, with advertising credits.

The UK parliament’s Treasury Committee was keen to know whether the tech giant would be refunding the spend in cash. But the FCA’s director of enforcement and market insight, Mark Steward, was unable to confirm what it would do, according to the Register’s report of the committee hearing.

We’ve reached out to the FCA for comment on Google’s policy change, and with questions about the refund situation, and will update this report with any response.

In recent years the financial watchdog has also been concerned about financial scam ads running on social media platforms.

Back in 2018, legal action by a well-known UK consumer advice personality, Martin Lewis — who filed a defamation suit against Facebook — led the social media giant to add a ‘report scam ad’ button in the market as of July 2019.

However research by consumer group, Which?, earlier this year, suggested that neither Facebook nor Google had entirely purged financial scam ads — even when they’d been reported.

Per the BBC, Which?’s survey found that Google had failed to remove around a third (34%) of the scam adverts reported to it, while Facebook had failed to remove around a quarter (26%).

It’s almost like the incentives for online ad giants to act against lucrative online scam ads simply aren’t pressing enough…

More recently, Lewis has been pushing for scam ads to be included in the scope of the UK’s Online Safety Bill.

The sweeping piece of digital regulation aims to tackle a plethora of so-called ‘online harms’ by focusing on regulating user generated content. However Lewis makes the point that a scammer merely needs to pay an ad platform to promote their fraudulent content for it to escape the scope of the planned rules, telling the Good Morning Britain TV program today that the situation is “ludicrous” and “needs to change”.

It’s certainly a confusing carve-out, as we reported at the time the bill was presented. Nor is it the only confusing component of the planned legislation. However on the financial fraud point the government may believe the FCA has the necessary powers to tackle the problem.

We’ve contacted the Department for Digital, Culture, Media and Sport for comment.

An international coalition of consumer protection, digital and civil rights organizations and data protection experts has added its voice to growing calls for a ban on what’s been billed as “surveillance-based advertising”.

The objection is to a form of digital advertising that relies upon a massive apparatus of background data processing which sucks in information about individuals, as they browse and use services, to create profiles which are used to determine which ads to serve (via multi-participant processes like the high speed auctions known as real-time bidding).
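For readers unfamiliar with the mechanics of those auctions, here’s a deliberately minimal sketch — real exchanges speak the OpenRTB protocol, with far richer bid requests (and, increasingly, first-price rules); the names below are invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Bid:
    bidder: str     # e.g. a demand-side platform bidding on behalf of an advertiser
    cpm_usd: float  # offered price per thousand impressions

def second_price_auction(bids: List[Bid]) -> Tuple[Bid, float]:
    # Classic second-price rule: the highest bidder wins but pays
    # (a cent above) the runner-up's price; assumes at least two bids
    ranked = sorted(bids, key=lambda b: b.cpm_usd, reverse=True)
    return ranked[0], ranked[1].cpm_usd + 0.01

winner, clearing_price = second_price_auction(
    [Bid("dsp_a", 4.20), Bid("dsp_b", 3.75), Bid("dsp_c", 2.10)]
)
print(winner.bidder, clearing_price)  # dsp_a wins, pays 3.76 CPM
```

All of this happens in the milliseconds between a page starting to load and the ad slot being filled — which is why the profiling data that feeds the bids has to be assembled in advance.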

The EU’s lead data protection supervisor previously called for a ban on targeted advertising which relies upon pervasive tracking — warning over a multitude of associated rights risks.

Last fall the EU parliament also urged tighter rules on behavioral ads.

Back in March, a US coalition of privacy, consumer, competition and civil rights groups also took collective aim at microtargeting. So pressure is growing on lawmakers on both sides of the Atlantic to tackle exploitative adtech as consensus builds over the damage associated with mass surveillance-based manipulation.

At the same time, momentum is clearly building for pro-privacy consumer tech and services — showing the rising store being placed by users and innovators on business models that respect people’s data.

The growing uptake of such services underlines how alternative, rights-respecting digital business models are not only possible (and accessible, with many freemium offerings) but increasingly popular.

In an open letter addressing EU and US policymakers, the international coalition — which is comprised of 55 organizations and more than 20 experts including groups like Privacy International, the Open Rights Group, the Center for Digital Democracy, the New Economics Foundation, Beuc, Edri and Fairplay — urges legislative action, calling for a ban on ads that rely on “systematic commercial surveillance” of Internet users in order to serve what Facebook founder Mark Zuckerberg likes, euphemistically, to refer to as ‘relevant ads’.

The problem with Zuckerberg’s (self-serving) framing is that, as the coalition points out, the vast majority of consumers don’t actually want to be spied upon to be served with these creepy ads.

Any claimed ‘relevance’ is irrelevant to consumers who experience ad-stalking as creepy and unpleasant. (And just imagine how the average Internet user would feel if they could peek behind the adtech curtain — and see the vast databases where people are profiled at scale so their attention can be sliced and diced for commercial interests and sold to the highest bidder).

The coalition points to a report examining consumer attitudes to surveillance-based advertising, prepared by one of the letter’s signatories (the Norwegian Consumer Council; NCC), which found that only one in ten people are positive about commercial actors collecting information about them online — and only one in five think ads based on personal information are okay.

A full third of respondents to the survey were “very negative” about microtargeted ads — while almost half think advertisers should not be able to target ads based on any form of personal information.

The report also highlights a sense of impotence among consumers when they go online, with six out of ten respondents feeling that they have no choice but to give up information about themselves.

That finding should be particularly concerning for EU policymakers as the bloc’s data protection framework is supposed to provide citizens with a suite of rights related to their personal data that should protect them against being strong-armed to hand over info — including stipulating that if a data controller intends to rely on user consent to process data then consent must be informed, specific and freely given; it can’t be stolen, strong-armed or sneaked through using dark patterns. (Although that remains all too often the case.)

Forced consent is not legal under EU law — yet, per the NCC’s European survey, a majority of respondents feel they have no choice but to be creeped on when they use the Internet.

That in turn points to an ongoing EU enforcement failure over major adtech-related complaints, scores of which have been filed in recent years under the General Data Protection Regulation (GDPR) — some of which are now over three years old (yet still haven’t resulted in any action against rule-breakers).

Over the past couple of years EU lawmakers have acknowledged problems with patchy GDPR enforcement — and it’s interesting to note that the Commission suggested some alternative enforcement structures in its recent digital regulation proposals, such as for oversight of very large online platforms in the Digital Services Act (DSA).

In the letter, the coalition suggests the DSA as the ideal legislative vehicle to contain a ban on surveillance-based ads.

Negotiations to shape a final proposal which EU institutions will need to vote on remain ongoing — but it’s possible the EU parliament could pick up the baton to push for a ban on surveillance ads. It has the power to amend the Commission’s legislative proposals and its approval is needed for draft laws to be adopted. So there’s plenty still to play for.

“In the US, we urge legislators to enact comprehensive privacy legislation,” the coalition adds.

The coalition is backing up its call for a ban on surveillance-based advertising with another report (also by the NCC) which lays out the case against microtargeting — summarizing the raft of concerns that have come to be attached to manipulative ads as awareness of the adtech industry’s vast, background people-profiling and data trading has grown.

Listed concerns not only focus on how privacy-stripping practices are horrible for individual consumers (enabling the manipulation, discrimination and exploitation of individuals and vulnerable groups) but also flag the damage to digital competition as a result of adtech platforms and data brokers intermediating and cannibalizing publishers’ revenues — eroding, for example, the ability of professional journalism to sustain itself and creating the conditions where ad fraud has been able to flourish.

Another contention is that the overall health of democratic societies is put at risk by surveillance-based advertising — as the apparatus and incentives fuel the amplification of misinformation and create security risks, and even national security risks. (Strong and independent journalism is also, of course, a core plank of a healthy democracy.)

“This harms consumers and businesses, and can undermine the cornerstones of democracy,” the coalition warns.

“Although we recognize that advertising is an important source of revenue for content creators and publishers online, this does not justify the massive commercial surveillance systems set up in attempts to ‘show the right ad to the right people’,” the letter goes on. “Other forms of advertising technologies exist, which do not depend on spying on consumers, and cases have shown that such alternative models can be implemented without significantly affecting revenue.

“There is no fair trade-off in the current surveillance-based advertising system. We encourage you to take a stand and consider a ban of surveillance-based advertising as part of the Digital Services Act in the EU, and for the U.S. to enact a long overdue federal privacy law.”

The letter is just the latest salvo against ‘toxic adtech’. And advertising giants like Facebook and Google have — for several years now — seen the pro-privacy writing on the wall.

Hence Facebook’s claimed ‘pivot to privacy‘; its plan to lock in its first party data advantage (by merging the infrastructure of different messaging products); and its keen interest in crypto.

It’s also why Google has been working on a stack of alternative adtech that it wants to replace third party tracking cookies. Although its proposed replacement — the so-called ‘Privacy Sandbox‘ — would still enable groups of Internet users to be opaquely clustered by its algorithms into ‘interest’ buckets for ad targeting purposes, which still doesn’t look great for Internet users’ rights either. (And concerns have been raised on the competition front too.)

Where its ‘Sandbox’ proposal is concerned, Google may well be factoring in the possibility of legislation that outlaws — or, at least, more tightly controls — microtargeting. And it’s therefore trying to race ahead with developing alternative adtech that would have much the same targeting potency (maintaining its market power) but, by swapping out individuals for cohorts of web users, could potentially sidestep a ban on ‘microtargeting’ on a technicality.
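To make the cohort idea concrete: rather than targeting you, the ad system targets a bucket of people whose browsing looks like yours. The toy version below is not Google’s actual algorithm (its FLoC proposal used locality-sensitive hashing over browsing histories so that similar users cluster together); it just hashes a topic set into one of a fixed number of buckets:

```python
import hashlib
from typing import List

def assign_cohort(browsing_topics: List[str], num_cohorts: int = 1024) -> int:
    # Toy stand-in for cohort assignment: identical topic sets always land in
    # the same bucket; real proposals use locality-sensitive hashing so that
    # merely *similar* histories end up clustered together
    digest = hashlib.sha256("|".join(sorted(browsing_topics)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_cohorts

# Ads are then targeted at the cohort ID, never the individual user
print(assign_cohort(["cycling", "diy", "cooking"]))
```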

Legislators addressing this issue will therefore need to be smart in how they draft any laws intended to tackle the damage caused by surveillance-based advertising.

Certainly they will if they want to prevent the same old small- and large-scale manipulation abuses from being perpetuated.

The NCC’s report points to what it dubs “good alternatives” for digital advertising: models which don’t depend on the systematic surveillance of consumers to function — and which, it also argues, provide advertisers and publishers with “more oversight and control over where ads are displayed and which ads are being shown”.

The problem of ad fraud is certainly massively underreported. But, well, it’s instructive to recall how often Facebook has had to ‘fess up to problems with self-reported ad metrics…

“It is possible to sell advertising space without basing it on intimate details about consumers. Solutions already exist to show ads in relevant contexts, or where consumers self-report what ads they want to see,” the NCC’s director of digital policy, Finn Myrstad, noted in a statement.

“A ban on surveillance-based advertising would also pave the way for a more transparent advertising marketplace, diminishing the need to share large parts of ad revenue with third parties such as data brokers. A level playing field would contribute to giving advertisers and content providers more control, and keep a larger share of the revenue.”