
Oracle has begun auditing TikTok’s algorithms and content moderation models, according to a new report from Axios out this morning. Those reviews began last week and follow TikTok’s June announcement that it had moved its U.S. traffic to Oracle servers, amid claims that U.S. user data had been accessed by TikTok employees in China.

The new arrangement gives Oracle the ability to monitor TikTok’s systems, which is meant to help the company assure U.S. lawmakers that its app is not being manipulated by Chinese government authorities. Oracle will audit how TikTok’s algorithm surfaces content to “ensure outcomes are in line with expectations,” and check that those models have not been manipulated, the report said. In addition, Oracle will regularly audit TikTok’s content moderation practices, including both its automated systems and the decisions made by human moderators enforcing TikTok policy.

TikTok’s moderation policies have been controversial in years past. In 2019, The Washington Post reported TikTok’s U.S. employees had often been ordered to restrict some videos on its platform at the behest of Beijing-based teams, and that teams in China would sometimes block or penalize certain videos out of caution about Chinese government restrictions. That same year, The Guardian also reported TikTok had been telling its moderators to censor videos that mentioned things like Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong, per a set of leaked documents. In 2020, The Intercept reported TikTok moderators were told to censor political speech in livestreams and to suppress posts from “undesirable users” — the unattractive, poor or disabled, its documents said.

All the while, TikTok disputed the various claims — calling leaked documents outdated, for instance, in the latter two scenarios. It also continued to insist that its U.S. arm didn’t take instructions from its Chinese parent, ByteDance.

But a damning June 2022 report by BuzzFeed News showed that TikTok’s connection to China was closer than the company had claimed. Citing recordings from 80 internal TikTok meetings, the news outlet found that U.S. user data had been repeatedly accessed by staff in China.

Following BuzzFeed’s reporting, TikTok announced that it was moving all U.S. traffic to Oracle’s infrastructure cloud service — a move designed to keep TikTok’s U.S. user data from prying eyes.

That agreement, part of a larger initiative called “Project Texas,” had been in progress for over a year and focused on further separating TikTok’s U.S. operations from China and on employing an outside firm to oversee its algorithms.

Now, it seems, Oracle is in charge of keeping an eye on TikTok to help prevent U.S. user data from being funneled to China. The deal steps up Oracle’s involvement with TikTok: the company is now not only the host for TikTok’s U.S. user data, but an auditor that could later back up or dispute TikTok’s claims that its systems operate fairly and without China’s influence.

Oracle and TikTok have an interesting history. Towards the end of the Trump administration, the former president tried to force a sale between the two companies, bringing in long-time supporter Larry Ellison, Oracle’s founder and CTO, to help broker the deal for his company. That deal eventually fell apart in February 2021, but the story didn’t end there, as it turned out.

But while this new TikTok-Oracle agreement is significant for both the tech industry and politics, the deal doesn’t necessarily make Oracle a more powerful player in the cloud infrastructure market.

Even with TikTok’s business, Oracle’s cloud infrastructure service represents just a fraction of the market. In the most recent quarter, Synergy Research, a firm that tracks this data, reported that the cloud infrastructure market reached almost $55 billion, with Amazon leading the way at 34%, Microsoft in second at 21% and Google in third place at 10%. Oracle remains under 2%, says John Dinsdale, a principal analyst at the firm.
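
For a rough sense of scale, here is a back-of-the-envelope sketch of what those shares imply in dollar terms (the figures are approximations of the Synergy numbers above, and Oracle’s share is rounded up from “just below 2%”):

```python
# Back-of-the-envelope: implied quarterly cloud infrastructure revenue
# per vendor, based on Synergy Research's reported shares of an ~$55B market.
market_total_bn = 55.0  # approximate quarterly market size, USD billions

shares = {
    "Amazon": 0.34,
    "Microsoft": 0.21,
    "Google": 0.10,
    "Oracle": 0.02,  # "just below 2%" per Dinsdale, rounded up here
}

for vendor, share in shares.items():
    print(f"{vendor}: ~${market_total_bn * share:.1f}B per quarter")
# Amazon: ~$18.7B, Microsoft: ~$11.6B, Google: ~$5.5B, Oracle: ~$1.1B
```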

“Oracle’s share of the worldwide cloud infrastructure services market remains at just below 2% and has shown no signs of meaningful increase. So Oracle’s cloud revenue growth is pretty much keeping pace with overall market growth,” Dinsdale told TechCrunch. Synergy defines “cloud infrastructure services” as Infrastructure as a Service, Platform as a Service and hosted private cloud services. Dinsdale points out that Oracle’s SaaS business is much stronger.

Jailed Kremlin critic Alexey Navalny has hit out at adtech giants Meta and Google for shutting off advertising inside Russia following the country’s invasion of Ukraine, a move he argues has been a huge boon to Putin’s regime because it makes it harder for the opposition to get anti-war messaging out.

The remarks came after Navalny was asked to address a conference on democracy. Not in person, of course, as he remains incarcerated in Russia; rather, he posted the comments on his website.

“It would be downright banal to say that the new information world can be both a boon for democracy and a huge bane. Nevertheless, it is so,” he writes. “Our organization has built all its activities on information technology and has achieved serious success with it, even when it was practically outlawed. And information technology is being actively used by the Kremlin to arrest participants in protest rallies. It is proudly claimed that all of them will be recognized even with their faces covered.

“The Internet gives us the ability to circumvent censorship. Yet, at the same time, Google and Meta, by shutting down their advertising in Russia, have deprived the opposition of the opportunity to conduct anti-war campaigns, giving a grandiose gift to Putin.”

Navalny has previously called for Meta and Google to allow their adtech to be weaponized against Putin’s propaganda machine — arguing that highly scalable ad targeting tools could be used to circumvent restrictions on access to free information imposed by the regime as a way to show Russian citizens the bloody reality of the “special military operation” in Ukraine.

Now, in thinly veiled criticism of the tech giants — which would presumably be delivered in a sarcastic tone if his address were being given in person — Navalny writes: “Should the Internet giants continue to pretend that it’s ‘just business’ for them and act like ‘neutral platforms’? Should they continue to claim that social network users in the United States and Eritrea, in Denmark and Russia, should operate under the same rules? How should the internet treat government directives, given that Norway and Uganda seem to have slightly different ideas about the role of the internet and democracy?

“It’s all very complicated and very controversial, and it all needs to be discussed while keeping in mind that the discussion should also lead to solutions.”

“We love technology. We love social networks. We want to live in a free informational society. So let’s figure out how to keep the bad guys from using the information society to drive their nations and all of us into the dark ages,” he adds.

Meta and Google were contacted for a response to the criticism but at the time of writing neither had provided comment.

The tech industry’s response to the war in Ukraine remains patchy, with Western companies increasingly closing down services inside Russia — but not all their services.

For example, despite shuttering advertising inside Russia, Meta and Google have not shut down access to their social platforms, Facebook and YouTube — likely as they would argue these services help Russians access independent information vs the state-controlled propaganda that fills traditional broadcast media channels in the country.

In Facebook’s case, it’s an argument that was bolstered when Russia’s Internet regulator targeted the service soon after the invasion of Ukraine — initially restricting access; and then, in early March, announcing that Facebook would be blocked after the company had restricted access to a number of state-linked media outlets.

Interestingly, though, Google-owned YouTube appears to have escaped a direct state block — although it has received plenty of warnings from Russia’s Internet regulator in recent months, including for distributing “anti-Russian ads”.

This discrepancy suggests the Kremlin continues (for now) to view YouTube as an important conduit for its own propaganda — likely owing to the platform’s huge popularity in Russia, where use of YouTube outstrips locally developed alternatives (like GazProm Media-owned Rutube), which would be far easier for Putin’s regime to censor.

This is not the case for Facebook — where the leading local alternative, VK.com, has been massively popular for years — making it easier for the Kremlin to block access to the Western equivalent since Russians have less incentive to try to circumvent a block by using a VPN.

However if the Kremlin is intent on shaping citizens’ access to digital information over the long haul it may not be content to let YouTube’s popularity stand — and could opt to use technical means to degrade access while actively promoting local alternatives, as a strategy to drive usage of local rivals until they’re big enough to supplant the influence of the foreign giant. (And, indeed, reports have suggested the Kremlin is sinking money into Rutube.)

Given YouTube’s ongoing influence in Russia — coupled with rising threats from Russia’s state regulator that YouTube must remove ‘banned content’ or face fines and/or a slowdown of the service — Navalny may have, at least, an overarching point that Google risks playing right into Putin’s hands.

The jailed opposition politician has also been even more critical of local search giant Yandex — over its equivalent service to Google News, which operates in a regulatory regime that requires it to aggregate only state-approved media sources, allowing the Kremlin to shape the narrative presented to the millions of Russians who visit a search portal homepage where this News feed is displayed.

Back in April, Yandex announced that it had signed a deal to sell News and another media property, called Zen, to VK — but it remains to be seen how, or indeed whether, this ownership change will make any difference to the state-controlled news narrative Russians are routinely exposed to when they visit popular local services.

While the world continues to wonder what ‘free speech absolutist‘ and gadfly billionaire Elon Musk might mean for the future of Twitter, the European Union has chalked up an early PR win in the long game of platform regulation — extracting agreement from the Tesla founder that its freshly rebooted approach toward content policy sounds like good shiz.

EU internal market commissioner, Thierry Breton, paid a visit to the would-be Twitter owner, Musk, yesterday for a meeting at his gigafactory in Austin, Texas, where we’re told regulation of online speech was a key discussion topic, alongside “mutual interest” supply chain chat.

Breton was keen to introduce Musk to the newly agreed Digital Services Act (DSA), which will come into force across the bloc in the coming years — likely in early 2023 for larger platforms such as Twitter — with the aim of harmonizing content governance rules and dialling up consumer protections. Breaches of the regulation, meanwhile, can attract fines of up to 6% of global annual turnover.

Asked whether the newly agreed regulation fits with his planned approach for Twitter, Musk responded: “I think it’s exactly aligned with my thinking”.

“It’s been a great discussion,” Musk also said in the brief Q&A with Breton. “I agree with everything you said really. I think we’re very much of the same mind. And I think anything that my companies can do that would be beneficial to Europe, we want to do that. That’s what I’m saying.”

“On social media, they had a constructive exchange on the impact of the recently adopted EU Digital Services Act on online platforms in areas such as freedom of speech, algorithm transparency, or user responsibility,” a spokesman for Breton’s office also told us — pointing to a “short video” summary which was promptly posted to Twitter, post-meeting, where Musk can be heard making the aforementioned remarks.

“Great meeting!” he also tweeted afterwards. “We are very much on the same page.”

Setting aside the awkward body language between Musk and Breton (defensive vs obsequious), it remains to be seen whether the former might have the last (hollow) laugh — should it turn out he’s inadvertently highlighted a major hole in the bloc’s plan.

In recent weeks, since news of Musk’s $44BN bid to buy Twitter broke, he’s suggested his rule of thumb for moderating speech on the social media platform will cleave to local laws that require the removal of illegal speech — but leave pretty much everything else up.

Which could mean he’ll happily open the floodgates to toxic abuse and mindless conspiracy theories — aka ‘legal but harmful speech’…

Europe’s grand plan for modernizing platform rules, meanwhile, essentially sidesteps this fuzzier (controversial) area of legal but harmful speech in favor of fixing hard-and-fast rules to harmonize speedy takedowns of strictly illegal stuff (e.g. CSAM; copyright infringement; hate speech in certain markets; another EU regulation that’s due to start applying this year also targets terrorist content with a one-hour takedown rule).

So it’s perhaps no wonder that Musk came away from the meeting with the EU commissioner professing their approaches align — assuming Breton’s core message was that the rules focus on illegal speech.

Confirmation bias is a helluva drug!

That said, EU lawmakers do have a number of (softer) mechanisms in the pipe to tackle fuzzier content problems such as disinformation — and to set transparency rules around political ads. So it may be that Musk hasn’t fully grokked all the ways the bloc intends to pressure platform providers not to spread other types of toxic and/or harmful content.

If he succeeds in buying Twitter, one thing is clear: Musk will be fielding many more requests for meetings from lawmakers at home and abroad. And if he chooses to pull out the speech stops, and let toxic abuse and damaging disinformation rip, he’ll quickly find a lot of those requests turning into hard and fast demands.

Elon Musk joked earlier this month that he hoped buying Twitter won’t be too painful for him. But the self-proclaimed “free speech absolutist” may indeed be inviting a world of pain for himself (and his wallet) if he sets the platform on a collision course with the growing mass of legislation now being applied to social media services all around the world.

The US lags far behind regions like Europe when it comes to digital rule-making. So Musk may simply not have noticed that the bloc just agreed on the fine detail of the Digital Services Act (DSA): A major reboot of platform rules that’s intended to harmonize governance procedures to ensure the swift removal of illegal speech, including by dialling up fines on so-called “very large online platforms” (aka, VLOPs; a classification that will likely apply to Twitter) to 6% of global annual turnover.

On news of Musk’s winning bid for Twitter, EU lawmakers were swift to point out the hard limits coming down the pipe.

“The Commission can fine [non-compliant platforms] 6% of worldwide turnover,” emphasizes MEP Paul Tang, discussing why he believes the DSA will be able to rein in any more absolutist tendencies Musk may have.

“Given the profit margin currently for Twitter that is a lot — because the net profit margin is negative, that’s the reason why he bought it in the first place I assume… It doesn’t make a profit and if it makes a profit it will be chipped away by the penalty. So he really has to find ways to make it more profitable to make sure that the company doesn’t lose money.”

Tang also points out that Musk’s $43BN bid for Twitter is financed, in large part, with loans — rather than him just cashing in Tesla equity to fully fund the bid, which means Musk is not as free to act on his impulses as he may otherwise have been.

“He’s in debt. He needs to pay his debts — his creditors… If he’d used all his equity to buy Twitter then he would have had more leeway, in a sense. He could take any loss he wants — up to a point, depending on the value of Tesla. But he’s in debt now, in this construction, so at least he has to pay for the interest — so the company needs to make a profit. And even more than before, I think.”

“It’s 6% for every failure to comply. It would be an expensive hobby,” agrees MEP Alexandra Geese, dismissing the idea that the billionaire will just view any DSA fines as the equivalent of ‘parking tickets’ to be picked off and flicked away.

As well as headline fines of up to 6% for breaches of the regulation, repeated failures by a Musk-owned Twitter to comply with the DSA could lead to the European Commission issuing daily fines; suing him for non-compliance; or even ordering a regional block on the service.

Article 41(3) of the regulation sets out powers in the event of repeated, serious infringements which — per snippets of the final text (still pending publication) we’ve seen — include the ability to temporarily block access to a service for four weeks, with the further possibility that that temporary ban could be repeated for a set number of times and thus prolonged for months, plural.

So even if Musk’s fortune (and inclination) were to extend to regularly shelling out for very large fines (up to hundreds of millions of dollars on current Twitter revenue), he may have greater pause about taking a course of ‘free speech’ action that, de facto, limits Twitter’s reach by leading to the service being blocked across the EU — since he claims he’s buying it to defend it as an important speech forum for human “civilization”. (Assuming he’s not narrowly defining that to mean ‘in the U.S. of A.’.)
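
To make that penalty ceiling concrete, here is a minimal sketch of the arithmetic, assuming annual turnover of roughly $5.08BN (Twitter’s reported 2021 revenue; the figure is an assumption for illustration):

```python
# Minimal sketch of the DSA's headline penalty ceiling for a VLOP.
# The revenue figure is an assumption for illustration; the DSA caps
# fines at 6% of global annual turnover.
annual_turnover_usd = 5.08e9  # roughly Twitter's reported 2021 revenue
DSA_FINE_CAP_RATE = 0.06

max_fine = DSA_FINE_CAP_RATE * annual_turnover_usd
print(f"Maximum DSA fine per breach: ~${max_fine / 1e6:.0f}M")
# => Maximum DSA fine per breach: ~$305M
```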

While the full spectrum of the DSA isn’t due to come into force until the start of 2024, rules for VLOPs have a shorter implementation period — of six months — so the regime will likely be up and running for platforms like Twitter in early 2023.

That means if Musk wants to — shall we say — ‘fuck around and find out’ how far he can push the needle on speech absolutism across the EU, he won’t have very long before the Commission and other regulators are empowered to come calling if/when he fails to meet their democratic standard. (In Germany, there are already laws in place for platforms: The country has been regulating hate speech takedowns on social media since 2017 — hence the ‘joke’ has long been that if you want a de-nazified version of Twitter you just change your country to ‘Germany’ in the settings and your Twitter feed will instantly become fascist-free.)

The DSA puts a number of specific obligations on VLOPs that may not exactly be front of mind for Musk as he celebrates an expensive new addition to his company portfolio (albeit, still pending shareholder approval) — including requiring platforms to carry out risk assessments related to the dissemination of illegal content; to consider any negative impacts on fundamental European rights such as privacy and freedom of expression; and look at risks of intentional manipulation of the service (such as election interference) — and then to act upon these assessments by putting in place “reasonable, effective and proportionate” measures to combat the specific systemic risks identified; all of which must be detailed in “comprehensive” yearly reports, among a slew of other requirements.

“An ‘Elonised’ version of Twitter would probably not meet the DSA requirements of articles 26-27,” predicts Mathias Vermeulen, public policy director at the digital rights agency AWO. “This can lead to fines (which Musk doesn’t care about), but it could lead to Twitter being banned in the EU in case of repeated violations. That’s when it really gets interesting: Would he change his ideal vision of Twitter to preserve the EU market? Or is he prepared to drop it because he didn’t buy this as a business opportunity but to ‘protect free speech in the US’?”

The UK, meanwhile — which now sits outside the bloc — has its own, bespoke ‘harms mitigating’ social media-focused legislation on the horizon. The Online Safety Bill now before the country’s parliament bakes in even higher fines (up to 10% of global turnover) and adds another type of penalty into the mix: The threat of jail time for named executives deemed to be failing to comply with regulatory procedures. (Or, put another way, trolling the regulator won’t be tolerated; they’re British… )

Is Musk willing to go to jail over a maximalist conception of free speech? Or would he just avoid ever visiting the UK again — and let his local Twitter execs do the time and take the lumps?

Even if he’s willing to let his staff languish in jail, the UK’s draft legislation also envisages the regulator being able to block non-compliant services in the market — so, again, if Musk kicks against local speech rules he’ll face trading off limits on speech on Twitter with limits on Twitter’s global reach.

“This is not a way to make money,” Musk said earlier this month of his bid for Twitter. “My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization. I don’t care about the economics at all.”

“I think he doesn’t understand quite how big a fight he’s in for, nor how complex free speech is in practice. It’s going to be interesting to watch!” says Paul Bernal, a UK-based professor of IT law at the University of East Anglia. “The big risk (as ever) is that he fails to understand that his own experience of Twitter is very different from what happens to other people.”

Elsewhere, a growing number of countries are setting their own local restrictions and ramping up operational risks for owners of speech-fencing platforms.

This includes a growing number of autocratic (or autocratically leaning) regimes that are actively taking steps to censor the Internet and restrict access to social media — countries such as Russia, Turkey, India or Nigeria — where platforms that are ‘non-compliant’ with state mandates may also face fines, service shut downs, police raids and threats of jail time for local execs.

“A platform might be able to get away with free-speech absolutism if it restricted its operations to the United States with its specific and peculiar first amendment tradition. But most platforms have 90% or more of their users outside the US, and a growing number of governments around the world, both democratic and less so, increasingly want to influence what people are and are not allowed to see online. This goes for the European Union. It clearly goes for China. It also goes for a number of other countries,” says Rasmus Kleis Nielsen, a professor of political communication and director at the Reuters Institute for the Study of Journalism at the University of Oxford.

“Saying ‘I’m for free speech’ is at best a starting point for how private companies can respond to this. If you are rich enough, you can maybe soak up the fines some governments will impose on companies that in the name of free speech (or for other reasons) refuse to do as they are asked to. But in a growing number of cases the next steps include for example going after individual company employees (Indian police has raided local Twitter offices), forcing ISPs to throttle access to a platform (as Turkish government has done), or ultimately block it entirely (as Nigerian government has done).

“So while in the United States, a platform that conducts content moderation on the basis of simplistic slogans will primarily face the fact that free speech is more complicated and ambiguous in practice than it is in theory, and that both users and other customers like advertisers clearly see that and expect companies to manage that complexity as clearly, consistently, and transparently as possible, in the rest of the world every platform will — if it wants to do business — face the fact that governments increasingly want to influence who gets to say what, where, and when.

“And that they (at best through independent judiciary and independent regulators, at worst directly through the executive branch), not individual proprietors or private companies, will want to decide what should and should not be free speech.”

Nothing Musk has said or done suggests he has anything other than a US-centric understanding of ‘free speech’. And that he therefore has a limited understanding of the scope of speech restrictions that can, legally, apply to Twitter — depending on where in the world the service is being used. And the prospect of Musk owning Twitter can’t change any actual laws around speech — however much he makes striking statements equating his ownership to the ‘defence of human civilization’.

Away from the (relatively clearly defined) line of illegal speech, perhaps most obviously (and depressingly), Musk having an overly simplistic understanding of ‘free speech’ could doom Twitter users everywhere to a ‘Groundhog Day’ style repeat of earlier stumbles — when the company’s (US-centric) operators allowed the platform to steep in its worst users’ bile, professing themselves “the free speech wing of the free speech party” while victimized users were essentially censored by the loudest bullies.

It still feels like the very recent past (actually it was circa 2018!) when Twitter’s then CEO and co-founder, Jack Dorsey, appeared to have a micro epiphany about the need for Twitter to factor in “conversational health” if it wanted anyone other than nazis to stick around on its platform. That in turn led to a slow plod of progress by the company to tackle toxicity and improve tools for users to protect themselves from abuse. (And, very latently, to a ban on ‘king’ Twitter bully, Donald Trump.)

A free speech absolutist like Musk — who is expert at using Twitter to bully his own targets — threatens to burn all that hard-won work right back down to ground zero.

But, of course, if he actually wants the world to want to hang out in his ‘town square’ and talk that would be the polar opposite of a smart strategy.

Musk may also try to turn his bullying on international regulators too, of course. He has (in)famously and publicly clashed with US oversight bodies — repeatedly trolling the SEC, via Twitter (ofc) — including tweeting veiled insults which pun on its three letter acronym. Or recently referring to it (or at least some of its California staff) as “those bastards”, in reference to an investigation it had instigated when he tweeted about wanting to take Tesla private.

Musk’s demonstrable rage and discomfort at the SEC’s perceived regulation of his own speech — and his open contempt for a public body whose job it is to police long-standing rules in areas like insider trading — doesn’t exactly bode well for him achieving friction-free relations with the long list of international oversight bodies poised to scrutinize his tenure at Twitter. And with whom he may soon be openly clashing on Twitter.

If he’s not yet taking tweet pot shots at these ‘bastards’ it’s likely because he doesn’t really know they exist yet — but that’s about to change. (After all, EU commissioners are already tweeting Musk to school him on how “quickly” he’ll “adapt” to their “rules”, as internal market commissioner Thierry Breton put it to “Mr Musk” earlier today… Touché!)

The Commission will be in charge of the oversight of VLOPs under the DSA once it’s in force — which means the EU’s executive body will be responsible for deciding whether larger platforms are in breach of the bloc’s governance structure for handling illegal speech, and, if so, for determining appropriate penalties or other measures to encourage them to reset their approach.

Asked whether it has any concerns with Musk owning Twitter — given his free speech “absolutist” agenda — the Commission declined to comment on Twitter’s ownership change, or any individual (and still ongoing) business transaction, but a spokesperson told us: “The Commission will keep monitoring developments as they take place to ensure that once the DSA enters into force, Twitter, like all other online platforms concerned, will follow the rules.”

What does Musk want to do with Twitter?

Musk hasn’t put a whole lot of meat on the bones of exactly what he plans to do with Twitter — beyond broad brush strokes of taking the company private — and claiming his leadership will unlock its “tremendous potential”.

What he has said so far has focused on a few areas — freedom of expression being his seeming chief concern.

Indeed, the first two words of his emoji-laden victory tweet on having his bid to buy Twitter accepted are literally “free speech”.

Though he also listed a few feature ideas, saying for example that he wants to open source Twitter’s algorithms “to increase trust”. He also made “defeating spam bots” part of his mission statement — which may or may not explain another reference in the tweet to “authenticating all humans” (which has privacy advocates understandably concerned).

In a TED interview earlier this month, before the sale deal had been sealed, Musk also responded to the question of why he wanted to buy Twitter by saying: “I think it’s very important for there to be an inclusive arena for free speech”.

He further dubbed the platform a “de facto town square”.

“It’s just really important that people have both the reality and the perception that they are able to speak freely within the bounds of the law,” Musk went on. “One of the things I believe Twitter should do is open source the algorithm. And make any changes to people’s tweets — if they’re emphasized or deemphasized — that action should be made apparent so anyone can see that that action’s been taken. There’s no ‘behind the scenes’ manipulation, either algorithmically or manually.”

“It’s important to the function of democracy, it’s important to the function of the United States as a free country — and many other countries — and actually to help freedom in the world more broadly than the US,” he added. “And so I think the civilizational risk is decreased the more we can increase the trust of Twitter as a public platform and so I do think this will be somewhat painful.”

What Musk means by freedom of speech is fuzzier, of course. He was directly pressed in the TED interview on what his self-proclaimed “absolutist” free speech position would mean for content moderation on Twitter — with the questioner asking a version of whether it might, for example, mean hateful tweets must flow.

“Obviously Twitter or any forum is bound by the laws of any country it operates in so obviously there are some limitations on free speech in the US and of course Twitter would have to abide by those rules,” Musk responded, sounding rather more measured than his ebullient, shitposting Twitter persona typically comes across (which neatly illustrates the tonal dilemma of digital vs in person speech; which is to say that something said in person, with all the associated vocal emotion, body language and physical human presence, may sound very different when texted and amplified to (potentially) a global audience of millions+ by algorithms that don’t understand human nuance and are great at trampling on context).

“In my view, Twitter should match the laws of the country and really there’s an obligation to do that,” he also said, before circling back to what appears to be his particular beef vis-a-vis the topic of social media ‘censorship’: Invisible algorithmic amplification and/or shadowbanning — aka, proprietary AIs that affect freedom of reach and are not open to external review.

On this topic he may actually find fellow feeling with regulators and lawmakers in Europe — given that the just-agreed DSA requires VLOPs to provide users with “clear, accessible and easily comprehensible” information on the “main parameters” of content ranking systems (the regulation refers to these as “recommender systems”).

The legislation also requires VLOPs to give users some controls over how these ranking and sifting AIs function — in order that users can change the outputs they see, and choose not to receive a content feed that’s based on profiling, for example.
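
As a purely hypothetical illustration of the kind of control the DSA envisages (every name and the toy ranking logic below are invented, not drawn from the regulation’s text or any platform’s code), a feed API might expose a simple profiling toggle:

```python
from datetime import datetime, timedelta

def relevance(post: dict, interests: set) -> int:
    # Toy "profiling" signal: overlap between post topics and user interests.
    return len(set(post["topics"]) & interests)

def build_feed(posts: list, interests: set, use_profiling: bool) -> list:
    if use_profiling:
        # Profiled feed: rank by the (toy) relevance model.
        return sorted(posts, key=lambda p: relevance(p, interests), reverse=True)
    # Non-profiled alternative: plain reverse-chronological order.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

now = datetime.now()
posts = [
    {"id": 1, "topics": ["tech"], "created_at": now - timedelta(hours=2)},
    {"id": 2, "topics": ["politics"], "created_at": now - timedelta(hours=1)},
]
print([p["id"] for p in build_feed(posts, {"tech"}, use_profiling=True)])   # [1, 2]
print([p["id"] for p in build_feed(posts, {"tech"}, use_profiling=False)])  # [2, 1]
```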

And Musk’s sketched vision of ‘putting Twitter’s AIs on Github’ for nerds to tinker with is, kinda, a tech head’s version of that.

“Having it be unclear who’s making what changes to where; having tweets mysteriously be promoted and demoted with no insight into what’s going on; having a blackbox algorithm promote some things and not other things. I think this can be quite dangerous,” he said in the TED interview, hinting that his time at Twitter might pay rather more attention to algorithmic content-disseminating apparatus — how the platform does or does not create ‘freedom of reach’ — than excessively concerning himself with maximizing the extremis of expression that can be contained within a single tweet.

So there are, on the surface, some striking — and perhaps surprising — similarities of thought in Musk’s stated concerns and EU lawmakers’ focus in the DSA.

He did also say he would probably want to set guidance that would prefer to keep speech up than take it down when expression falls in a grey area between legal and illegal speech — which is where the danger lies, potentially setting up a dynamic that could see Musk give a free pass to plenty of horribly abusive trolling, damaging conspiracy theories and outright disinformation. And that, in turn, could put him back on a collision course with European regulators.

But he also sounded fairly thoughtful on this — suggesting dialling back algorithmic amplification could be an appropriate measure in grey area situations where there is “a lot of controversy”. Which seems quite far from absolutist.

“I think we would want to err on the side of, if in doubt, let the speech exist. If it’s a grey area I would say let the tweet exist but obviously in a case where there’s perhaps a lot of controversy you would not want to necessarily promote that tweet,” he suggested. “I’m not saying that I have all the answers here but I do think that we want to be very reluctant to delete things and just be very cautious with permanent bans. Time outs I think are better than permanent bans.”

“It won’t be perfect but I think we just want to have the perception and reality that speech is as free as reasonably possible,” Musk also said, sketching a position that may amount to the equivalent of ‘freedom of speech is not the same as freedom of reach’ (so, sure, the hateful tweet can stay up but that doesn’t mean the AI will give it any legs). “A good sign as to whether there’s free speech is: is someone you don’t like allowed to say something you don’t like? And if that is the case then we have free speech,” he added.

On the content side, the EU’s incoming regulation mainly concerns itself with harmonizing procedures for tackling explicitly illegal speech — it avoids setting prescriptive obligations for fuzzier ‘perhaps harmful but not literally illegal’ speech, such as disinformation, as regional lawmakers are very wary of being accused of speech policing.

Instead, the bloc has decided to rely on other mechanisms for tackling harms like disinformation — such as a beefed up but still non-legally binding code of practice — where platform signatories agree to “obligations and accountability” but won’t face set penalties, beyond the risk of public naming and shaming if they fail to live up to their claims.

So if Musk does decide to let a tsunami of disinformation rip across Twitter in Europe the DSA may not, on paper, be able to do much about it.

Geese agrees this is a “more complicated” area for EU regulation to tackle. But she notes that VLOPs would still have to do the risk assessments, be subject to independent scrutiny and audits, and provide researchers with access to platform data in order that they can study impacts of disinformation — which creates an investigative surface area where it becomes increasingly hard for platforms not to respond constructively to studied instances of societal harm.

“My guess would be if Twitter goes bananas the interesting European people would leave,” she also suggests. “It would lose influence if it’s seen as a disinformation platform. But the risk is real.”

“Do you really think that his defence of free speech is including the dissemination of disinformation?” wonders Tang of Musk’s speech position vis-a-vis unfettered amplification of conspiracy theories etc. “I’m not sure on that. I think he’s been — what I have seen — pretty faint or not explicit about it, at least… On dissemination he’s not very clear.”

Tang predicts that if Musk goes ahead with his idea of open sourcing Twitter’s algorithms it could be helpful by showing that Twitter “like other platforms, is basically built on trying to get disagreement because this is what people react to” — giving more evidence to the case for reforming how content gets algorithmically disseminated. “In that sense — that part of the plan — I think is good, or at least interesting,” he suggests.

Another stated “top priority” for Musk — a pledge to kill off the Twitter spambots — looks positive in theory (no one likes spam and a defence of spam on speech grounds is tenuous) but the main question here is what exactly will be his method of execution? His reference to “authenticating all humans” looks worrying, as it would obviously be hugely damaging to Twitter as a platform for free expression if he means he will impose a real names policy and/or require identity authentication simply to have an account.

But, discussing this, Bernal wonders if Musk might not have a more tech-focused feature in mind for slaying spam that could also lead to a positive outcome. “Does he mean real names and verified information, or does he mean using AI to detect bot-like behaviour? If he means real names, he’ll be massively damaging Twitter, without even realising what he’s doing. If he means using AI to detect bots, it could actually be a good thing,” he suggests.
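
To illustrate the second interpretation, here is a toy sketch of behaviour-based bot scoring. Every signal, weight and threshold below is invented for the example; it is not Twitter’s detection system, and a production system would use trained models rather than hand-tuned rules:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_hour: float
    account_age_days: int
    duplicate_content_ratio: float  # share of near-identical posts

def bot_score(a: Account) -> float:
    """Crude weighted score; higher means more bot-like behaviour."""
    score = 0.0
    if a.tweets_per_hour > 20:          # implausibly high posting rate
        score += 0.4
    if a.account_age_days < 7:          # brand-new account
        score += 0.3
    score += 0.3 * a.duplicate_content_ratio  # copy-paste spamming
    return min(score, 1.0)

suspect = Account(tweets_per_hour=60, account_age_days=2, duplicate_content_ratio=0.9)
print(bot_score(suspect))  # ~0.97 -> likely flagged for review
```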

The suspicion has long been that Twitter has never really wanted to kill off the spambots. Because identifying and purging all those fake accounts would decimate its user numbers and destroy shareholder value. But if Musk takes the company private and literally doesn’t care about Twitter’s economics he may actually be in a position to hit the spambot kill switch.

If that’s his plan, Musk would, once again, be closer to the EU’s vision of platform regulation than he might imagine: The bloc has been trying to get platforms to agree to identify bots as part of a strategy to tackle disinformation since its first tentative Code of Practice back in 2018.

Dorsey talked around the topic. How funny would it be if it takes Musk buying Twitter to actually get that done?

But what about an edit button? Every (sane) person thinks that’s a terrible idea. But Musk has repeatedly said it’s coming if he succeeds in owning Twitter.

Asked about it at TED earlier this month — and specifically about the risk of an edit button creating a new vector for disinformation by letting people maliciously change the meaning of their tweets after the fact — Musk managed to sound surprisingly measured and thoughtful there too: “You’d only have the edit capability for a short period of time — and probably the thing to do upon re-edit would be to zero out all retweets and favorites. I’m open to ideas though.”
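
Read literally, that suggestion maps to some fairly simple mechanics: edits allowed only within a short window, and a re-edit resetting the tweet’s engagement so it cannot trade on amplification earned by the original wording. The class, field names and 30-minute window in this sketch are assumptions for illustration:

```python
import time

EDIT_WINDOW_SECONDS = 30 * 60  # assumed "short period of time"

class Tweet:
    def __init__(self, text: str):
        self.text = text
        self.created_at = time.time()
        self.retweets = 0
        self.favorites = 0

    def edit(self, new_text: str) -> bool:
        if time.time() - self.created_at > EDIT_WINDOW_SECONDS:
            return False  # window closed; edit rejected
        self.text = new_text
        # Zero out engagement so an edited tweet can't hijack the
        # amplification its original wording earned.
        self.retweets = 0
        self.favorites = 0
        return True

t = Tweet("original wording")
t.retweets = 500
print(t.edit("fixed typo"), t.retweets)  # True 0
```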

So, well, maybe Musk was trolling everyone about being “absolutist” all along.

In the latest move by Russia to censor Western Internet services since it started a war by invading Ukraine, Google has confirmed that Russians are having problems accessing its news aggregator service, Google News, in the country.

This follows an earlier report by the Interfax news agency which said the service had been blocked by Russia’s internet censor, Roskomnadzor.

“We’ve confirmed that some people are having difficulty accessing the Google News app and website in Russia and that this is not due to any technical issues on our end,” a Google spokesperson told us, tacitly confirming the news service is being affected by restrictions. (A contact inside Russia also told us the service has been blocked.)

“We’ve worked hard to keep information services like News accessible to people in Russia for as long as possible,” the search giant added. 

Last week Roskomnadzor threatened Google — claiming that YouTube ads were being used to distribute “information attacks” which it said threatened Russian citizens, accusing the company of being engaged in acts “of a terrorist nature” by letting the platform be used in this way.

That threat looked like a precursor to Russia blocking YouTube. However, at the time of writing, the video sharing platform does not appear to have been targeted for technical restriction.

We reached out to YouTube for confirmation of the status of the service in Russia. The platform had not responded at press time — but a Google spokesman suggested the company has not yet seen anything similar affecting YouTube. (Our contact in Russia also told us YouTube is still accessible for them.)

In recent weeks, as the war in Ukraine has ground on, the Kremlin has been tightening local speech censorship — such as by passing a law that carries a penalty of up to 15 years in prison for anyone found spreading “false” information about the Russian military.

This Kremlin crackdown on freedom of expression is making it hugely risky for Russian companies to operate in the news information distribution space — unless they too apply Putin’s censorship of choice to content.

That makes it even more important for Russians to be able to access outside sources of news that aren’t being filtered by the regime. However, access to major Western Internet platforms is also being inexorably closed down inside Russia — as technical restrictions are applied to more foreign services.

In recent weeks, Russia has blocked several Western social media platforms following its invasion of Ukraine — shutting down access to Facebook and Instagram.

Roskomnadzor had accused parent entity Meta of censoring Russian state-affiliated media outlets on Facebook and making a policy change across its social platforms which the regime claimed allowed hate speech that threatens Russian citizens. (Meta denied that — saying it was simply allowing users in Ukraine to express their strength of feeling about the war.)

Twitter has also faced restrictions since Russia invaded Ukraine.

VPNs are also being targeted for blocks, meanwhile, in a bid to prevent Russians from circumventing service censorship.

Local internet giant Yandex, meanwhile, which runs a similar news aggregator product to Google News — and has sought to operate from inside Russia under a claim of neutrality, even as it argues it must obey local laws — confirmed last week that it plans to sell its media arm and get out of the news ranking business, as we reported earlier.

Yandex’s news aggregator operates under Kremlin rules which control media licensing in the country and thereby determine the list of sources that such services can link to — allowing the regime to shape the underlying narrative that’s then algorithmically amplified.

This is why Yandex has faced criticism for helping to amplify Putin’s propaganda against Ukraine — which contributed to a decision by the EU last week to sanction a key company exec.

Per our sources, one potential front-runner buyer of Yandex News and its blogging and content recommender platform, Zen, is the Russian social media giant VK, which is owned by Mail.ru — a local internet giant that’s reported to have close ties to the Kremlin.

Any prospect that Yandex’s news aggregator might be cut free of Putin’s propaganda sphere — say if it were sold to a foreign buyer — looks all but impossible since the Kremlin has veto powers over key corporate decisions (since a 2019 Yandex corporate restructuring).

Whoever ends up owning the product, with Google News now being restricted inside Russia, the rival Yandex news aggregator will face even less competition for Russian eyeballs — making its AI-driven curation of Kremlin propaganda, and its shaping of the narrative around the war in Ukraine, even more influential on public opinion.

Google ads policy change

In a further Ukraine war development linked to Google, Reuters reported yesterday that parent entity Alphabet has tweaked its ads policy — and will no longer allow ads to be served via its network (and across its web properties, including YouTube) alongside content that seeks to exploit, dismiss or condone the war in Ukraine.

Google confirmed a change without offering much detail — and initially just reiterating its earlier statement:

“We can confirm that we’re taking additional steps to clarify, and in some instances expand our monetization guidelines as they relate to the war in Ukraine. This builds on our current restrictions on Russian state-funded media, as well as our ongoing enforcement against content that incites violence or denies the occurrence of tragic events.”

Reuters’ report also cited an email to publishers it had reviewed in which Google gave an example of the policy clarification — saying ads would not run alongside “claims that imply victims are responsible for their own tragedy or similar instances of victim blaming, such as claims that Ukraine is committing genocide or deliberately attacking its own citizens”.

Existing Google ads & monetization policies already place restrictions that are intended to prevent ads from running alongside dangerous or harmful content — such as its “Dangerous or derogatory content” policy, which strictly prohibits:

  • Content that incites violence and disparages or promotes hate against a group of people.
  • Content that denies the existence of tragic events or accuses victims of a tragedy of being crisis actors.

According to Google, the latest update to the ads policy will help clarify and, in some instances, expand existing enforcement on content related to the Ukraine conflict.

It further noted that it has declared a global “sensitive event” for ads — which blocks ads that relate to the conflict and seek to take advantage of the situation.

Given Roskomnadzor’s recent threat to Google was centered on YouTube ads, the timing of this policy clarification and enforcement expansion looks interesting.

Google may be hoping to appease Russia (and avoid a YouTube block) by limiting how ads can be targeted around the Ukraine war — while offsetting the risk of any critical blowback, i.e. if it were to be accused of bowing to a Kremlin censorship mandate, by selectively emphasizing an example that runs counter to Russian propaganda (videos featuring victim blaming of Ukrainians).

It’s clear that operating in and around Russia is increasingly challenging for Western platforms even when they haven’t been officially blocked by Russia.

On the ad sales front, Google recently announced it was pausing its own ad sales inside Russia, for example, in another step to limit exposure to the market — although it initially continued to allow Russian advertisers to use its tools to serve ads outside Russia. Those ad sales were also subsequently suspended when Google was forced to pause billing for its Play mobile app store in Russia and payment-based YouTube services — a development it blamed on Western sanctions on Russian banks.

Earlier this month, the European Union also applied expansive sanctions to Russian state-affiliated media channels RT and Sputnik, legally banning distribution of their content — including online.

That led to platforms like YouTube implementing blocks on the channels — initially only EU geoblocks, but later expanded by Google to a global block on Russian state-affiliated media on YouTube. (However, Google does not appear to have globally blocked RT and Sputnik’s apps on its Play Store, choosing instead to limit their access only in Europe.)

Russia’s Internet giant Yandex has told investors it’s exploring “strategic options” for its media products — including a potential sale of its news aggregator, Yandex News, and a user-generated-content-recommendation and blogging “infotainment” platform, called Zen.

The disclosure confirms our reporting earlier this week — when sources told us Yandex is in discussions to sell Yandex News and Zen.

Our sources suggested the move is linked to the risks posed by tighter regulations on freedom of expression by the Russian state since it went to war in Ukraine, including a new law which threatens lengthy jail sentences for anyone spreading “false” information about the Russian military (such as by referring to the “war” in Ukraine, rather than using the Kremlin’s preferred phrasing of “special military operation”).

In a statement to its investors today, Yandex writes that it is “exploring different strategic options, including divestment, for its news aggregation service and infotainment platform Zen”.

“The company intends to focus on developing its other technology-related businesses and products (including search, advertising, self-driving and cloud) and transactional services (including ride-hailing, e-commerce, video/audio and streaming), among others,” it added.

A spokesperson for Yandex confirmed it is in talks to divest News and Zen.

“We confirm that the company is exploring different strategic options, including divestment, for the news aggregation service and infotainment platform Zen,” they said.

The company has not made public comments about potential buyers for the media products but sources close to the discussions told us earlier that Russian social media giant, VK, is a leading contender.

In forward-looking statements to investors, Yandex suggested the divestment process is “at an early stage”, also cautioning that it “can provide no assurance that it will be successful in identifying a buyer, negotiating acceptable terms or closing a transaction”.

The Netherlands-registered Russian company halted trading on February 25, when its market cap stood at $6.8 billion.

In Russia’s latest swipe at foreign social media giants since it started a land war in Europe by invading Ukraine late last month, the country’s internet censor has fired a warning shot at Google over what it describes as anti-Russian “information attacks” which it claims are being spread via YouTube — accusing the U.S. tech giant of being engaged in acts “of a terrorist nature” by allowing ads on the video-sharing platform to be used to threaten Russian citizens.

In a statement posted on its website today, Roskomnadzor claims YouTube has been serving targeted ads that call for people to disable railway links between Russia and Belarus.

“The actions of the YouTube administration are of a terrorist nature and threaten the life and health of Russian citizens,” the regulator wrote [translated from Russian with machine translation].

“The spread of such appeals clearly demonstrates the anti-Russian position of the American company Google LLC,” it added.

The regulator also warned Google to stop distributing “anti-Russian videos as soon as possible”.

Its statement goes on to accuse U.S. IT companies in general, and tech giants Google and Meta (Facebook’s owner) in particular, of choosing a “path of confrontation” with Russia by launching a targeted campaign of “information attacks” that it says are intended to “discredit the Russian Armed Forces, the media, public figures and the state as a whole”.

“Similar actions by Meta Platforms Inc. and Google LLC not only violate Russian law but also contradict generally accepted norms of morality,” Roskomnadzor added.

YouTube could not immediately be reached for comment on the warning from Roskomnadzor.

The direct warning to Google from the state internet censor could be a precursor to Russia blocking access to YouTube.

In recent days, Facebook and Instagram have both been blocked by Roskomnadzor — as the Kremlin has sought to tighten its grip on the digital information sphere in parallel with its war in Ukraine.

Facebook and Instagram were blocked after Meta said it was relaxing its hate speech policy to allow users in certain regions to post certain kinds of death threats aimed at Russia — which Meta global affairs president, Nick Clegg, defended as a temporary change he said was designed to protect “people’s rights to speech as an expression of self-defense”.

In recent weeks, Roskomnadzor has also put restrictions on Twitter.

But YouTube has escaped any major censorship since the Ukraine invasion, despite the company itself applying some limitations to its service in Russia — such as suspending payment services for users (it took that action as a result of Western sanctions against Russian banks).

In one signal that this could be about to change, a report in the Russian press today suggests a block is looming: RIA Novosti cites sources close to Roskomnadzor who say a block of YouTube is “most likely” by the end of next week — and could come as soon as today.

In what may be another small indicator of the cyber war that’s now fiercely raging between Russia and Ukraine, Roskomnadzor’s website was noticeably slow to load as we were filing this report today. It also appears to have introduced a CAPTCHA request — suggesting it may be trying to prevent and/or mitigate DDoS attacks.

The UK is speeding up the application of powers that could see tech CEOs sent to prison if their businesses fail to comply with incoming safety-focused Internet content legislation, the government confirmed today.

The latest revisions to the draft legislation include a radically reduced timeframe for being able to apply criminal liability powers against senior tech execs who fail to cooperate with information requests from the regulator — down to just two months after the legislation gets passed. (And since the government enjoys a large majority in the House of Commons, the incoming Online Safety regulation — already years in the making — could become law this year.)

While the draft bill, which was published in May 2021, has already seen a string of revisions — with more being announced today — the core plan has remained fairly constant: The government is introducing a dedicated framework to control how social media companies and other content-focused platforms must respond to certain types of problem content (not only illegal content, in some cases). That framework will include a regime of Codes of Practice overseen by the media and comms regulator, Ofcom, which takes on a vastly expanded role and gains hefty powers to fine rule-breakers up to 10% of their global annual turnover.

As the bill’s name suggests, the government’s focus is on a very broad ‘de-risking’ of Internet platforms — which means the bill aims to tackle not just explicitly illegal stuff (such as terrorism or CSAM) but aims to set rules for how the largest Internet platforms need to approach ‘legal but harmful’ online content, such as trolling.

Child safety campaigners especially have been pressing for years for tech firms to be forced to purge toxic content.

The government first gradually, then quickly, embraced this populist cause — saying its aim for the bill is to make the UK the safest place in the world to go online, and loudly banging a child protection drum.

But it has also conceded that there are huge challenges to effective regulation of such a sprawling arena.

The revised draft Bill will be introduced in parliament Thursday — kicking off wider, cross-party debate of what remains a controversial yet populist plan to introduce a ‘duty of care’ on social media companies and other user-generated-content-carrying platforms. Albeit one which enjoys broad (but not universal) support among UK lawmakers.

Commenting on the introduction of the bill to parliament in a statement, digital secretary Nadine Dorries said:

“The internet has transformed our lives for the better. It’s connected us and empowered us. But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms. Instead they have been left to mark their own homework.

“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.

“Since taking on the job I have listened to people in politics, wider society and industry and strengthened the Bill, so that we can achieve our central aim: to make the UK the safest place to go online.”

It’s fair to say there is broad backing inside the UK parliament for cracking the whip over tech platforms when it comes to content rules (MPs surely haven’t forgotten how Facebook’s founder snubbed earlier content questions).

Even so, there is diversity of opinion and dispute on the detail of how best to do that. So it will, at least, be interesting to see how parliamentarians respond to the draft as it goes through the legislative scrutiny process in the coming months.

Plenty in and around the UK’s Online Safety proposal still remains unclear, though — not least how well (or poorly) the regime will work in practice. And what its multifaceted requirements will mean for in-scope digital businesses, large and small.

The detail of what exactly will fall into the fuzzier ‘legal but harmful’ content bucket, for example, will be set out in secondary legislation to be agreed by MPs. That is another new stipulation the government announced today, arguing it will avoid the risk of tech giants becoming de facto speech police, which was one early criticism of the plan.

In what looks like a bid to play down further potential for controversy, the government’s press release couches the aims of the bill in very vanilla terms, saying it is intended to ensure platforms “uphold their stated terms and conditions” (and who could argue with that?), as well as arguing these are merely “balanced and proportionate” measures (and powers?) that will finally force tech giants to sit up, take notice and effectively tackle illegal and abusive speech. (Or else, well, their CEOs might find themselves banged up in jail…!)

Unsurprisingly, digital rights groups have been quick to seize on this implicitly contradictory messaging, reiterating warnings that the legislation represents a massively chilling attack on freedom of expression. The Open Rights Group (ORG) wasted no time in likening the threat of prison for social media execs to powers being exercised by Vladimir Putin in Russia.

“Powers to imprison social media executives should be compared with Putin’s similar threats a matter of weeks ago,” said ORG’s executive director, Jim Killock, in a statement responding to DCMS’ latest revisions.

“The fact that the Bill keeps changing its content after four years of debate should tell everyone that it is a mess, and likely to be a bitter disappointment in practice,” he added.

“The Bill still contains powers for Ministers to decide what legal content platforms must try to remove. Parliamentary rubber stamps for Ministerial say-so’s will still compromise the independence of the regulator. It would mean state sanctioned censorship of legal content.”

The government’s response to criticism of the potential impact on freedom of speech includes touting requirements in the bill for social media firms to “protect journalism” and “democratic political debate”, as its press release puts it — although it’s rather less clear how (or whether) platforms will/can actually do that.

Instead DCMS reiterates that “news content” (hmm, does that cover anyone online who claims to be a journalist?) has been given a carve out — emphasizing that this particular definition-stretching category is “completely exempt from any regulation under the bill”. (So, well, ‘compliance’ already sounds hella messy*.)

On the headline-grabbing criminal liability risk for senior tech execs, likely a populist measure the government hopes will drum up public support and drown out objecting expert voices like ORG’s, Dorries had already signalled during parliamentary committee hearings last fall that she wanted to accelerate the application of criminal liability powers. (Memorably, she wasted no time brandishing the threat of faster jail time at Meta’s senior execs, saying they should focus on safety and forget about the metaverse.)

The original draft of the bill, which predated Dorries’ tenure heading up the digital brief, had deferred the power for at least two years. But that timeframe was criticized by child safety campaigners, who warned that unless the law had real teeth from the start, platforms would simply ignore it. (And a pressing risk of jail time for senior tech executives, such as Meta’s Nick Clegg, a former deputy PM of the UK, could certainly concentrate certain C-suite minds on compliance.)

The speedier jail time power is by no means the first substantial revision of the draft bill, either. As Killock points out there has been a whole banquet of ‘revisions’ at this point — manifested, in recent weeks, as the Department for Digital, Culture, Media and Sport (DCMS) putting out a running drip-feed of announcements that it’s further expanding the scope of the bill and amping up its power.

This has included bringing scam ads and porn websites into scope (in the latter case to force them to use age verification technologies); expanding the list of criminal content added to the face of the bill and introducing new criminal offences — including cyberflashing; and setting out measures to tackle anonymous trolling by leaning on platforms to squeeze freedom of reach.

Two parliamentary committees which scrutinized the original proposal last year went on to warn of major flaws and urge a series of changes, recommendations that DCMS says it has taken on board in making these revisions.

There are yet more additions today, including new information-related offences being added to the bill to make in-scope companies’ senior managers criminally liable for destroying evidence; for failing to attend, or providing false information in, interviews with Ofcom; and for obstructing the regulator when it enters company offices.

DCMS notes that it is committing these offences that could see senior execs of major platforms sentenced to up to two years in prison or fined.

Another addition, related to what the government describes as “proactive technology” — aka tools for content moderation, user profiling and behavior identification that are intended to “protect users” — arrives in the form of extra provisions being added to allow Ofcom to “set expectations for the use of these proactive technologies in codes of practice and force companies to use better and more effective tools, should this be necessary”.

“Companies will need to demonstrate they are using the right tools to address harms, they are transparent, and any technologies they develop meet standards of accuracy and effectiveness required by the regulator,” it adds, also stipulating that Ofcom will not be able to recommend these tools are applied on private messaging or legal but harmful content.
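For a sense of what “standards of accuracy and effectiveness” could mean in practice, here is a minimal sketch, using entirely hypothetical data and metrics (the bill itself does not prescribe any), of how a proactive moderation tool might be scored against human-labelled examples:

```python
# Hypothetical sketch: scoring a "proactive technology" moderation tool
# against human-labelled ground truth. Purely illustrative; neither the
# bill nor Ofcom has specified which metrics (if any) would be used.

def precision_recall(flags: list[bool], truth: list[bool]) -> tuple[float, float]:
    """flags[i] = tool flagged item i as harmful; truth[i] = item i is harmful."""
    true_pos = sum(f and t for f, t in zip(flags, truth))
    flagged = sum(flags)
    harmful = sum(truth)
    precision = true_pos / flagged if flagged else 0.0  # share of flags that were right
    recall = true_pos / harmful if harmful else 0.0     # share of harm that was caught
    return precision, recall

# Toy run: the tool flags three posts, two genuinely harmful, and misses one.
flags = [True, True, True, False, False]
truth = [True, True, False, True, False]
print(precision_recall(flags, truth))  # (0.666..., 0.666...)
```

A regulator weighing “effectiveness” would presumably care about both numbers: a tool that flags everything catches all harm but fails on precision, while an overly cautious one does the reverse.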

Platforms will also now be required to report any CSAM they detect directly to the National Crime Agency, in another change that replaces an existing voluntary reporting regime and which DCMS says “reflects the government’s commitment to tackling this horrific crime”.

“Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content,” it also specifies, adding: “In-scope companies will need to demonstrate existing reporting obligations outside of the UK to be exempt from this requirement, which will avoid duplication of company’s efforts.”

That the government has made so many revisions to what it likes to brand “world-leading” legislation, even before formal parliamentary debate kicks off, suggests accusations that the proposal is both overblown and half-baked will be hard to shake.

MPs may also spot a lack of coherence dressed up as populist conviction, and see an opportunity to grandstand and press for their own pet hates to be rolled into the mix (as one former minister of state has warned), with the risk that an already lumpy bill ends up even more unwieldy and laden with impossible asks.

*A line in DCMS’ own press release appears to concede at least one looming mess — and/or the need for even more revisions/measures to be added — noting: “Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.”

Twitter’s service continues to be partially accessible in Russia, although the social networking platform confirmed today that it is aware of reports of users in the country having “increasing difficulty” accessing its service, adding that it is investigating and working to restore full access.

“We’re aware of reports that people are increasingly having difficulty accessing Twitter in Russia. We’re investigating and working to fully restore access to our service,” a Twitter spokesperson told us.

A source inside Russia told us they’ve been unable to access Twitter’s website since Saturday — but they also said that its mobile app still works.

On Friday reports suggested Twitter’s service had been blocked by Russia’s communications regulator, Roskomnadzor, as Putin’s regime continues to clamp down on the free flow of information in the wake of the invasion of Ukraine.

Although, at the time, Twitter said it had not seen any significant change versus the throttling that has been affecting its service since the invasion of Ukraine began and some Russians took to the streets to protest the war.

Twitter’s line has now evolved into a tacit confirmation of a partial block.

There’s no doubt Russia is trying to tighten its grip on the information sphere around the war in Ukraine.

Also Friday, Russia’s parliament passed a draconian new law targeting independent journalists, with the threat of up to 15 years in jail for reporting ‘fake’ information about the military.

The same day, the Russian government announced that it would block access to Facebook, a move that was seized upon by Facebook/Meta president Nick Clegg to couch the social network as, by contrast, a provider of “reliable information”.

Yet Clegg’s irony-free claim came just a few days after Meta disclosed it had discovered Russian propaganda spreading on its platform. Last Monday the company said it had taken down a network of circa 40 accounts, Pages and Groups on Facebook and Instagram, run out of Russia, which had been targeting people in Ukraine with disinformation, just the latest example of state-backed “coordinated inauthentic behavior” being hosted on Facebook. So of course it’s never so black and white.

(See also: The role Facebook’s ad targeting platform played in enabling the massive spread of Kremlin-funded election interference targeted at the US 2016 election, for an especially notorious example.)

Following the invasion of Ukraine, Facebook has also been used to amplify public calls for peace within Russia — such as the local IT workers who used the platform to spread an anti-war petition which went on to garner thousands of signatures from the country’s tech community.

Our source inside Russia told us they are still able to access Facebook as of today — but again via its app, not the web.

Russia’s ability to fully block access to Western social media platforms looks doubtful, short of more drastic action: technically disconnecting the Internet in Russia from the global Internet and blocking access to VPNs, or making it illegal for Russians to access anything other than .ru domains. Otherwise, workarounds to blocks on web domains remain available, such as mobile apps, VPNs or even Tor.

Russia’s largely unsuccessful attempts a few years ago to block the Telegram app underline some of the technical challenges of trying to block mobile apps.

However, degrading Russians’ ability to easily access outside sources of information, while flooding the airwaves with state-controlled propaganda, may have much the same effect for all too many citizens.

The draconian content legislation Russia’s parliament agreed Friday also led another social network — TikTok — to quickly suspend the ability of users in the country to post new content, citing concerns for its employees and users. So Putin’s regime is using multiple levers in a bid to control more of the narrative online.

It’s also worth noting that, since 2015, Russia has been working on a project to build a national internet, suggesting it has an ambition to be able to fully control the digital information sphere, even if it has not been able to pull that off yet.

The war in Ukraine could supercharge Russia’s effort to create self-sufficient digital “segments”, as Putin put it back in 2019 when discussing the risk of the West denying Russia access to the global Internet (although internal tech development is also likely to be hit hard by Western sanctions).

Fast forward to 2022 and it’s Putin who wants to deny Russians access to Western segments of the web he cannot completely control, as local censorship efforts are dialled up and put on a war footing.

In recent days, Europe has also responded to Putin’s war of aggression in Ukraine by evolving its own response to Russian propaganda — with EU lawmakers agreeing an unprecedented ban on the distribution of Kremlin-backed state media outlets Russia Today (RT) and Sputnik last week.

The ban covers online platforms, like Twitter and Facebook, as well as traditional broadcast media (such as satellite).

The EU has said the prohibition on RT and Sputnik will last as long as Russia continues the war in Ukraine — also stipulating that it will not be lifted until Putin stops spreading propaganda against the EU and its Member States.


Twitter has claimed it’s complying, albeit reluctantly, with an EU-wide ban on Russia Today (RT) and Sputnik which came into force yesterday as part of the package of sanctions imposed by the bloc on Russia following the invasion of Ukraine.

In a statement the social network has been circulating in response to press requests this week, a Twitter spokesperson said:

“The European Union (EU) sanctions will legally require us to withhold certain content in EU member states, and we intend to comply. Our global approach outside of the EU will continue to focus on de-amplifying this type of state-affiliated media content across our service and providing important context through our labels. We continue to advocate for a free and open internet, particularly in times of crisis.”

However it’s not clear how well Twitter is actually complying with the legal order banning the distribution of RT’s content in the EU.

While some users around Europe have reported encountering an “account withheld” notification if they try to access the Kremlin-linked media outlet’s verified Twitter account since the RT ban came into force (see first screenshot below), TechCrunch found it is still possible to view RT’s account from within the EU — without needing to use a VPN to circumvent the geoblocks (see second screengrab).

Testing Twitter’s implementation in the EU, here’s what a user in France encountered when attempting to browse to RT’s Verified Twitter account:

What a Twitter user in France saw when trying to access RT’s verified Twitter account (Screengrab: TechCrunch)

But testing the exact same thing today from Spain we found no block on accessing RT’s account and could also view individual tweets — including the below example which the state-affiliated media entity tweeted out this morning — despite Twitter’s own “account withheld” notice, visible elsewhere in the EU, listing Spain as one of the countries covered by its implementation…

A Twitter user in Spain still able to access RT’s account (Screengrab: TechCrunch)

At the time of writing our tester in Spain was still able to access RT from the Twitter mobile app and via the mobile web.

So Twitter appears to be breaching the EU sanction in this instance.

It’s not clear how widespread (or singular) a problem this might be with Twitter’s implementation in Spain, or indeed across the EU (which has 27 Member States).

Further ‘leakage’ can’t be ruled out, perhaps especially this soon after the sanction came into force. One interpretation of what’s going on here is that Twitter is still ironing out issues with its compliance. (Update: Another, as this reader pointed out, is that the blocks don’t rely on location but on the user’s stated country, so can be circumvented simply by changing the country in Twitter’s settings.)
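To illustrate why a settings-based check would leak, here is a minimal sketch, assuming entirely hypothetical names and logic (nothing here reflects Twitter’s actual code), contrasting a withhold keyed on the user’s self-declared country with one keyed on the country resolved from the request IP:

```python
# Hypothetical sketch of two ways a platform might gate withheld content.
# Illustrative only: names and logic are assumptions, not Twitter's code.

WITHHELD_IN = {"ES", "FR", "DE", "IT"}  # e.g. EU member states covered by a ban

def withheld_by_profile_country(profile_country: str) -> bool:
    # Trusts whatever country the user has set in their account settings,
    # so simply changing that setting is enough to bypass the block.
    return profile_country in WITHHELD_IN

def withheld_by_ip_country(ip_country: str) -> bool:
    # Keys off the country resolved from the request's IP address instead,
    # which is harder to evade without a VPN or proxy.
    return ip_country in WITHHELD_IN

# A user physically in Spain whose profile country is set to "US":
print(withheld_by_profile_country("US"))  # False -> content still visible (leak)
print(withheld_by_ip_country("ES"))       # True  -> content withheld
```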

It is still possible to access RT’s account on Twitter in the UK, too — but that’s by design as the company has elected for the narrowest possible application in order to comply with the EU ban, and the UK is no longer a member of the bloc.

Nor has the UK issued an equivalent domestic order prohibiting RT’s content from being distributed by online platforms, at least not yet; its media regulator, Ofcom, is currently investigating whether RT has breached the country’s broadcast code. (Ofcom is also gearing up for a vastly expanded role overseeing incoming internet content rules ahead of the Online Safety Bill becoming law, so it will certainly be getting involved in online content moderation decisions in future.)

The list of countries where Twitter says it’s geoblocking RT in the wake of the EU prohibition also does not include Iceland, Norway, Liechtenstein and Switzerland — which are in Europe, and in the European Free Trade Area, but are not in the EU.

So, again, Twitter is electing for the narrowest possible implementation of a pan-EU sanction.

That contrasts with Apple and Google — which earlier this week both announced they were blocking access to RT’s apps on their respective mobile stores, with Apple doing so in all international markets (except Russia itself).

Google didn’t go that far, but it did implement slightly broader geoblocking than Twitter, also blocking access to the apps in the aforementioned non-EU European countries (the UK, Iceland, Norway, Liechtenstein and Switzerland), as well as in EU countries and Ukraine.

On Monday, Microsoft also announced that it was removing RT news apps from its Windows app store, and purging RT and Sputnik content from its Microsoft Start platform (including MSN.com) — in order to “reduce the exposure of Russian state propaganda”, as it put it.

So Twitter’s narrow, EU-only block on the Kremlin-affiliated media outlet is starting to look like an outlier.

That said, the company’s core product is a real-time information service — which explains its general preference for labelling and contextualizing, rather than censoring (and earlier this week it announced extra labels for tweets linking to the Russian media outlets).

Moreover, as Twitter’s statement implies, its network can play an especially heightened role during times of crisis — so having a reluctance to degrade its core utility, even during a war, is understandable.

At the same time, the context in which RT is operating has inexorably shifted with Russia’s invasion of Ukraine. And it’s not clear whether Twitter has reassessed its policies in light of that.

We’re now witnessing a war of aggression by Russia against a sovereign country in Europe — in which foreign-targeted propaganda is playing a key strategic role.

This is why EU leaders very quickly decided that the invasion of Ukraine marked a red line for allowing the continued free flow of Putin’s propaganda through RT and Sputnik, the two main state-affiliated channels which are engaged in foreign information manipulation and clearly attached to his regime.

Nonetheless, Twitter prefers to stay as neutral as it (legally) can, it seems.

Asked about the issue our tester encountered in Spain (an EU country) Twitter declined to provide an explanation of why it is failing to block RT’s content — merely pointing back to its earlier statement when it said it “intend[s] to comply” with the EU sanction.

Pressed further, Twitter’s spokeswoman confirmed: “We have implemented the ban.”

Given that, it looks like the issue we’ve identified is an example of an imperfect implementation of the ban — presumably not an intentional fail by Twitter. And perhaps related to how little time has passed since the ban came into legal force (although the bloc’s president warned it was coming at the weekend).

Considering the company’s long-stated preference for as many tweets as possible to flow, it may also be the case that a flaky implementation of a legal prohibition implies a degree of ‘feature not bug’ to Twitter’s sloppy execution.

We did press Twitter for clarity on the bug. But despite repeated attempts to get straight answers regarding the issue we had flagged, none were forthcoming.

Instead Twitter’s spokeswoman sent us a link to its “country withheld content” (CWC) page — and a pointer to what she described as “the data/explanation from our latest transparency report about which countries are asking us to withhold content and re: what issues”, adding that she “hopes this helps give you a sense of scale”.

The chunk of text she highlighted (see base of post) indicates that there are now 20 countries where Twitter is restricting access to content. So — attempting to read between the lines — it may be trying to suggest that with so many different, country-specific mandates to comply with it’s struggling to execute perfectly.

And ahead of the bloc’s ban on the Kremlin-linked media coming into force yesterday, EU officials did signal that they were expecting a degree of progressivity to implementation of the sanction online, given the challenges of cutting off every last outlet where content might get distributed digitally. (Albeit, blocking verified accounts in specific regions really shouldn’t be that hard.)

Whatever the specific explanation for the issue we found with Twitter’s implementation of the EU’s ban, one thing is clear: It remains all too easy for it to shrug off another policy enforcement failure.

“We have now used CWC in 20 countries in response to legal demands: Argentina, Australia, Brazil, Canada, France, Germany, India, Ireland, Israel, Japan, Netherlands, New Zealand, Russia, Singapore, South Korea, Spain, Turkey, and the United Kingdom. During this reporting period, Twitter withheld content in Indonesia for the first time.”

In its latest strike against online content it doesn’t control, Russia is throttling Twitter. State agency Roskomnadzor said today it was taking the action in response to the social network not removing banned content, claiming it had identified more than 3,000 unlawful posts that have not been taken down, and warning it could implement a total block on the service.

However the comms regulator’s action to slow down all of Twitter’s mobile traffic, and that of 50% of desktop users in Russia, appeared to have briefly taken down Roskomnadzor’s own website earlier today.

Reports also circulated on social media that Russian government websites, including kremlin.ru, had been affected. At the time of writing these sites were accessible but earlier we were unable to access Roskomnadzor’s site.

The stand-off between the state agency and Twitter comes at a time when Russia is trying to clamp down on anti-corruption protestors who are supporters of the jailed opposition leader, Alexei Navalny — who has, in recent weeks, called for demonstrators to take to the streets to ramp up pressure on the regime.

Roskomnadzor’s statement makes no mention of the state’s push to censor political opposition, claiming only that the content it’s throttling Twitter for failing to delete relates to minors committing suicide, child pornography and drug use. Hence it also claims to be taking the action to “protect Russian citizens”. However, a draconian application of speech-chilling laws to try to silence political opposition is nothing new in Putin’s Russia.

The Russian regime has sought to get content it doesn’t like removed from foreign-based social media services a number of times in recent years, including — as now — resorting to technical means to limit access.


Most notoriously, back in 2018, an attempt by Russia to block access to the messaging service Telegram resulted in massive collateral damage to the local Internet as the block took down millions of (non-Telegram-related) IP addresses — disrupting those other services.

Also in 2018 Facebook-owned Instagram complied with a Russian request to remove content posted by Navalny — which earned it a shaming tweet from the now jailed politician.

Although now behind bars in Russia — Navalny was jailed in February, after Russia claimed he had violated the conditions of a suspended sentence — the prominent Putin critic has continued to use his official Twitter account as a megaphone to denounce corruption and draw attention to the injustice of his detention, following his attempted poisoning last year (which has been linked to Russia’s FSB).

Recent tweets from Navalny’s account include amplification of an investigation by the German newspaper Bild into RT DE, the German channel of Russian state-controlled media outlet Russia Today, which the newspaper accuses of espionage in Germany targeting Navalny and his associates (he was staying in a hospital in Berlin at the time, recovering from the attempted poisoning).

Slowing down access to Twitter is one way for Russia to try to put a lid on Navalny’s critical output on the platform, which also includes a recent retweet of a video claiming that Russian citizens’ taxes were used this winter by Putin and his cronies to fund yachts, whiskey and a Maldivian vacation.

Navalny’s account has also tweeted in recent hours to denounce his jailing by the Russian state following its attempt to poison him — saying: “This situation is called attempted murder”.

At the time of writing Twitter had not responded to requests for comment on Roskomnadzor’s action.

However last month, in a worrying development in India that’s also related to anti-government protests (in that case by farmers seeking to reverse moves to deregulate the market), Twitter caved in to pressure from the government, shuttering 500 accounts, including some linked to the protests.

It also agreed to reduce the visibility of certain protest hashtags.


The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members who was giving evidence to a UK House of Lords committee today which is running an enquiry into freedom of expression online. 

The FOB is currently considering whether to overturn Facebook’s ban on former US president Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.

The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.

Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review, opening up the prospect that the Trump ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.

Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.

“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.

“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?

“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”

“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”

To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeat breaches of Facebook’s community standards if you want to be a stickler for its rules.

Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.

Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”

But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.

Germany’s chancellor called Twitter’s ban on him “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech. While other lawmakers in Europe seized on the unilateral action — saying it underlined the need for proper democratic regulation of tech giants.

The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.

Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.

And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.

But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices Facebook asks of it (in the Trump case, either reversing the ban entirely or continuing it indefinitely).

The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)

How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.

“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations, making the customary reference to the long tail of societal disruption that flowed from Gutenberg inventing the printing press.

If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses it’s surely delighted with the level of beard stroking which Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking it if they can poke around its algorithmic black boxes.)

Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.

The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.

Rusbridger batted away concerns on that front, saying “we don’t feel we work for Facebook at all”. Though Board members are paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership. And the committee didn’t shy away from raising the payment point to query how genuinely independent its members can be.

“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”

“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.

Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).

He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.

Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.

“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.

Rusbridger said the Board went through an extensive training process which involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining their calls, saying that at that point the Board felt empowered to tell Facebook to leave.

“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be moment of push back and breaking away from the nest. And this was it.”

However, if your measure of independence is not having Facebook literally listening in on the Board’s calls, you do have to query how much Kool-Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog, including to the extra outsiders it allowed in to observe the set-up.

The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.

In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is it’s far too reluctant to remove toxic content (it only banned holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…

The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.

Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.

If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.

Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.

“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.

“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “Is that the signal that they’re sending back to Facebook — that’s it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, given them a fact to fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.

“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”

In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.

“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.

“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.

“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”

“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”

While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).

Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.

The Board’s current lack of technical expertise also raises wider questions about its function, and whether its first learned cohort might be played as useful idiots from Facebook’s self-interested perspective, helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.

If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)

“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”

“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”

Both witnesses raised another concern: that the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale, suggesting they’re too specific to generally inform AI-based moderation. Nor will they necessarily be able to be acted on by the staffed moderation system Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).

Despite that, the issue of Facebook’s vast scale versus the Board’s limited and Facebook-defined function (to fiddle at the margins of its content empire) was one overarching point that hung uneasily over the session without being properly grappled with.

“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.

Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.

“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.

“But, you know, these are early days.”