
Outrage fast-followed Facebook’s announcement yesterday that it was making good on its threat to block Australian users’ ability to share news on its platform.

The tech giant’s intentionally broad-brush — call it antisocial — implementation of content restrictions took down a swathe of non-news publishers’ Facebook pages, as well as silencing news outlets’, illustrating its planned dodge of the (future) law.

Facebook took the step to censor a bunch of pages as parliamentarians in Australia are debating a legislative proposal to force Facebook (and Google) to pay publishers for linking to their news content. In recent years the media industry in the country has successfully lobbied for a law to extract payment from the tech giants for monetizing news content when it’s reshared on their platforms — though the legislation is still being drafted.

Last month Google also threatened to close its search engine in Australia if the law isn’t amended. But it’s Facebook that screwed its courage to the sticking place and flipped the chaos switch first.

Last night Internet users in Australia took to Twitter to report scores of local Facebook pages being wiped clean of content — including hospitals, universities, unions, government departments and the Bureau of Meteorology, to name a few.


In the wake of Facebook’s unilateral censorship of all sorts of Facebook pages, parliamentarians in the country accused the tech giant of “an assault on a sovereign nation”.

The prime minister of Australia also said today that his government “would not be intimidated”.

Reached for comment, Facebook confirmed it has applied an intentionally broad definition of news to the content it's restricting — saying it has done so to reflect the lack of clear guidance in the law “as drafted”.

So it looks like the collateral damage of Facebook silencing scores of public information pages is at least partly a PR tactic to illustrate potential ‘consequences’ of lawmakers forcing it to pay to display certain types of content — i.e. to ‘encourage’ a rethink while there’s still time.

The tech giant did also say it would reverse restrictions on pages that are “inadvertently impacted”.

But it did not indicate whether it would be doing the leg work of checking its own homework there, or whether silenced pages must (somehow) petition it to be reinstated.

“The actions we’re taking are focused on restricting publishers and people in Australia from sharing or viewing Australian and international news content. As the law does not provide clear guidance on the definition of news content, we have taken a broad definition in order to respect the law as drafted. However, we will reverse any Pages that are inadvertently impacted,” a Facebook company spokesperson said in a statement.

It’s also not clear how many non-news pages have been affected by Facebook’s self-imposed content restrictions.

If the tech giant was hoping to kick off a wider debate about the merits of Australia’s (controversial) plan to make tech pay for news (including in its current guise, for links to news — not just snippets of content, as under the EU’s recent copyright reform expansion of neighbouring rights for news) — Facebook has certainly succeeded in grabbing eyeballs by blocking regional access to vast swathes of useful, factual information.

However Facebook’s blunt action has also attracted criticism that it’s putting business interests before human rights — given it’s shuttering users’ ability to find what might be vital information, such as from hospitals and government departments, in the middle of a pandemic. (Albeit, being accused of ignoring human rights is hardly a new look for Facebook.)

The Harvard professor Shoshana Zuboff’s academic critique of surveillance capitalism — including that it engages in propagating “epistemic chaos” for profit — has perhaps never felt quite so on the nose. (“We turned to Facebook in search of information. Instead we found lethal strategies of epistemic chaos for profit,” she wrote only last month.)

Facebook’s intentional over-flex has also underscored the vast power of its social monopoly — which will likely only strengthen calls for policymakers and antitrust regulators everywhere to grasp the nettle and rein in big tech. So its local lobbying effort may backfire on the global stage if it further sours public opinion against the scandal-hit company.

Facebook’s rush to censor may even encourage a proportion of its users to remember/discover that there’s a whole open Internet outside its walled garden — where they can freely access public information without having to log into Facebook’s ad-targeting platform (and be stripped of their privacy) first.

As others have noted, it’s also interesting to note how quickly Facebook can pull the content moderation trigger when it believes its bottom line is threatened. And a law to extract payment for sharing news content presents a clear threat.

Compare and contrast Facebook’s rush to silence information pages in Australia with its laid back approach to tackling outrage-inducing hate speech or violent conspiracy nonsense and it’s hard not to conclude that content moderation on (and by) Facebook is always viewed through the prism of Facebook’s global revenue growth goals. (Much like how the tech giant can here be seen in a court filing linking revenue to its self-reported ad metric tools.)

The idea for Capsule started with a tweet about reinventing social media.

A day later, cryptography researcher Nadim Kobeissi — best known for authoring the open source e2e encrypted desktop chat app Cryptocat (now discontinued) — had pulled in a pre-seed investment of $100,000 for his lightweight mesh-networked microservices concept, with support coming from angel investor and former Coinbase CTO Balaji Srinivasan, William J. Pulte and Wamda Capital.

The nascent startup has a post-money valuation on paper of $10M, according to Kobeissi, who is working on the prototype — hoping to launch an MVP of Capsule in March (as a web app), after which he intends to raise a seed round (targeting $1M-$1.5M) to build out a team and start developing mobile apps.

For now there’s nothing to see beyond Capsule’s landing page and a pitch deck (which he shared with TechCrunch for review). But Kobeissi says he was startled by the level of interest in the concept.

“I posted that tweet and the expectation that I had was that basically 60 people max would retweet it and then maybe I’ll set up a Kickstarter,” he tells us. Instead the tweet “just completely exploded” and he found himself raising $100k “in a single day” — with $50k paid in there and then.

“I’m not a startup guy. I’ve been running a business based on consulting and based on academic R&D services,” he continues. “But by the end of the day — last Sunday, eight days ago — I was running a Delaware corporation valued at $10M with $100k in pre-seed funding, which is insane. Completely insane.”

Capsule is just the latest contender for retooling Internet power structures by building infrastructure that radically decentralizes social platforms to make speech more resilient to corporate censorship and control.

The list of decentralized/p2p/federated protocols and standards already out there is very long — even while usage remains low. Extant examples include ActivityPub, Diaspora, Mastodon, p2p Matrix, Scuttlebutt, Solid and Urbit, to name a few.

Interest in the space has been rekindled in recent weeks after mainstream platforms like Facebook and Twitter took decisions to shut down US president Donald Trump’s access to their megaphones — a demonstration of private power that other political leaders have described as problematic.

Kobeissi also takes that view, while adding the caveat that he’s not “personally” concerned about Trump’s deplatforming. But he says he is concerned about giant private corporations having unilateral power to shape Internet speech — whether takedown decisions are being made by Twitter’s trust & safety lead or Amazon Web Services (which recently yanked the plug on right-wing social network Parler for failing to moderate violent views).

He also points to a lawsuit that’s been filed in US court seeking damages and injunctive relief from Apple for allowing Telegram, a messaging platform with 500M+ users, to be made available through its iOS App Store — “despite Apple’s knowledge that Telegram is being used to intimidate, threaten, and coerce members of the public” — raising concerns about “the odds of these efforts catching on”.

“That is kind of terrifying,” he suggests.

Capsule would seek to route around the risk of mass deplatforming via “easy to deploy” p2p microservices — starting with a forthcoming web app.

“When you deploy Capsule right now — I have a prototype that does almost nothing running — it’s basically one binary. And you get that binary and you deploy it and you run it, and that’s it. It sets up a server, it contacts Let’s Encrypt, it gets you a certificate, it uses SQLite for the database, which is a server-less database, all of the assets for the web server are within the binary,” he says, walking through the “really nice technical idea” which snagged $100k in pre-seed backing insanely fast.

“There are no other files — and then once you have it running, in that folder when you set up your capsule server, it’s just the Capsule program and a Capsule database which is a file. And that’s it. And that is so self-contained that it’s embeddable everywhere, that’s migratable — and it’s really quite impossible to get this level of simplicity and elegance so quickly unless you go this route. Then, for the mesh federation thing, we’re just doing HTTPS calls and then having decentralized caching of the databases and so on.”
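To make that concrete, here is a minimal sketch in Go of the kind of self-contained server being described: one static binary that embeds its own web assets, writes to a single SQLite file and fetches a TLS certificate from Let's Encrypt automatically. The domain, file names and layout are hypothetical assumptions for illustration, not Capsule's actual code.

```go
package main

import (
	"database/sql"
	"embed"
	"io/fs"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
	_ "modernc.org/sqlite" // pure-Go SQLite driver, so the server stays a single static binary
)

// The web assets are compiled into the binary itself (assumes an assets/ directory at build time).
//go:embed assets
var assets embed.FS

func main() {
	// The only file created at runtime: a single SQLite database next to the binary.
	db, err := sql.Open("sqlite", "capsule.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Serve the embedded static assets over HTTP.
	staticFS, err := fs.Sub(assets, "assets")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(staticFS)))

	// Obtain and renew a TLS certificate automatically from Let's Encrypt.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("capsule.example.com"), // hypothetical domain
	}

	// Serve HTTPS on :443 using the automatically managed certificate.
	log.Fatal(http.Serve(m.Listener(), nil))
}
```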

Among the Twitter back-and-forth about how (or whether) Kobeissi’s concept differs to various other decentralized protocols, someone posted a link to this XKCD cartoon — which lampoons the techie quest to resolve competing standards by proposing a tech that covers all use-cases (yet is of course doomed to increase complexity by +1). So given how many protocols already offer self-hosted/p2p social media services it seems fair to ask what’s different here — and, indeed, why build another open decentralized standard?

Kobeissi argues that existing options for decentralizing social media are either: A) not fully p2p (Mastodon is “self-hosted but not decentralized”, per a competitive analysis on Capsule’s pitch deck, ergo its servers are “vulnerable to Parler-style AWS takedowns”); or B) not focused enough on the specific use-case of social media (some other decentralized protocols, like Matrix, aim to support many more features/apps than social media and therefore, the argument goes, can’t be as lightweight); or C) simply aren’t easy enough to use to be more than a niche geeky option.

He talks about Capsule having the same level of focus on social media as Signal does on private messaging, for example — albeit intending it to support both short-form ‘tweet’ style public posts and long-form Medium-style postings. But he’s vocal about not wanting any ‘bloat’.

He also invokes Apple’s ‘design for usability’ philosophy. Albeit, it’s a lot easier to say you want to design something that ‘just works’ vs actually pulling off effortless mainstream accessibility. But that’s the bar Kobeissi is setting himself here.

“I always imagine Glenn Greenwald when I think of my user,” he says on the usability point, referring to the outspoken journalist and Intercept co-founder who recently left to launch his own newsletter-based offering on Substack. “He’s the person I see setting this up. Basically the way that this would work is he’d be able to set this up or get someone to set it up really easily — I think Capsule is going to offer automated deployments as also a way to make revenue, by the way, i.e. for a bit extra we deploy the server for you and then you’re self-hosting but we also make a margin off of that — but it’s going to be open source, you can set it up yourself as well and that’s perfectly okay. It’s not going to be hindered at all in that sense.

“In the case of Capsule, each content creator has their own website — has their own address, like Capsule.Greenwald.com — and then people go there and their first discovery of the mesh is through people that they’re interested in hearing from.”

Individual Capsules would be insulated from the risk of platform-level censorship since they’d be beyond the reach of takedowns by a single centralizing entity. They would still be hosted on the web, however — and therefore could be subject to a takedown by their own web host. That means illegal speech on Capsule could still be removed. But there wouldn’t be a universal host that could take a whole platform down in one sweep — as AWS just did to Parler.

“For every takedown it is entirely between that Capsule user and their hosting provider,” says Kobeissi. “Capsule users are going to have different hosting providers that they’re able to choose and then every time that there is a takedown it is going to be a decision that is made by a different entity. And with a different — perhaps — judgement, so there isn’t this centralized focus where only Amazon Web Services decides who gets to speak or only Twitter decides.”

And while the business of web hosting at platform giant level involves just a handful of cloud hosting giants able to offer the required scalability, he argues that that censorship-prone market concentration goes away once you’re dealing with scores of decentralized social media instances.

“We have the big hosting providers — like AWS, Azure, Google Cloud — but aside from that we have a lot of tiny hosting providers or small businesses… Sure if you’re running a big business you do get to focus on these big providers because they allow you to have these insane servers that are very powerful and deployable very easily but if you’re running a Capsule instance, as a matter of fact, the server resource requirements of running a Capsule instance are generally speaking quite small. In most instances tiny.”

Content would also be harder to scrub from Capsule because the mesh infrastructure would mean posts get mirrored across the network by the poster’s own followers (assuming they have any). So, for example, reposts wouldn’t just vanish the moment the original poster’s account was taken down by their hosting provider.

Separate takedown requests would likely be needed to scrub each reposted instance, adding a lot more friction to the business of content moderation vs the unilateral takedowns that platform giants can rain down now. The aim is to “spare the rest of the community from the danger of being silenced”, as Kobeissi puts it.
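What that mirroring might look like in practice is fairly mundane. Below is a hedged sketch, in Go, of a follower node periodically fetching and caching another capsule's public feed over HTTPS, in line with the "HTTPS calls plus decentralized caching" description above; the feed endpoint, JSON schema and domain are assumptions for illustration, not Capsule's real protocol.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
	"time"
)

// Post is a minimal, hypothetical post record; Capsule's real schema is not public.
type Post struct {
	ID      string `json:"id"`
	Author  string `json:"author"`
	Content string `json:"content"`
}

// mirror fetches another capsule's public feed over HTTPS and keeps a local copy,
// so the posts survive even if the origin server is later taken down.
func mirror(origin string) error {
	resp, err := http.Get(origin + "/feed.json") // hypothetical feed endpoint
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var posts []Post
	if err := json.NewDecoder(resp.Body).Decode(&posts); err != nil {
		return err
	}

	// Persist the mirrored copy locally; a real node would merge it into its own database.
	data, err := json.MarshalIndent(posts, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile("mirror.json", data, 0o644)
}

func main() {
	// Re-fetch periodically: every follower ends up holding an independently hosted copy,
	// so a takedown at the origin's host doesn't erase the content from the mesh.
	for {
		if err := mirror("https://capsule.greenwald.example"); err != nil {
			log.Println("mirror failed:", err)
		}
		time.Sleep(15 * time.Minute)
	}
}
```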

Trump’s deplatforming does seem to have triggered a major penny dropping moment for some that allowing a handful of corporate giants to own and operate centralized mass communication machines isn’t exactly healthy for democratic societies as this unilateral control of infrastructure gives them the power to limit speech. (As, indeed, their content-sorting algorithms determine reach and set the agenda of much public debate.)

Current social media infrastructure also provides a few mainstream chokepoints for governments to lean on — amplifying the risk of state censorship.

With concerns growing over the implications of platform power on data flows — and judging by how quickly Kobeissi’s tweet turned heads — we could be on the cusp of an investor-funded scramble to retool Internet infrastructure to redefine where power (and data) lies.

It’s certainly interesting to note that Twitter recently reupped its own decentralized social media open standard push, Bluesky, for example. It obviously wouldn’t want to be left behind any such shift.

“It seems to really have blown up,” Kobeissi adds, returning to his week-old Capsule concept. “I thought when I tweeted that I was maybe the only person who cared. I guess I live in France so I’m not really in tune with what’s going on in the US a lot — but a lot of people care.”

“I am not like a cypherpunk-style person these days, I’m not for full anonymity or full unaccountability online by any stretch,” he adds. “And if this is abused then sincerely it might even be the case that we would encourage — have a guidelines page — for hosting providers like on how to deal with instances of someone hosting an abusive Capsule instance. We do want that accountability to exist. We are not like a full on, crazy town ‘free speech’ wild west thing. We just think that that accountability has to be organic and decentralized — just as originally intended with the Internet.”

YouTube has been the slowest of the big social media platforms to react to the threat posed by letting president Trump continue to use its platform as a megaphone to whip up insurrection in the wake of the attack on the US Capitol last week. But it’s now applied a temporary upload ban.

In a short Twitter thread today, the Google-owned service said it had removed new content uploaded to Trump’s YouTube channel “in light of concerns about the ongoing potential violence”.

It also said it’s applied a first strike — triggering a temporary upload ban for at least seven days.

At the time of writing the verified Donald J Trump YouTube channel has some 2.78M subscribers.

“Given the ongoing concerns about violence, we will also be indefinitely disabling comments on President Trump’s channel, as we’ve done to other channels where there are safety concerns found in the comments section,” YouTube adds.

We reached out to YouTube with questions about the content that was removed and how it will determine whether to extend the ban on Trump’s ability to post to its platform beyond seven days.

A spokeswoman confirmed content that was uploaded to the channel on January 12 had been taken down for violating its policies on inciting violence, with the platform saying it perceived an increased risk of violence in light of recent events and due to earlier remarks by Trump.

She did not confirm the specific content of the video that triggered the takedown and strike.

According to YouTube, the platform is applying its standard ‘three strikes’ policy — whereby, if a channel receives three strikes within a 90-day period, it gets permanently suspended. Under this policy a first strike earns around a week’s suspension, a second strike earns around two weeks and a third strike triggers a termination of the channel.

At the time of writing, Trump’s official YouTube channel has a series of recent uploads — including five clips from a speech he gave at the Mexican border wall, where he lauded the “successful” completion of his 2016 election campaign pledge to ‘build the wall’.

In one of these videos, entitled “President Trump addresses the events of last week”, Trump characterizes supporters who attacked the US Capitol as a “mob” — and claims his administration “believes in the rule of law, not in violence or rioting” — before segueing into a series of rambling comments about the pandemic and vaccine development.

The clip ends with an entreaty by Trump for “our nation to heal”, for “peace and for calm”, and for respect for law enforcement — with the president claiming people who work in law enforcement form the backbone of the “MAGA agenda”.

An earlier clip of Trump speaking to reporters before he left for the tour of the border wall is also still viewable on the channel.

In it the president attacks the process to impeach him a second time as “a continuation of the greatest witch-hunt in the history of politics”. Here Trump name-checks Nancy Pelosi and Chuck Schumer — in what sounds like a veiled but targeted threat.

“[For them] to continue on this path, I think it’s causing tremendous danger to our country and it’s causing tremendous anger,” he says, before tossing a final caveat at reporters that “I want no violence”. (But, well, if you have to add such a disclaimer what does that say about the sentiments you know you’re whipping up?)

While YouTube has opted for a temporary freeze on Trump’s megaphone, Twitter banned the president for good last week after one too many violations of its civic integrity policy.

Facebook has also imposed what it describes as an “indefinite” suspension — leaving open the possibility that it could in future restore Trump’s ability to use its tools to raise hell.

Up to now, YouTube has managed to avoid being the primary target of ire for those criticizing social media platforms for providing Trump with a carve out from their rules of conduct and a mainstream platform to abuse, bully, lie and (most recently) whip up insurrection.

However the temporary freeze on his account comes after civil rights groups had threatened to organize an advertiser boycott of its platform.

Per Reuters, the Stop Hate for Profit (SHP) campaign — which previously led a major advertiser boycott of Facebook last summer — had demanded that YouTube take down Trump’s verified channel.

“If YouTube does not agree with us and join the other platforms in banning Trump, we’re going to go to the advertisers,” one of SHP’s organizers, Jim Steyer, told the news agency.

In its official comments about the enforcement action against president Trump, YouTube makes no mention of any concern about ramifications from its own advertisers. Though, in recent years, it has faced some earlier boycotts from advertisers over hateful and offensive content.

In background remarks to reporters, YouTube also claims it consistently enforces its policies, regardless of who owns the channel — and says it makes no exceptions for public figures.

However the platform has been known to reverse a three strike termination — recently reinstating the channel of UK broadcaster TalkRadio, for example, after it received a third strike related to coronavirus misinformation.

In that case the channel’s reinstatement was reported to have followed an intervention by TalkRadio’s owner News Corp’s chairman, Rupert Murdoch. UK ministers had also defended the channel’s right to debate the merits of government policy.

In Trump’s case there are a dwindling number of (GOP) politicians willing to ride to his defense in light of the shocking events in Washington last week and continued violent threats being made online by his supporters.

However, concern about the massive market power that puts tech platforms in a position to take unilateral action and shut down the US president’s ability to broadcast to millions of people is far more widespread.

Earlier this week Germany’s chancellor, Angela Merkel, called Twitter’s ban on Trump “problematic”, while lawmakers elsewhere in Europe have said it must lead to regulatory consequences for big tech.

So whatever his wider legacy, Trump certainly looks set to have a lasting policy impact on the tech giants he is now busy railing at for putting him on mute.

The Cyberspace Administration of China (CAC) announced it has banned 105 mobile apps for violating Chinese internet regulations. While almost all of the apps are made by Chinese developers, American travel booking and review site TripAdvisor is also on the list.

TripAdvisor shares dipped on Nasdaq after the CAC’s announcement, but began recovering in after-hours trading.

While TripAdvisor is based in the United States, like other foreign tech companies, it struck a partnership with a local tech company for its Chinese operations. In TripAdvisor’s case, it entered into an agreement with Trip.com — the Nasdaq-listed Chinese travel titan formerly known as Ctrip — in November 2019 to operate a joint venture called TripAdvisor China. The deal made Trip.com subsidiary Ctrip Investment a majority shareholder in the JV, with TripAdvisor owning 40%.

As part of the deal, TripAdvisor agreed to share content with Trip.com brands, including Chinese travel platforms Ctrip and Qunar, which gained access to the American firm’s abundant overseas travel reviews. That put TripAdvisor in a race with regional players, including Alibaba-backed Qyer and Hong Kong-based Klook, to capture China’s increasingly affluent and savvy outbound tourists.

The CAC is the government agency in charge of overseeing internet regulations and censorship. In a brief statement, the bureau said it began taking action on November 5 to “clean up” China’s internet by removing apps that broke regulations. The 105 apps constituted the first group to be banned, and were targeted after users reported illegal activity or content, the agency said.

Though the CAC did not specify exactly what each app was banned for, the list of illegal activities included spreading pornography, incitements to violence or terrorism, fraud or gambling and prostitution.

In addition, eight app stores were taken down for not complying with review regulations or allowing the download of illegal content.

Such “app cleansing” takes place periodically in China where the government has a stranglehold on information flows. Internet services in China, especially those involving user-generated content, normally rely on armies of censors or filtering software to ensure their content is in line with government guidelines.

The Chinese internet is evolving so rapidly that regulations sometimes fall behind the development of industry players, so the authorities are constantly closing gaps. Apps and services could be pulled because regulators realize they are lacking essential government permits, or they might have published illegal or politically sensitive information.

Foreign tech firms operating in China often find themselves walking a fine line between the “internet freedom” celebrated in the West and adherence to Beijing’s requirements. The likes of Bing.com, LinkedIn, and Apple — the few remaining Western tech giants in China — have all drawn criticism for caving to China’s censorship pressure in the past.

Austria’s Supreme Court has dismissed Facebook’s appeal in a long running speech takedown case — ruling it must remove references to defamatory comments made about a local politician worldwide for as long as the injunction lasts.

We’ve reached out to Facebook for comment on the ruling.

Green Party politician, Eva Glawischnig, successfully sued the social media giant seeking removal of defamatory comments made about her by a user of its platform after Facebook had refused to take down the abusive postings — which referred to her as a “lousy traitor”, a “corrupt tramp” and a member of a “fascist party”. 

After a preliminary injunction in 2016, Glawischnig won local removal of the defamatory postings the next year but continued her legal fight — pushing for similar postings to be removed and for takedowns to apply globally.

Questions were referred up to the EU’s Court of Justice. And in a key judgement last year the CJEU decided platforms can be instructed to hunt for and remove illegal speech worldwide without falling foul of European rules that preclude platforms from being saddled with a “general content monitoring obligation”. Today’s Austrian Supreme Court ruling flows naturally from that.

Austrian newspaper Der Standard reports that the court confirmed the injunction applies worldwide, both to identical postings or those that carry the same essential meaning as the original defamatory posting.

It said the Austrian court argues that EU Member States and civil courts can require platforms like Facebook to monitor content in “specific cases” — such as when a court has identified particular user content as unlawful and the platform has “specific information” about it — in order to prevent content that’s been judged illegal from being reproduced and shared by another user of the network at a later point in time, with the overarching aim of preventing future violations.

The case has important implications for the limitations of online speech.

Regional lawmakers are also working on updating digital liability regulations. Commission lawmakers have said they want to force platforms to take more responsibility for the content they fence and monetize — fuelled by concerns about the impact of online hate speech, terrorist content and divisive disinformation.

A long-standing EU rule, prohibiting Member States from putting a general content monitoring obligation on platforms, limits how they can be forced to censor speech. But the CJEU ruling has opened the door to bounded monitoring of speech — in instances where it’s been judged to be illegal — and that in turn may influence the policy substance of the Digital Services Act which the Commission is due to publish in draft early next month.

In a reaction to last year’s CJEU ruling, Facebook argued it “opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is ‘equivalent’ to content that has been found to be illegal”.

“In order to get this right national courts will have to set out very clear definitions on what ‘identical’ and ‘equivalent’ means in practice. We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression,” it added.

It’s more than four years since major tech platforms signed up to a voluntary pan-EU Code of Conduct on illegal hate speech removals. Yesterday the European Commission’s latest assessment of the non-legally binding agreement lauds “overall positive” results — with 90% of flagged content assessed within 24 hours and 71% of the content deemed to be illegal hate speech removed. The latter is up from just 28% in 2016.

However the report card finds platforms are still lacking in transparency. Nor are they providing users with adequate feedback on the issue of hate speech removals, in the Commission’s view.

Platforms responded and gave feedback to 67.1% of the notifications received, per the report card — up from 65.4% in the previous monitoring exercise. Only Facebook informs users systematically — with the Commission noting: “All the other platforms have to make improvements.”

In another criticism, its assessment of platforms’ performance in dealing with hate speech reports found inconsistencies in their evaluation processes — with “separate and comparable” assessments of flagged content that were carried out over different time periods showing “divergences” in how they were handled.

Signatories to the EU online hate speech code are: Dailymotion, Facebook, Google+, Instagram, Jeuxvideo.com, Microsoft, Snapchat, Twitter and YouTube.

This is now the fifth biannual evaluation of the code. It may not yet be the final assessment but EU lawmakers’ eyes are firmly tilted toward a wider legislative process — with commissioners now busy consulting on and drafting a package of measures to update the laws wrapping digital services.

A draft of this Digital Services Act is slated to land by the end of the year, with commissioners signalling they will update the rules around online liability and seek to define platform responsibilities vis-a-vis content.

Unsurprisingly, then, the hate speech code is now being talked about as feeding that wider legislative process — while the self-regulatory effort looks to be reaching the end of the road. 

The code’s signatories are also clearly no longer a comprehensive representation of the swathe of platforms in play these days. There’s no WhatsApp, for example, nor TikTok (which did just sign up to a separate EU Code of Practice targeted at disinformation). But that hardly matters if legal limits on illegal content online are being drafted — and likely to apply across the board. 

Commenting in a statement, Věra Jourová, Commission VP for values and transparency, said: “The Code of conduct remains a success story when it comes to countering illegal hate speech online. It offered urgent improvements while fully respecting fundamental rights. It created valuable partnerships between civil society organisations, national authorities and the IT platforms. Now the time is ripe to ensure that all platforms have the same obligations across the entire Single Market and clarify in legislation the platforms’ responsibilities to make users safer online. What is illegal offline remains illegal online.”

In another supporting statement, Didier Reynders, commissioner for Justice, added: “The forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online. The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”

Earlier this month, at a briefing discussing Commission efforts to tackle online disinformation, Jourová suggested lawmakers are ready to set down some hard legal limits online where illegal content is concerned, telling journalists: “In the Digital Services Act you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction.” Disinformation would not likely get the same treatment, she suggested.

The Commission has now further signalled it will consider ways to prompt all platforms that deal with illegal hate speech to set up “effective notice-and-action systems”.

In addition, it says it will continue — this year and next — to work on facilitating the dialogue between platforms and civil society organisations that are focused on tackling illegal hate speech, saying that it especially wants to foster “engagement with content moderation teams, and mutual understanding on local legal specificities of hate speech”.

In its own report last year assessing the code of conduct, the Commission concluded that it had contributed to achieving “quick progress”, particularly on the “swift review and removal of hate speech content”.

It also suggested the effort had “increased trust and cooperation between IT Companies, civil society organisations and Member States authorities in the form of a structured process of mutual learning and exchange of knowledge” — noting that platforms reported “a considerable extension of their network of ‘trusted flaggers’ in Europe since 2016.”

“Transparency and feedback are also important to ensure that users can appeal a decision taken regarding content they posted as well as being a safeguard to protect their right to free speech,” the Commission report also notes, specifying that Facebook reported having received 1.1 million appeals related to content actioned for hate speech between January 2019 and March 2019, and that 130,000 pieces of content were restored “after a reassessment”.

On volumes of hate speech, the Commission suggested the amount of notices on hate speech content is roughly in the range of 17-30% of total content, noting for example that Facebook reported having removed 3.3M pieces of content for violating hate speech policies in the last quarter of 2018 and 4M in the first quarter of 2019.

“The ecosystems of hate speech online and magnitude of the phenomenon in Europe remains an area where more research and data are needed,” the report added.

While a French online hate speech law has just been derailed by the country’s top constitutional authority on freedom of expression grounds, Germany is beefing up hate speech rules — passing a provision that will require platforms to send suspected criminal content directly to the Federal police at the point it’s reported by a user.

The move is part of a wider push by the German government to tackle a rise in right wing extremism and hate crime — which it links to the spread of hate speech online.

Germany’s existing Network Enforcement Act (aka the NetzDG law) came into force in the country in 2017, putting an obligation on social network platforms to remove hate speech within set deadlines as tight as 24 hours for easy cases — with fines of up to €50M should they fail to comply.

Yesterday the parliament passed a reform which extends NetzDG by placing a reporting obligation on platforms which requires them to report certain types of “criminal content” to the Federal Criminal Police Office.

A wider reform of the NetzDG law remains ongoing in parallel. It’s intended to bolster user rights and transparency, including by simplifying user notifications and making it easier for people to object to content removals and to have content restored after a successful appeal, among other tweaks. Broader transparency reporting requirements are also looming for platforms.

The NetzDG law has always been controversial, with critics warning from the get go that it would lead to restrictions on freedom of expression by incentivizing platforms to remove content rather than risk a fine. (Aka, the risk of ‘overblocking’.) In 2018 Human Rights Watch dubbed it a flawed law — critiquing it for being “vague, overbroad, and turn[ing] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal”.

The latest change to hate speech rules is no less controversial: Now the concern is that social media giants are being co-opted to help the state build massive databases on citizens without robust legal justification.

A number of amendments to the latest legal reform were rejected, including one tabled by the Greens which would have prevented the personal data of the authors of reported social media posts from being automatically sent to the police.

The political party is concerned about the risk of the new reporting obligation being abused — resulting in data on citizens who have not in fact posted any criminal content ending up with the police.

It also argues there are only weak notification requirements to inform authors of flagged posts that their data has been passed to the police, among sundry other criticisms.

The party had proposed that only the post’s content would be transmitted directly to police who would have been able to request associated personal data from the platform should there be a genuine need to investigate a particular piece of content.

The German government’s reform of hate speech law follows the 2019 murder of a pro-refugee politician, Walter Lübcke, by neo-Nazis — which it said was preceded by targeted threats and hate speech online.

Earlier this month police staged raids on 40 hate speech suspects across a number of states who are accused of posting “criminally relevant comments” about Lübcke, per national media.

The government also argues that hate speech online has a chilling effect on free speech and a deleterious impact on democracy by intimidating those it targets — meaning they’re unable to freely express themselves or participate without fear in society.

At the pan-EU level, the European Commission has been pressing platforms to improve their reporting around hate speech takedowns for a number of years, after tech firms signed up to a voluntary EU Code of Conduct on hate speech.

It is also now consulting on wider changes to platform rules and governance — under a forthcoming Digital Services Act which will consider how much liability tech giants should face for content they’re fencing.

In October, TikTok tapped corporate law firm K&L Gates to advise the company on its moderation policies and other topics afflicting social media platforms. As a part of those efforts, TikTok said it would form a new committee of experts to advise the business on topics like child safety, hate speech, misinformation, bullying, and other potential problems. Today, TikTok is announcing the technology and safety experts who will be the company’s first committee members.

The committee, known as the TikTok Content Advisory Council, will be chaired by Dawn Nunziato, a professor at George Washington University Law School and co-director of the Global Internet Freedom Project. Nunziato specializes in free speech issues and content regulation — areas where TikTok has fallen short.

“A company willing to open its doors to outside experts to help shape upcoming policy shows organizational maturity and humility,” said Nunziato, of her joining. “I am working with TikTok because they’ve shown that they take content moderation seriously, are open to feedback, and understand the importance of this area both for their community and for the future of healthy public discourse,” she added.

TikTok says it plans to grow the committee to around a dozen experts in time.

According to the company, other committee members include:

Rob Atkinson, Information Technology and Innovation Foundation, brings academic, private sector, and government experience as well as knowledge of technology policy that can advise our approach to innovation

Hany Farid, University of California, Berkeley Electrical Engineering & Computer Sciences and School of Information, is a renowned expert on digital image and video forensics, computer vision, deep fakes, and robust hashing

Mary Anne Franks, University of Miami Law School, focuses on the intersection of law and technology and will provide valuable insight into industry challenges including discrimination, safety, and online identity

Vicki Harrison, Stanford Psychiatry Center for Youth Mental Health and Wellbeing, is a social worker at the intersection of social media and mental health who understands child safety issues and holistic youth needs

Dawn Nunziato, chair, George Washington University Law School, is an internationally recognized expert in free speech and content regulation

David Ryan Polgar, All Tech Is Human, is a leading voice in tech ethics, digital citizenship, and navigating the complex challenge of aligning societal interests with technological priorities

Dan Schnur, USC Annenberg Center on Communication and UC Berkeley Institute of Governmental Studies, brings valuable experience and insight on political communications and voter information

Nunziato’s view of TikTok — of a company being open and willing to change — is a charitable one, it should be said.

The company is in dangerous territory here in the U.S., despite its popularity among Gen Z and millennial users. TikTok today is facing a national security review and a potential ban on all government workers’ phones. In addition, the Dept. of Defense has suggested the app should be blocked on phones belonging to U.S. military personnel. Its 2017 acquisition of U.S.-based Musical.ly may even come under review.

Though known for its lighthearted content — like short videos of dances, comedy, and various other creative endeavors — TikTok has also been accused of things like censoring the Hong Kong protests and more, which contributed to U.S. lawmakers’ fears that the Chinese-owned company may have to comply with “state intelligence work.” 

TikTok has also been accused of having censored content from unattractive, poor, or disabled persons as well as videos from users identified as LGBTQ+. The company explained in December these guidelines are no longer used as they were an early and misguided attempt to protect users from online bullying. TikTok had limited the reach of videos where such harassment could occur. But this suppression was done in the dark, unasked for by the “protected” parties — and it wasn’t until exposed by German site NetzPolitik that anyone knew these rules had existed.

In light of the increased scrutiny of its platform and its ties to China, TikTok has been taking a number of steps in an attempt to change its perception. The company released new Community Guidelines and published its first Transparency Report a few months ago. It also hired a global General Counsel and expanded its Trust & Safety hubs in the U.S., Ireland, and Singapore. And it just announced a Transparency Center open to outside experts who want to review its moderation practices.

TikTok’s new Advisory Council will meet with the company’s U.S. leadership to focus on the key topics of importance starting at the end of the month, with an early focus on creating policies around misinformation and election interference.

“All of our actions, including the creation of this Council, help advance our focus on creating an entertaining, genuine experience for our community by staying true to why users uniquely love the TikTok platform. As our company grows, we are focused on reflection and learning as a part of company culture and committed to transparently sharing our progress with our users and stakeholders,” said TikTok’s U.S. General Manager, Vanessa Pappas. “Our hope is that through thought-provoking conversations and candid feedback, we will find productive ways to support platform integrity, counter potential misuse, and protect the interests of all those who use our platform,” she added. 

TikTok, the popular social media app owned by Chinese tech company ByteDance, has been under a national security investigation by U.S. lawmakers who have raised concerns about the company’s access to U.S. user data and whether it was censoring content at the behest of the Chinese government. Today, TikTok tries to combat these concerns with the opening of a “Transparency Center” that will allow outside experts to examine and verify TikTok’s practices.

The new facility in TikTok’s L.A. office will allow outside experts to view how TikTok’s teams operate day-to-day, the company explains, as staff moderates content on the platform. This includes how moderators apply TikTok’s content Guidelines to review the content its technology automatically flagged for review, as well as other content the technology may have missed.

In addition, the experts will be shown how users and creators are able to bring concerns to TikTok and how those concerns are handled. TikTok will also explain how the content on the platform aligns with its Guidelines, the company says.

This center mainly aims to address the censorship concerns the U.S. has with TikTok, which, as a Chinese-owned company may have to comply with “state intelligence work,” according to local laws, experts have said. TikTok has long denied that’s the case, claiming that no governments — foreign or domestic — have directed its content moderation practices.

That being said, The Washington Post reported last year that searches on TikTok revealed far fewer videos of the Hong Kong protests than expected, prompting suspicions that censorship was taking place. The Guardian, meanwhile, came across a set of content guidelines for TikTok that appeared to advance Chinese foreign policy through the app. TikTok said these guidelines were older and no longer used.

Today, TikTok’s moderation practices are still being questioned, however. In November, it removed a video that criticized China’s treatment of Muslims, for example. The video was restored after press coverage, with TikTok citing a “human moderation error” for the removal.

While the larger concern to U.S. lawmakers is potential for China’s influence through social media, TikTok at times makes other moderation choices that don’t appear to be in line with U.S. values. For example, singer Lizzo recently shaded TikTok for removing videos of her wearing a bathing suit, even as TikTok stars posted videos of themselves dancing in their bathing suits. (The deleted video was later restored after press coverage). The BBC also reported that transgender users were having their posts or sounds removed by TikTok, and the company couldn’t properly explain why. And The Guardian reported on bans of pro-LGBT content. Again, TikTok said the guidelines being referenced in the article were no longer in use.

TikTok says the new transparency center will not only allow the experts to watch but also provide input about the company’s moderation practices.

“We expect the Transparency Center to operate as a forum where observers will be able to provide meaningful feedback on our practices. Our landscape and industry is rapidly evolving, and we are aware that our systems, policies and practices are not flawless, which is why we are committed to constant improvement,” said TikTok’s U.S. General Manager, Vanessa Pappas. “We look forward to hosting experts from around the world and continuing to find innovative ways to improve our content moderation and data security systems,” she added.

The Center will open in early May, initially with a focus on moderation. Later, TikTok says it will open up for insight into its source code and efforts around data privacy and security. The second phase will be led by TikTok’s newly appointed Chief Information Security Officer, Roland Cloutier, who starts next month.

The company notes it has taken many steps to ensure its business can continue to operate in the U.S. This includes the release of its new Community Guidelines and the publishing of its first Transparency Report a few months ago. TikTok has also hired a global General Counsel and expanded its Trust & Safety hubs in the U.S., Ireland, and Singapore, it said.


The Chinese government has been removing criticism of its coronavirus response from apps like Weibo, the local equivalent of Twitter. But before it can, that content is being saved, decentralized, and highlighted thanks to Arweave’s Permaweb. Today it’s announcing another $8.3 million in funding from Andreessen Horowitz, Union Square Ventures, and Coinbase Ventures.

Arweave has developed a new type of blockchain based on Moore’s Law of the declining cost of data storage. Users pay up front for hundreds of years of storage at less than a cent per megabyte, and the interest that accrues will cover the dwindling storage cost forever.
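The arithmetic behind that claim is a convergent geometric series: if storing a megabyte for a year gets cheaper by a fixed fraction every year, the cost of storing it forever sums to a finite number, which a one-off payment (plus the interest it earns) can cover. A back-of-the-envelope illustration in Go, using made-up numbers rather than Arweave's actual parameters:

```go
package main

import "fmt"

func main() {
	costPerMBYear := 0.001 // assumed: $0.001 to store 1 MB for the first year
	annualDecline := 0.30  // assumed: storage gets 30% cheaper every year

	// Sum the cost of each successive year; the series converges because
	// every year is cheaper than the one before it.
	total := 0.0
	yearCost := costPerMBYear
	for year := 0; year < 200; year++ {
		total += yearCost
		yearCost *= 1 - annualDecline
	}

	// Closed form of the same infinite geometric series: firstYearCost / declineRate.
	perpetual := costPerMBYear / annualDecline

	fmt.Printf("cost of the first 200 years: $%.6f per MB\n", total)
	fmt.Printf("cost of storing it forever:  $%.6f per MB\n", perpetual)
}
```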

That’s allowed for the creation of Perma apps like WeiBlocked, which crawls Weibo for content likely to be censored. It indexes these posts and decentralizes them in the storage of hundreds of Arweave nodes operated around the world. WeiBlocked later checks back to see if the content has been censored, and then highlights it on its Permaweb site, which you can access from a standard web browser. “By censoring it, it puts it out of the control of the censor”, says Arweave founder Sam Williams.
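In outline, the workflow is: archive while the post is still up, then check back and surface anything that disappears. A rough sketch of that loop follows, with every function reduced to an illustrative stub; none of this is WeiBlocked's real code, and the endpoints it would talk to are assumptions.

```go
package main

import (
	"log"
	"time"
)

// ArchivedPost pairs a Weibo post with the Arweave transaction that preserves it.
// All types and functions here are illustrative stubs, not WeiBlocked's real code.
type ArchivedPost struct {
	WeiboURL  string
	ArweaveTx string
}

func crawlWeibo() []string             { return nil }     // find posts likely to be censored
func storeOnArweave(url string) string { return "tx-id" } // write the post to permanent storage
func stillVisible(url string) bool     { return true }    // re-request the original Weibo URL
func highlight(p ArchivedPost)         {}                 // surface the censored copy on the Permaweb site

func main() {
	var archive []ArchivedPost

	for {
		// 1. Archive first, while the post is still reachable.
		for _, url := range crawlWeibo() {
			archive = append(archive, ArchivedPost{WeiboURL: url, ArweaveTx: storeOnArweave(url)})
		}

		// 2. Later, check whether the original has been censored; if so, highlight the
		// permanent copy, so censorship makes the content more visible rather than less.
		for _, p := range archive {
			if !stillVisible(p.WeiboURL) {
				highlight(p)
			}
		}

		log.Println("cycle complete; archived posts:", len(archive))
		time.Sleep(time.Hour)
	}
}
```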

It’s like the Streisand Effect in product form. The act of censorship actually causes the sensitive content to become increasingly visible. The more the Chinese government tries to hide information about Dr Li Wenliang, an early coronavirus whistleblower who was pressured into silence by Chinese police and later died of the sickness, the more attention it receives. Williams tells me he’s excited that WeiBlocked is “Putting the censorship protection of the network into practice.”

The potential to become the unmutable layer of the internet attracted the new $8.3 million in funding just four months after Arweave raised its last $5 million from Andreessen Horowitz, USV, and Multicoin Capital. Along with video chat apps, Arweave is one of the startups benefiting from the unfortunate ripple effects of the tragic coronavirus.

Rather than providing traditional equity in exchange for cash, Arweave sold investors a portion of its cache of the blockchain’s tokens. These are what users spend to store data on the Arweave Permaweb. There’s only a finite number in the market, so as demand for everlasting storage increases, so does the value of the tokens. Investors could later sell their stake to generate returns.

Arweave founder Sam Williams

But what’s especially interesting is how Arweave is employing these token economics to build out its developer ecosystem. “We can invest fiat dollars into developers, increasing usage of the network, thereby increasing the value of the tokens” Williams explains. “That makes it sustainable so we can do it in the future, endlessly investing in the ecosystem.” As long as investments in developers cause Arweave’s token stash to accrue more value than the size of the investment, it will always have more to deploy. “We can make it recurring, indefinitely.”

With over 500 nodes in operation, Arweave supports decentralized blogging platforms, indestructible documents, and apps that can keep running even if their owners go out of business. Unlike Bitcoin where miners are rewarded for storing or verifying just the latest block, Arweave’s blockchain incentivizes storage of old blocks on unused server space.
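One common way to create that incentive is to make producing a new block depend on proving access to a randomly chosen old one, so a miner that discards history loses out on rewards. A conceptual sketch of that idea in Go (not Arweave's actual consensus code):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// recallIndex deterministically picks an old block that a miner must prove it still
// stores before it can produce the next one. Conceptual illustration only.
func recallIndex(prevBlockHash []byte, chainHeight uint64) uint64 {
	h := sha256.Sum256(prevBlockHash)
	return binary.BigEndian.Uint64(h[:8]) % chainHeight
}

func main() {
	prevHash := sha256.Sum256([]byte("block 499"))
	idx := recallIndex(prevHash[:], 500)
	fmt.Printf("to mine block 500, prove you still hold block %d\n", idx)

	// A miner that has thrown away old blocks can't answer for every possible recall
	// index, so keeping historical data on spare disk space is what gets rewarded.
}
```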

Williams believes Arweave has hit a tipping point, with a functioning economy that means the network will keep running even without his company’s involvement. “I find it increasingly easy to sleep at night. We’re just focused on pushing adoption and the question is ‘how fast’ not ‘if’. It’s a truly decentralized network now.”

There’s always the risk of some yet-undiscovered code problems, or another permanent approach to the web undercutting Arweave. But with countries like Russia pushing new attempts to wall themselves off from the outside internet, there’s increasing need for Arweave’s network. “Activity is exploding, expectably around where censorship resistance can be valuable.”

Next, the Permaweb community wants to safeguard itself from even a disruption of Internet connectivity itself. There’s an initiative to make Arweave work over high-frequency radio. Through a morse code-like system, sensitive content could be smuggled out of a country via radio, indexed, and kept accessible forever.

TikTok today released a new set of safety videos designed to playfully inform users about the app’s privacy controls and other features — like how to filter comments or report inappropriate behavior, among other things. One video also addresses TikTok’s goal of creating a “positive” social media environment, where creativity is celebrated and harassment is banned.

This particular value — that TikTok is for “fun” — is cited whenever the Beijing-based company is pressured about the app’s censorship activity. Today, TikTok hides under claims that it’s all about being a place for lighthearted, positive behavior. But in reality, it’s censoring topics China doesn’t want its citizens to know about — like the Hong Kong protests, for example. Meanwhile, it doesn’t appear to take action on political issues in the U.S., where hashtags like #dumptrump or #maga have millions of views.

To figure out its approach to moderation, TikTok recently hired corporate law firm, K&L Gates, to advise it on how to create policies that won’t have it coming under the eye of U.S. regulators.

In the meantime, TikTok is tackling the job of crafting the sort of community it wants through these instructive videos. But it’s not just issuing its commands from the top-down — TikTok partners with its own creators to participate in the videos and then promote them to fans. The first set of videos, released in February, featured a dozen TikTok creators, for example.

This time around, the company has pulled in a dozen more, including: @nathanpiland, @d_damodel, @juniortvine, @Stevenmckell, @supershaund, @ourfire, @thedawndishsoap, @katjaglieson, @mahoganylox, @chanydakota, @shreksdumpster, and @christinebarger.

This is a much different approach to community-setting, compared with Twitter, Facebook or Instagram. Those platforms took years before they addressed users’ basic needs for privacy, security and anti-harassment features, like filtering comments, blocking and muting, and more. In the meantime, social media became a haven for trolls and abuse.

TikTok is approaching the problem from a different standpoint — by consciously creating a community where users are knowledgable and feel empowered to kick out the bad elements from disrupting their fun.

The only problem is that TikTok’s definition of what’s “fun” and appropriate has a political bent.

Creativity and art aren’t only meant for expressing positive sentiments. And given that TikTok is already enforcing China’s censorship of topics like Tiananmen Square, Hong Kong, and Taiwan for its 500M+ global monthly users, it wouldn’t be a leap to find the company one day censoring all sorts of political speech and other social issues — effectively becoming a tool for China to spread its government’s views to the wider world. And that’s far less fun.


Welcome back to This Week in Apps, the Extra Crunch series that recaps the latest OS news, the applications they support, and the money that flows through it all.

The app industry in 2018 saw 194 billion downloads and more than $100 billion in purchases. Just in the past quarter, consumer spending exceeded $23 billion and installs topped 31 billion. It’s a fact: we spend more time on our phones than we do watching TV.

This week, the only thing on everyone’s minds was App Store censorship and Apple’s capitulation to the Chinese government. We also looked at the launch of a high-profile Catalyst app, and delved into a new analysis of Q3 trends.

Apple caves to China’s demands on App Store censorship

App Store censorship is a hot topic again this week, as Apple made the disappointing decision to cave to demands from Chinese officials to pull the HKmap app, which was being used by pro-democracy protestors in Hong Kong to crowdsource information about police presence and street closures. Apple originally banned the app, then changed its mind and allowed it back in the App Store, which prompted criticism by the Chinese government — which led Apple to pull the app down again.