
Happy New Year! It’s been another wild and wacky ride of a year in the tech world: breakthroughs and disgraces, triumphs and catastrophes, cryptocurrencies and starships, the ongoing rise of utopian clean energy and dystopian cyberpunk societies, and most of all, the ongoing weirding of the whole wide world.

In other words it was another perfect year for The Jons, the annual award which celebrates dubious tech-related achievements, named, in an awe-inspiring fit of humility, after myself. We’ve got quite a lineup for you this year, folks. So let’s get to it! With very little further ado, I give you: the fifth annual Jon Awards for Dubious Technical Achievement!

(The Jons 2015) (The Jons 2016) (The Jons 2017) (The Jons 2018)

THE CATLIKE FINANCIAL REFLEXES AWARD FOR LANDING ON YOUR FEET AFTER UNMITIGATED DISASTER

To Adam Neumann, who presided over the spectacular rise and even more spectacular fall from grace of WeWork, which proudly filed for its proposed IPO this year and promptly saw most of its valuation (and its cash) disintegrate in a sea of eyebrow-raising stories about delusional irresponsibility and the harsh realities of actual business. However, give Neumann credit: stories may have made him sound like a pot-smoking surfer dude who lived in a hallucinatory fantasyland, but — unlike his employees, whose dreams of IPO wealth were suddenly and completely shattered — he managed to walk away from the business he drove nearly into the ground with a reported $1.7 billion windfall.

THE EVERYBODY’S BEST FRIEND AWARD FOR INSPIRING NOSEBLEED VALUATIONS AND ASPIRATIONAL POSTERS EVERYWHERE

To Masayoshi Son, whose widely announced dreams of a $108 billion Vision Fund II turned into the relative nightmare of something “far smaller” — but still has his surreal, dreamlike slide decks to fall back on. After all, “SoftBank works to comfort people in their sorrow.”

THE WE MAY AS WELL JUST GIVE HIM A LIFETIME ACHIEVEMENT AWARD FOR ELON DOING HIS ELON THING

To — obviously — Elon Musk, who actually had a really good year: Tesla stock got ‘so high‘ it brushed the price at which he previously announced he would take it private (he didn’t); SpaceX launched Starlink, a “very big deal“; and he was acquitted of defamation for calling a complete stranger a pedophile on Twitter. OK, so he also announced Starship should reach orbit by this coming March, and smashed the Cybertruck’s allegedly unbreakable windows onstage at its unveiling, but still, a good year! See you in 2020, Elon.

THE IF AT FIRST YOU DON’T CONVINCE, TELL AN EVEN MORE RIDICULOUS TALE AWARD FOR RISIBLE SATOSHI NAKAMOTO CLAIMS

To Craig Wright, who has long claimed in the face of mocking industrywide disbelief to be Satoshi Nakamoto, the creator of Bitcoin, and especially for his claims that, now work with me here, the keys to 1 million of Satoshi’s bitcoin were put in a “Tulip Trust” by a long-deceased collaborator and will be delivered to him by a “bonded courier” on January 1st 2020, i.e. a few days from now. The judge he told this to was, unsurprisingly, spectacularly unconvinced, saying “Dr. Wright’s demeanor did not impress me as someone who was telling the truth” and reproaching him for his “willful and bad faith pattern of obstructive behavior.” You don’t say.

THE DEAD MEN TELL NO TALES, BUT ONLY IF THEY’RE ACTUALLY DEAD AWARD FOR LEAVING A TRAIL OF CRYPTOCURRENCY CHAOS IN ONE’S WAKE

To my fellow Canadian Gerald William Cotten, the founder of QuadrigaCX, who apparently stole and/or lost essentially all of his customers’ money, spending much of it on “luxury goods and real estate,” before his death in Jaipur, India last year. “But Jon,” you say, “how does this qualify for a 2019 Jon Award?” Because the many thousands who lost money are now demanding an exhumation to determine that the body in Cotten’s grave is, in fact, Cotten. As for the surviving co-founder, he’s “a reported ex-con who served 18 months in a federal U.S. prison for identity theft, bank fraud and credit card fraud.” Is this the end of this crazy story? …Well, probably yes. But in the world of cryptocurrencies, which reliably gives us the most jawdropping Jons, who can say for sure?

THE I’VE SEEN THE FUTURE BABY AND IT’S PRETTY CRAZY AWARD FOR EPITOMIZING OUR CYBERPUNK PRESENT

To Lil Nas X, a previously unknown queer black American teenager who made a country-trap song with a beat he purchased for $30 from a Dutch producer, which sampled an obscure Nine Inch Nails deep cut, recorded it in less than an hour for $20, crafted a hundred memes to publicize it on a new Chinese-owned video-snippet social network, and then saw it go viral courtesy of a Yeehaw Challenge meme, hit first country and then crossover success, and become the longest-reigning Billboard No. 1 single of all time. Does it even get more postmodern cyberpunk than that? Lil Nas X, this is your world (well, and Billie Eilish’s) — we just live in it.

THE POWER TO DRIVE BABY BOOMERS COMPLETELY MAD AWARD FOR BEING SENSIBLY UPSET ABOUT THINGS

To Greta Thunberg, another teenager, who is an angry advocate of doing something about climate change and for some reason frequently drives a whole lot of apparently lucid people, as well as the President of the United States, completely insane, prompting them to level ludicrous and deeply personal attacks at a sixteen-year-old autistic girl. It is truly mystifying, and yet revelatory. Maybe they’re just upset that she’s so good at Twitter?

THE SOMEONE MUST BE TO BLAME, THIS IS SOMEONE, THEY MUST BE TO BLAME AWARD FOR LASHING OUT IN THE WRONG DIRECTIONS

To the mass media, for the techlash: the backlash against tech in which they blame the tech industry not only for its actual sins and problems, which are admittedly not hard to find, but also for essentially everything that is wrong with the world’s political and financial systems. Politics is somehow the fault of Facebook, rather than venal politicians and their ability to manipulate, er, the mass media like a Stradivarius. Inequality is somehow the fault of the tech industry, rather than City / Wall Street parasitism, regulatory capture, and, again, the politicians who actually write the laws which enact inequality. Again, the tech industry has real problems — but the fact that it has devoured the advertising and classifieds income that long propped up the media seems to have caused otherwise sober and thoughtful journalists to reflexively blame it for every ill, while letting the actual architects of those ills off lightly. Sadly I fear this one is going to be a perennial.

THE WHO NEEDS HUMAN FACES OR WORDS AWARD FOR SIMULATING THE DEEP INSIGHTS OF INTERNET DISCOURSE

To StyleGAN 2 and GPT-2, neural networks from Nvidia and OpenAI which generate, respectively, fully convincing fake human faces and close-enough-for-the-Internet fake human comment sections. I feel certain that somewhere out there on the Internet, bots with StyleGAN avatars and GPT-2-sourced texts are already waging war against one another in befuddling comment sections: battles which have no end, no point, and no room for any actual humanity. The more things change, eh?

THE POP GOES THE IPO AWARD FOR MAKING LOCKUP PERIODS MEANINGFUL AGAIN

To Slack, Lyft, and Uber, all of whom went public this year and, despite being extremely high-profile tech companies, promptly saw their stock prices crater and stay there, while their most recent employees presumably saw their lockup period come and go while remaining resolutely underwater. All this while big, boring tech companies like Google and Microsoft saw their stock climb to new highs nearly every week. Maybe joining a rocket ship isn’t always such a great idea after all…

THE WHAT’S A FEW BILLION DOLLARS BETWEEN FRIENDS AWARD FOR JAM YESTERDAY, JAM TOMORROW, BUT NEVER JAM TODAY

To Rony Abovitz of Magic Leap, whose technology demos over the last decade have been, by all accounts, truly breathtaking and mindboggling, but whose actual shipped technology, despite ten years and nearly $3 billion in funding, has been, by all accounts, deeply disappointing. Now Magic Leap is hemorrhaging high-profile board members, signing over patents as collateral to JPMorgan Chase while desperately trying to raise funding, and its next headset is reportedly still years away from launch. But look, those demos were amazing.

THE A SINGLE SACRIFICIAL LAMB FRANKLY ISN’T ENOUGH AWARD FOR A DEEP AND SYSTEMIC CATASTROPHE

To Boeing and its 737 MAX debacle, in which, among numerous other stunning derelictions of fundamental engineering duties, crucial safety features were sold as profitable optional extras — and yet it took not one but two crashes, killing hundreds, for them to admit any problems. Their CEO has resigned, but the company’s failures are clearly deep and systemic rather than individual; their once famously engineer-driven corporate culture is clearly no more. As an example of the decline of American capitalism in general, it’s almost a little too on-the-nose, but then, that’s 2019 for you.

Congratulations, of a sort, to all the winners of the Jons! All recipients shall receive a bobblehead of myself made up as a Blue Man, as per the image on this post, which will doubtless become coveted and increasingly valuable collectibles.[1] (And needless to say, sometime next year they will become redeemable for JonCoin.) And, of course, all winners shall be remembered by posterity forevermore.


[1] Bobbleheads shall only be distributed if and when available and convenient. The eventual existence of said bobbleheads is not guaranteed or indeed even particularly likely. Not valid on days named after Norse or Roman gods. All rights reserved, especially those rights about which we have reservations.

In tech, this was the smartphone decade. In 2009, Symbian was still the dominant ‘smartphone’ OS, but 2010 saw the launch of the iPhone 4, the Samsung Galaxy S, and the Nexus One, and today Android and iOS boast four billion combined active devices. Smartphones and their apps are a mature market, now, not a disruptive new platform. So what’s next?

The question presupposes that something has to be next, that this is a law of nature. It’s easy to see why it might seem that way. Over the last thirty-plus years we’ve lived through three massive, overlapping, world-changing technology platform shifts: computers, the Internet, and smartphones. It seems inevitable that a fourth must be on the horizon.

There has certainly been no shortage of nominees over the last few years. AR/VR; blockchains; chatbots; the Internet of Things; drones; self-driving cars. (Yes, self-driving cars would be a platform, in that whole new sub-industries would erupt around them.) And yet one can’t help but notice that every single one of those has fallen far short of optimistic predictions. What is going on?

You may recall that the growth of PCs, the Internet, and smartphones did not ever look wobbly or faltering. Here’s a list of Internet users over time: from 16 million in 1995 to 147 million in 1998. Here’s a list of smartphone sales since 2009: Android went from sub-1-million units to over 80 million in just three years. That’s what a major platform shift looks like.

Let’s compare each of the above to those trajectories, shall we? I don’t think it’s an unfair comparison. Each has had champions arguing it will, in fact, be That Big, and even people with more measured expectations have predicted growth will at least follow the trajectory of smartphones or the Internet, albeit maybe to a lesser peak. But in fact…

AR/VR: Way back in 2015 I spoke to a very well known VC who confidently predicted a floor of 10 million devices per year well before the end of this decade. What did we get? 3.7M to 4.7M to 6M, 2017 through 2019, while Oculus keeps getting reorg’ed. A 27% annual growth rate is OK, sure, but a consistent 27% growth rate is more than a little worrying for an alleged next big thing; it’s a long, long way from “10xing in three years.” Many people also predicted that by the end of this decade Magic Leap would look like something other than an utter shambles. Welp. As for other AR/VR startups, their state is best described as “sorry.”
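For the curious, here is the back-of-the-envelope arithmetic behind that 27% figure, using the unit shipment numbers quoted above; a rough sketch, not official data:

```python
# Back-of-the-envelope: the compound annual growth rate implied by the
# headset figures quoted above (2017 -> 2019), versus the pace a
# "10x in three years" platform shift would require.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

vr = cagr(3.7, 6.0, 2)     # ~0.27, i.e. roughly 27% per year
tenx = cagr(1.0, 10.0, 3)  # ~1.15, i.e. roughly 115% per year

print(f"AR/VR headsets: {vr:.0%} per year")
print(f"'10x in three years' pace: {tenx:.0%} per year")
```

Respectable growth for a mature product category; nowhere near platform-shift territory.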

Blockchains: I mean, Bitcoin’s doing just fine, sure, and is easily the weirdest and most interesting thing to have happened to tech in the 2010s; but the entire rest of the space? I’m broadly a believer in cryptocurrencies, but if you were to have suggested in mid-2017 to a true believer that, by the end of 2019, enterprise blockchains would essentially be dead, decentralized app usage would still be measured in the low thousands, and no real new use cases would have arisen other than collateralized lending for a tiny coterie — I mean, they would have been outraged. And yet, here we are.

Chatbots: No, seriously, chatbots were celebrated as the platform of the future not so long ago. (Alexa, about which more in a bit, is not a chatbot.) “The world is about to be re-written, and bots are going to be a big part of the future” was an actual quote. Facebook M was the future. It no longer exists. Microsoft’s Tay was the future. It really no longer exists. It was replaced by Zo. Did you know that? I didn’t. Zo also no longer exists.

The Internet of Things: let’s look at a few recent headlines, shall we? “Why IoT Has Consistently Fallen Short of Predictions.” “Is IoT Dead?” “IoT: Yesterday’s Predictions vs. Today’s Reality.” Spoiler: that last one is not about how reality has blown previous predictions out of the water. Rather, “The reality turned out to be far less rosy.”

Drones: now, a lot of really cool things are happening in the drone space, I’ll be the first to aver. But we’re a long way away from physical packet-switched networks. Amazon teased Prime Air delivery way back in 2015 and made its first drone delivery way back in 2016, which is also when it patented its blimp mother ship. People expected great things. People still expect great things. But I think it’s fair to say they expected … a bit more … by now.

Self-driving cars: We were promised so much more, and I’m not even talking about Elon Musk’s hyperbole. From 2016: “10 million self-driving cars will be on the road by 2020.” “True self-driving cars will arrive in 5 years, says Ford“. We do technically have a few, running in a closed pilot project in Phoenix, courtesy of Waymo, but that’s not what Ford was talking about: “Self-driving Fords that have no steering wheels, brake or gas pedals will be in mass production within five years.” So, 18 months from now, then. 12 months left for that “10 million” prediction. You’ll forgive a certain skepticism on my part.

The above doesn’t mean we haven’t seen any successes, of course. A lot of new kinds of products have been interesting hits: AirPods, the Apple Watch, the Amazon Echo family. All three are more new interfaces than whole new major platforms, though; not so much a gold rush as a single vein of silver.

You may notice I left machine learning / AI off the list. This is partly because, while it definitely has seen real qualitative leaps, a) there seems to be a general concern that we may have entered the flattening of an S-curve there, rather than continued hypergrowth, and b) either way, it’s not a platform. Moreover, the wall that both drones and self-driving cars have hit is labelled General Purpose Autonomy … in other words, it is an AI wall. AI does many amazing things, but when people predicted 10M self-driving cars on the roads next year, they were predicting that AI would be good enough to drive them. In fact it’s getting there a lot slower than we expected.

Any one of these technologies could define the next decade. But another possibility, which we have to at least consider, is that none of them might. It is not an irrefutable law of nature that just as one major tech platform begins to mature another must inevitably start its rise. We may well see a lengthy gap before the next Next Big Thing. Then we may see two or three rise simultaneously. But if your avowed plan is that this time you’re totally going to get in on the ground floor — well, I’m here to warn you, you may have a long wait in store.

Neo-Pentecostal gangs in Brazil, driving out other faiths at gunpoint. A mob of 100 lawyers attacking a hospital in Pakistan to revenge themselves on violent doctors there. Anti-vaxxers, neo-Nazis, and red-pillers. Sometimes it seems like the world has fragmented into a jagged kaleidoscope of countless mobs and subcultures, each more disconcerting than the last.

Much of this is selection bias: if it bleeds, it leads, in both mass media and social-media algorithms. But it does seem plausible that the Internet is contributing to this kaleidoscope, to this growth in worrisome fringe subcultures, in three separate ways: complexity, information, and connectivity.

Connectivity is the most obvious avenue. The Internet empowers everyone to find their like-minded people, and this is as true of the hateful and vengeful as it is of the dispossessed and downtrodden. Furthermore, the power of a group increases nonlinearly with its size. The hateful views of one man in a community of 100 people are unlikely to make a huge collective difference; worst case, they become a fabled missing stair. But 1% of a nation of 100 million? That’s a million people. That’s a movement ready to join, to march, to pay tithes, to reinforce one another.

Information is more subtle. There’s a fascinating quote from the recent New Yorker profile of William Gibson: “Gibson noticed that people with access to unlimited information could develop illusions of omniscience.” You can get any kind of information you want on the Internet. You can find what appear at first glance to be closely argued and well-supported claims that global warming will kill off all but half a billion people by the end of this century, and also, if you prefer, that global warming is an authoritarian hoax.

No wonder people increasingly act as if the truth is something you choose from a buffet rather than a fact that will eventually bite you, hard, if you refuse to believe in it. As Philip K. Dick put it, “Reality is that which, when you stop believing in it, doesn’t go away.” But if all your information comes from the Internet, and is never testable, you never have to stop believing in it … until it’s far too late.

Complexity is, I think, the saddest. It has nothing to do with being led astray by evil companions or disinformation. It’s just that our modern world has become so complicated, such an endless buzz of noise and events and obligations, that lashing back against it, fixating on a simple solution to all the world’s problems, makes people feel strong. This delightful article about schisms among believers in a flat Earth includes the telling quote: “When you find out the Earth is flat … then you become empowered.”

Is the world going to get weirder yet, with new and more bizarre and inexplicable subcultures erupting from the Internet with every passing year? Have we hit the plateau of an S-curve? Or are we in a local minimum, and as we get better at dealing with connectivity and complexity, will we look back on these as the crazy years? My money’s on door number two … but I’ve been outweirded before.

Every so often a story comes along which is unremarkable on its face but erupts into wider attention because it seems to represent some larger social fracture zone. …And then there’s the recent story of mismanagement and malfeasance at Away, which has caught the tech world’s attention because it seems a shibboleth for all the industry’s fault lines.

This story is whatever you want it to be. It’s a tale of exploitation of the poor and struggling by executives born rich and privileged; of the unfair, disproportionately harsh and negative scrutiny that women CEOs get; of the inherent cultural toxicity of constant surveillance (Away banned emails and DMs, insisting that all communication take place in public Slack channels); of the need for tech workers to unionize; of the need for young workers to toughen up and live in the real world, which sometimes has asshole bosses.

Fine, I’ll take a paragraph break, but I’m not done: a tale of how not to apologize (clue: don’t try to exercise draconian control over your employees’ personal social media accounts on the same day you’re publicly apologizing for your previous draconian mistreatment of them); of the sacrifices required to build a startup; of how the real problem boils down to mismanagement and misaligned incentives, and the rest is noise; of how what previous generations considered shitty but acceptable boss behavior is now judged as completely unacceptable toxic abuse.

It is, in short, the perfect Rorschach test for today. Like most Rorschach tests, the panoply of reactions to it is much more interesting than the story itself. This is especially true because of the widespread suspicion that there was a disparity between public responses and private thoughts — that people who didn’t agree that Away’s executives should be lambasted were reluctant to say so. That’s right, it’s also a story about social media, public shaming, cancel culture, and the intolerant left! Seriously, this little morality play has everything.

So, to their eternal credit, the semi-satirical VC Starter Kit account performed a Twitter experiment: “If you’re a VC, founder or journalist, DM me your thoughts on the Away piece and I’ll anonymously post your response here,” and then posted a summary of the responses to (of course!) their Substack.

Interestingly, the results do indeed seem to suggest a far more massive cultural divide than the public responses do. I encourage you to go read them. To my mind, and I concede this is probably pretty idiosyncratic, they ultimately condense down to one of two views: 1. startups are hard, and there are always going to be points where you have to choose between startup success and treating people well, and success comes first; 2. startups are hard, but if you get to the point where you have to choose between startup success and treating people well, you have already royally fucked up, and if you then choose the former, you should be both privately and publicly ashamed of it.

To an extent I think this is generational. It seems that behavior that Gen Xers like myself might stereotypically respond to with “what an asshole, but that’s the way bosses are sometimes, so it goes,” is to Gen Zers “this is completely unacceptable toxic abuse that no one should ever experience.” This is probably almost entirely a good thing. Spreading the notion that it is important to treat other people better than we once did leads a lot more directly to the fabled “better world” than most any of the companies which claim to be doing so.

Granted, on the other hand, if we get to a point where we let the 1% of the most sensitive members of our society, prone to the most negative interpretations of any and all complexity and nuance, dictate what is acceptable, that would be a kind of bizarre form of unacceptable tyranny in and of itself. To be clear I don’t think we’re collectively anywhere remotely near any risk of that; rather, we’re finally beginning to appreciate that “you should be tougher than that” is about as useful to most victims of bullying, misogyny, bigotry, etc. as it is to victims of a stabbing. But it’s important to recognize that the perception of such an endgame, however skewed it is, makes a lot of people uneasy.

Either way, though, I find myself subscribing to theory number two: startups are hard, but if you get to the point where you have to choose between startup success and treating people well, you have already royally fucked up. Just because Steve Jobs was an asshole doesn’t mean that being an asshole is a necessary requirement of CEOdom, much less a sufficient one. If you’ve screwed up to the point that you face that choice, and then go all in on the startup, well, you won’t be the first, or even the millionth … but you may want to take a long hard look at what that word success really means.

Instagram dodges child safety laws. By not asking users their age upon signup, it can feign ignorance about how old they are. That way, it can’t be held liable for $40,000 per violation of the Child Online Privacy Protection Act. The law bans online services from collecting personally identifiable information about kids under 13 without parental consent. Yet Instagram is surely stockpiling that sensitive info about underage users, shrouded by the excuse that it doesn’t know who’s who.

But here, ignorance isn’t bliss. It’s dangerous. User growth at all costs is no longer acceptable.

It’s time for Instagram to step up and assume responsibility for protecting children, even if that means excluding them. Instagram needs to ask users’ age at sign up, work by all practical means to verify that the birthdates users volunteer are accurate, and enforce COPPA by removing users it knows are under 13. If it wants to allow tweens on its app, it needs to build a safe, dedicated experience where the app doesn’t suck in COPPA-restricted personal info.

Minimum Viable Responsibility

Instagram is woefully behind its peers. Both Snapchat and TikTok require you to enter your age as soon as you start the sign up process. This should really be the minimum regulatory standard, and lawmakers should close the loophole allowing services to skirt compliance by not asking. If users register for an account, they should be required to enter an age of 13 or older.
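The mechanism being asked for here is not complicated. A hypothetical sketch, emphatically not Instagram’s actual code, might look something like this:

```python
# A minimal sketch (hypothetical, not Instagram's actual code) of the
# sign-up age gate described above: collect a birthdate, compute the
# user's age, and refuse registration below the COPPA threshold of 13.
from datetime import date

COPPA_MIN_AGE = 13

def age_on(birthdate, today):
    years = today.year - birthdate.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def can_register(birthdate, today=None):
    today = today or date.today()
    return age_on(birthdate, today) >= COPPA_MIN_AGE

# Example: a 12-year-old is turned away at sign-up.
print(can_register(date(2007, 6, 1), today=date(2019, 12, 1)))  # False
```

The hard part, as discussed below, is not the check itself but verifying that the birthdate entered is honest.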

Instagram’s parent company Facebook has been asking for birthdate during account registration since its earliest days. Sure, it adds one extra step to sign up, and impedes its growth numbers by discouraging kids from getting hooked early on the social network. But it also benefits Facebook’s business by letting it accurately age-target ads.

Most importantly, at least Facebook is making a baseline effort to keep out underage users. Of course, as kids do when they want something, some are going to lie about their age and say they’re old enough. Ideally, Facebook would go further and try to verify the accuracy of a user’s age using other available data, and Instagram should too.

Both Facebook and Instagram currently have moderators lock the accounts of any users they stumble across whom they suspect are under 13. Users must upload government-issued proof of age to regain control. That policy only went into effect last year after the UK’s Channel 4 reported that a Facebook moderator was told to ignore seemingly underage users unless they explicitly declared they were too young or were reported for being under 13. An extreme approach would be to require this for all signups, though that might be expensive, slow, significantly hurt signup rates, and annoy of-age users.

Instagram is currently on the other end of the spectrum. Doing nothing around age-gating seems recklessly negligent. When asked for comment about why it doesn’t ask users’ ages, how it stops underage users from joining, and if it’s in violation of COPPA, Instagram declined to comment. The fact that Instagram claims to not know users’ ages seems to be in direct contradiction to it offering marketers custom ad targeting by age such as reaching just those that are 13.

Instagram Prototypes Age Checks

Luckily, this could all change soon.

Mobile researcher and frequent TechCrunch tipster Jane Manchun Wong has spotted Instagram code inside its Android app that shows it’s prototyping an age-gating feature that rejects users under 13. It’s also tinkering with requiring your Instagram and Facebook birthdates to match. Instagram gave me a “no comment” when I asked whether these features would officially roll out to everyone.

Code in the app explains that “Providing your birthday helps us make sure you get the right Instagram experience. Only you will be able to see your birthday.” Beyond just deciding who to let in, Instagram could use this info to make sure users under 18 aren’t messaging with adult strangers, that users under 21 aren’t seeing ads for alcohol brands, and that potentially explicit content isn’t shown to minors.

Instagram’s inability to do any of this clashes with its and Facebook’s big talk this year about their commitment to safety. Instagram has worked to improve its approach to bullying, drug sales, self-harm, and election interference, yet there’s been not a word about age gating.

Meanwhile, underage users promote themselves on pages for hashtags like #12YearOld where it’s easy to find users who declare they’re that age right in their profile bio. It took me about 5 minutes to find creepy “You’re cute” comments from older men on seemingly underage girls’ photos. Clearly Instagram hasn’t been trying very hard to keep underage users off the app.

Illegal Growth

I brought up the same unsettling situations on Musical.ly, now known as TikTok, to its CEO Alex Zhu on stage at TechCrunch Disrupt in 2016. I grilled Zhu about letting 10-year-olds flaunt their bodies on his app. He tried to claim parents run all of these kids’ accounts, and got frustrated as we dug deeper into Musical.ly’s failures here.

Thankfully, TikTok was eventually fined $5.7 million this year for violating COPPA and forced to change its ways. As part of its response, TikTok started showing an age gate to both new and existing users, removed all videos of users under 13, and restricted those users to a special TikTok Kids experience where they can’t post videos, comment, or provide any COPPA-restricted personal info.

If even a Chinese social media app that Facebook’s CEO has warned threatens free speech with censorship is doing a better job protecting kids than Instagram, something’s gotta give. Instagram could follow suit, building a special section of its apps just for kids where they’re quarantined from conversing with older users that might prey on them.

Perhaps Facebook and Instagram’s hands-off approach stems from the fact that CEO Mark Zuckerberg doesn’t think the ban on under-13-year-olds should exist. Back in 2011, he said “That will be a fight we take on at some point . . . My philosophy is that for education you need to start at a really, really young age.” He’s put that into practice with Messenger Kids which lets 6 to 12-year-olds chat with their friends if parents approve.

The Facebook family of apps’ ad-driven business model and earnings depend on constant user growth that could be inhibited by stringent age gating. It surely doesn’t want to admit to parents that it’s let kids slide into Instagram, to advertisers that they were paying to reach children too young to buy anything, and to Wall Street that it might not have 2.8 billion legal users across its apps as it claims.

But given Facebook and Instagram’s privacy scandals, addictive qualities, and impact on democracy, it seems like proper age-gating should be a priority as well as the subject of more regulatory scrutiny and public concern. Society has woken up to the harms of social media, yet Instagram erects no guards to keep kids from experiencing those ills for themselves. Until it makes an honest effort to stop kids from joining, the rest of Instagram’s safety initiatives ring hollow.

In the waning years of the last millennium, at my university, one of the causes célèbres of the progressive left was a concept known as “Manufacturing Consent,” the title of a book and film, by and starring Noam Chomsky. Its central thesis was that US mass media “are effective and powerful ideological institutions that carry out a system-supportive propaganda function, by reliance on market forces, internalized assumptions, and self-censorship.”

It’s fair to say that history has been pretty kind to this theory. Consider the support drummed up by mass media for the invasion of Iraq in 2003. To quote the public editor of the New York Times, “To anyone who read the paper between September 2002 and June 2003, the impression that Saddam Hussein possessed, or was acquiring, a frightening arsenal of W.M.D. seemed unmistakable. Except, of course, it appears to have been mistaken.” Consider the September 2002 dossier published by the UK government “to bolster support for war” which turned out to be full of spectacularly incorrect information, and the media’s failure to interrogate those claims.

It’s hard to overstate just how cataclysmic these errors were. If the mass media had pushed back against the false claims of weapons of mass destruction, we might have avoided the Iraq war, which killed hundreds of thousands and cost trillions of dollars. Saddam Hussein was not exactly a tough act to follow, but the US still managed to follow its falsely motivated war with a botched occupation which turned Iraq, and arguably the larger Middle East to this day, into a bloodbath.

An interesting question is: what would have happened if today’s social media had been around in 2003? Today, if a wrong assertion is promoted by the mass media, it doesn’t take long for subject-matter experts to appear on Facebook and Twitter to correct it, either going viral themselves or becoming the subjects of countervailing media stories.

This doesn’t necessarily mean catastrophe would have been averted. But at least a possible corrective to the collective hysteria of the mass media would have existed, unlike in 2002-3. (Yes, those were the days of Blogspot and LiveJournal, but they didn’t have anything like the reach or significance of today’s social media.)

Consider a more recent event: the 2016 American presidential election. It has become an article of faith, in certain quarters, that it was won and lost by the diabolical use of Facebook ads, especially in conjunction with the psychographic superscience of Cambridge Analytica. This is ridiculous. First, no one credible thinks CA’s purported ability to mind control Facebook users by showing them “psychographically” targeted ads was anything other than snake-oil nonsense.

Second, as Nate Silver points out, the impact of social-media ads was enormously less than the impact of mass media. Remember the months of hysteria about Hillary Clinton’s emails? Remember how it turned out to be a complete non-story? Doesn’t this remind you of Iraq’s WMDs?

“The media’s coverage of Hillary Clinton’s email scandal was probably literally 50 times more important to the outcome of the 2016 election than Trump ads on Facebook.” Perhaps, my fellow mass media, the fault lies not in our psychographic bullshit artists, but in ourselves.

Social media has many downsides. You don’t have to go particularly deeply into my own back catalog to discover that I am a harsh critic of Facebook myself. But let’s not pretend that mass media, just because it’s older, is therefore perfect. It has catastrophic failure modes of its own. In fact — whisper it — maybe we’re a lot better off, net, with both social media and mass media, in that each can act as a counterbalancing corrective on the other’s flaws and failure modes.

The progressive left may have gone from “mass media is the enemy” to “Big Tech social media is the enemy,” but maybe, and I know this sounds crazy because it’s on the Internet, but hear me out here, maybe there’s room for a little nuance; maybe they both have good and bad aspects, and could possibly balance one another out. If you don’t think mass media needs a corrective, let me remind you once again of the Iraq War and But Her Emails, to name but two of many, many examples. Maybe there exists a future in which social and mass media are each a cure for what ails the other.

The International Energy Agency published its annual World Energy Outlook ten days ago. In this era of climate crisis, that outlook includes, as you would expect, stern warnings of catastrophic warming. But it also includes interesting nuggets of hope and optimism — and they aren’t alone. Global warming is a slow-motion in-progress planetary train-wreck, true; but you don’t have to look too hard to find evidence that new technology might yet, eventually, after enormous expense and hard work, get us halfway back on non-catastrophic rails.

Consider the dreaded coal mine. Coal mines are really, really bad. How bad? New research suggests that methane leakage from coal mines, alone — without even considering burning the coal after it’s mined! — has “a greater warming impact than aviation and shipping combined.” (Italics mine.) Fly less and drink from paper straws if it makes you feel better, but if you really want to fight global warming, help close coal mines and/or prevent new ones from opening.

The WEO projects a long plateau in our collective reliance on coal over the next decades. That may seem surprising, but: “rising demand in India is one of the key factors holding global coal use steady, despite rapid falls in developed economies.” However, in India, “510GW of new coal has been cancelled since 2010 due to competition from cheaper renewables, financial distress at utility firms and public opposition” while Indian “coal power generation shows a declining trend since August 2019.” (Again, italics mine.) This is because of a decrease in demand, but it’s one that’s especially well-timed …

…because at the same time, renewables are on a tear in India, and around the world. They just keep getting cheaper. The IEA is infamous for drastically, comically underestimating how fast solar power capacity will grow around the world. (Here’s a paper which tries to explain why.) Bewilderingly, they are sticking to these underestimates, despite having been proven spectacularly wrong every year for the last decade.

It’s very easy to envision a scenario in which solar continues to skyrocket, coal diminishes faster than the IEA currently projects, and we emit significantly less methane and carbon dioxide than expected. (Oh, and reap massive public health benefits, too.) Yes, renewables will eventually run into significant unsolved intermittency problems, but as Ramez Naam puts it, “these problems are distant.”

In the shorter term, even if the IEA’s impressively pessimistic projections are correct, they are still actually reason for relative optimism. The famous IPCC Fifth Assessment report on climate change gave us four scenarios. The worst case is known as RCP8.5 (RCP for Representative Concentration Pathway, a name only a bureaucrat could love, and 8.5 for the watts per square meter of radiative forcing, i.e. the difference between energy received from the sun and that radiated back out to space). The second-worst is RCP6.0. And the IEA’s World Energy Outlook seems to indicate that we’re currently tracking better than either of those cases.

Again, this is relative optimism: it’s by no means “everything is going to be fine,” but it is “thanks to the spectacular growth of renewable energy, we do not seem to be on course for the IPCC’s worst or even second-worst projection.” Of course this is all estimation. Models are complex and comparisons are hard. For instance, the IEA projections do not include cement.

But speaking of cement, there’s a recent potential breakthrough there, too. Cement is responsible for some 8% of global carbon emissions, and 40% of those come from simply heating limestone to over 1,000 degrees Celsius. Heliogen’s new solar thermal plant can do that with sunlight — using machine learning.

Of course what we ultimately want is carbon capture. But wait! There’s a recent potential breakthrough there, as well. A few years ago the cost of capturing carbon from the air was estimated at hundreds of dollars per ton. But that is on a steep decline, with estimates for new technologies now as low as $50/ton. (A typical car releases about 5 tons per year.)

Hockey-sticking renewable energies. Solar thermal cement. Cheaper carbon capture. In what may often seem like the forthcoming wasteland of the climate crisis, there are a surprising number of green shoots. Of course not all of them may grow. There’s many a slip ‘twixt breakthrough proof-of-concept and actual production at scale. And there’s always the chance that better data and models may undercut apparent (relatively) good news.

But at the same time, in addition to the apocalypticists who seem to take a grim glee in oncoming catastrophe, and the hairshirt moralizers who seem to believe that suggesting anything other than “we’re all doomed, unless we go back to living in carbon-neutral caves!” is dangerous, there is another narrative. One which says “we, as a species, have a huge amount of incredibly expensive work to do, yes, but despair is not the only thing on the menu.” It’s true that politicians seem unlikely to save us from a climate disaster. Technology, however, still might.

Something strange is afoot in the world of cryptocurrencies. For the first time since Satoshi dropped Bitcoin on us like a benevolent bomb, this painfully new, highly bizarre field has become … well … boring. The true believers will tell you that great strides are being made, and the mainstream breakthrough is just around the corner, but they’ve been saying that for long enough that it’s beginning to seem reasonable to wonder whether these wolves were ever real.

I know, I know, it seems especially weird to be saying this at the same time that the President of China and CEO of Facebook have both become blockchain advocates. But China’s cryptocurrency, if it happens, will be a panopticoin, a tool to centralize monetary control even more firmly in the hands of the Communist Party, nothing like the decentralized censorship-resistant programmable money that the crypto community is theoretically all about; and Facebook’s, while making technical progress, keeps losing partners and gaining enemies.

The crypto community is currently all agog about “DeFi,” for decentralized finance, a movement which basically expands cryptocurrencies from “censorship-resistant money” to “censorship-resistant financial instruments,” such as collateralized loans and interest-bearing investments, along with “staking” (not really DeFi, but often treated as such). Inside the crypto world, this seems like a revolution which will one day replace Wall Street. Outside the crypto world, it seems … a little like monks debating how many angels can dance on the head of a pin, about something almost no one is actually using and nobody outside the monastery cares about.

It’s easy to get the impression the cryptocurrency world has sacrificed technical engineering in favor of financial engineering. It’s easy to see it as having abandoned “banking the unbanked,” the alleged initial noble goal of many, in favor of “offering sophisticated financial instruments to the unbanked,” long before any of those famous unbanked have actually been, you know, banked. And I’m sorry to report that you wouldn’t be entirely wrong.

But there are real technical advances being made. It’s just that they’re mostly slow and behind the scenes, and in the interim, the community’s “MOPs and sociopaths” have seized on DeFi.

There is some visible progress. Zcash is making apparent breakthroughs in important, foundational cryptographic research. Tezos continues to upgrade its governance algorithms — modify its code constitution, basically — successfully.

On the application layer, I’m interested in Vault12, which uses “friends and family to safeguard crypto assets” — basically, instead of entrusting the secret keys which control your cryptocurrencies to a third party like an exchange, something not particularly different from traditional banking, you protect them among people you trust, so that some number of them can collaborate with you to recover your keys if they’re lost, using a cryptographic protocol known as Shamir’s Secret Sharing. Luminaries such as Vitalik Buterin and Christopher Allen have argued for “social key recovery” for some time, and it’s interesting to see it offered by a slick new Valley startup.
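For the curious, here is a toy sketch of the idea underlying that kind of social key recovery; my own illustration of Shamir’s scheme, not Vault12’s implementation, which handles real keys as bytes with audited cryptography:

```python
# A minimal sketch of Shamir's Secret Sharing, the primitive behind
# "social key recovery": split a secret into n shares such that any k
# of them can reconstruct it, while k-1 shares reveal nothing.
# Toy illustration only -- real wallets use audited libraries.
import random

PRIME = 2**127 - 1  # all arithmetic is done in the finite field GF(PRIME)

def split(secret, k, n):
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=42, k=3, n=5)       # give five friends one share each
print(recover(random.sample(shares, 3)))  # any three of them can rebuild: 42
```

The appeal is that no single friend, and no single company, ever holds enough on their own to steal or lose your keys.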

But a lot of what’s happening is more fundamental, in search of the ability to support many more transactions than today’s blockchains. The entire foundation of today’s second-leading cryptocurrency, Ethereum, is being torn apart and replaced wholesale, in search of “Ethereum 2.0.” Bitcoin remains much more stable and conservative, but a whole new story is being added to its foundations, the Lightning Network. Both make me uneasy. A fundamental rewrite is always worrying. Lightning may scale, but it is if anything even more user-hostile than Bitcoin, basically the cryptocurrency equivalent of a hard-to-use prepaid credit card. Still, the permissionless equivalent of prepaid credit cards would be good for the unbanked that everyone’s clearly so worried about, right?

I’m also uneasy because almost all blockchain scaling solutions — Lightning, sharding, Plasma, optimistic rollup, etc. — turn fundamental blockchain security from something relatively passive (check the hashes and use the chain with the most computational power) to something active (“watchtowers,” “fraud proofs”). This seems to me to increase the security attack surface a lot.
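To make the contrast concrete, here is a deliberately simplified sketch of the passive rule, my own illustration rather than any client’s real code; the active schemes replace this quiet arithmetic with parties who must watch the chain and publish challenges in time:

```python
# A simplified sketch of "passive" chain selection: verify each block's
# proof-of-work and follow whichever candidate chain embodies the most
# cumulative work. (Real nodes also validate block linkage, transactions,
# difficulty adjustment, timestamps, and much more.)
import hashlib

def block_hash(header: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

def chain_work(chain):
    # chain: list of (header_bytes, target) pairs, genesis first.
    total = 0
    for header, target in chain:
        if block_hash(header) > target:   # proof-of-work check failed
            return 0                      # an invalid chain scores nothing
        total += 2**256 // (target + 1)   # work implied by the target
    return total

def best_chain(candidate_chains):
    return max(candidate_chains, key=chain_work)
```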

All these issues may yet be solved. Sure. But at the same time, it feels like dissonance between the attitude inside the crypto bubble and that of mundanes may never have been greater. Meanwhile, the dark spectre of Tether hangs over the entire industry. OK, circumstantial evidence is inadmissible for good reason … but there sure is a lot of it.

I’ve argued before that “ongoing associations with a cloud of crazy scandal and hangers-on snake-oil salespeople — all of which would be catastrophic signs for, say, a traditional new startup — can actually be indicators of the strength, not weakness” of the cryptocurrency movement …

…but at some point, your religion — or “brain virus,” as Naval Ravikant once called cryptocurrencies — has to begin to appeal to people who do not actually live on your compound, or else you are going to remain a cult and wither away. When is that going to happen? Is that going to happen? The answer remains no clearer than it was five years ago.

My MacBook Pro is three years old, and for the first time in my life, a three-year-old primary computer doesn’t feel like a crisis which must be resolved immediately. True, this is partly because I’m waiting for Apple to fix their keyboard debacle, and partly because I still cannot stomach the Touch Bar. But it is also because three years of performance growth ain’t what it used to be.

It is no exaggeration to say that Moore’s Law, the mindbogglingly relentless exponential growth in our world’s computing power, has been the most significant force in the world for the last fifty years. So its slow deceleration and/or demise are a big deal, and not just because the repercussions are now making their way into every home and every pocket.

We’ve all lived in hope that some other field would go exponential, giving us another, similar era, of course. AI/machine learning was the great hope, especially the distant dream of a machine-learning feedback loop, AI improving AI at an exponential pace for decades. That now seems awfully unlikely.

In truth it always did. A couple of years ago I was talking to the CEO of an AI company who argued that AI progress was basically an S-curve, and we had already reached its top for sound processing, were nearing it for image and video, but were only halfway up the curve for text. No prize for guessing which one his company specialized in — but he seems to have been entirely correct.

Earlier this week OpenAI released an update to their analysis from last year regarding how the computing power used by AI[1] is increasing. The outcome? It “has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000x (a 2-year doubling period would yield only a 7x increase).”
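To see how starkly those two doubling times diverge, here is a rough calculation that approximately reproduces the quoted figures; my own arithmetic, not OpenAI’s:

```python
import math

# Rough check of the figures quoted above: how long does a 3.4-month
# doubling time take to produce a ~300,000x increase, and what would a
# 2-year (Moore's-Law-style) doubling time have produced over that span?
doublings = math.log2(300_000)        # ~18.2 doublings
months = doublings * 3.4              # ~62 months, i.e. a bit over five years
moore_factor = 2 ** (months / 24)     # ~6x, the same ballpark as the ~7x cited

print(f"{months / 12:.1f} years at a 3.4-month doubling time")
print(f"~{moore_factor:.0f}x over the same span at a 2-year doubling time")
```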

That’s … a lot of computing power to improve the state of the AI art, and it’s clear that this growth in compute cannot continue. Not “will not”; can not. Sadly, the exponential growth in the need for computing power to train AI has happened almost exactly contemporaneously with the diminishment of the exponential growth of Moore’s Law. Throwing more money at the problem won’t help — again, we’re talking about exponential rates of growth here, linear expense adjustments won’t move the needle.

The takeaway is that, even if we assume great efficiency breakthroughs and performance improvements to reduce the rate of doubling, AI progress seems to be increasingly compute-limited at a time when our collective growth in computing power is beginning to falter. Perhaps there’ll be some sort of breakthrough, but in the absence of one, it sounds a whole lot like we’re looking at AI/machine-learning progress leveling off, not long from now, and for the foreseeable future.


[1] It measures “the largest AI training runs,” technically, but this seems trend-instructive.

At last, it is here! The truly self-driving car, no human behind the wheel! For the public! …A few hundred of them, in a closed beta, in a small corner of sun-drenched (never snow-drenched, almost never water-drenched) suburban Phoenix, five years later than some people were predicting six years ago.

Few new technologies have ever been more anticipated and more predicted than the self-driving car. Anyone who drives cannot help but imagine not having to drive any more. It has been said that they will change our cities, our homes, our commerce, even our fundamental way of life.

But at the same time, the actual progress has seemed … well … glacial, to the casual driver’s eye. We’re mostly talking about software, after all. OK, and LIDAR, and cameras, but the software is the key. People couldn’t help but expect a roll-out like that of smartphones, where the launch of the iPhone in 2007 led to adoption by every tech-savvy person by 2010, and the vast majority of the developed world by 2013.

People couldn’t help but expect a mass market push. In 2014, the optimistic attitude was, maybe your next car is electric; then your next one — or even that same one, courtesy of an OTA software update — will be self-driving! Set the controls for the heart of Los Angeles, or Boston, or both, and lie back and snooze, baby.

That’s not how it’s going to happen. Waymo’s closed beta is a huge step, yes, but it is also a tiny incremental iteration. We aren’t going to see a Big Bang moment, when suddenly you buy your next car and it will carry you unaided from Vancouver to Halifax, or even Vancouver to Whistler. Instead we’re going to see a series of tiny steps forward, measured over years, frequently in industrial or commercial settings rather than personal ones.

First they drive the broad, sunny streets of Phoenix; then highways; then in more complex situations, such as airports and downtowns; then in heavy rain; then amid detours and road closures; then on rough, winding country roads prone to landslides and flooding; then (some considerable time from now, says your Canadian correspondent) in snow and ice…

And even then, how can a truly self-driving car handle anomalous situations, when the car doesn’t know what to do and screeches to a halt? Even more importantly, how will it know it’s in an anomalous situation and it doesn’t know what to do? Will cars be drivable remotely, in such cases? If so, how will we secure that process? What about adversarial attempts to manipulate the neural networks behind the figurative wheel, by feeding them misleading inputs that they respond to but the naked human eye might not notice?
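To make that last worry concrete, here is a minimal sketch of the best-known such attack, the Fast Gradient Sign Method; an illustration of the general technique, not anyone’s production code:

```python
# A minimal sketch (illustrative only) of the kind of "misleading input"
# meant above: the Fast Gradient Sign Method nudges every pixel slightly
# in the direction that most increases the model's loss, yielding an
# image that looks unchanged to a human but can flip a classifier's
# prediction.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb each pixel by +/- epsilon along the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```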

I suppose we have to talk about the so-called “trolley problem,” too. I’d rather not. It is by far the silliest and most overanalyzed question about self-driving, since in 99.9% of problematic situations the solution is simply “stop.” Anything like the trolley problem will only come up in the edgiest of edge cases — but, if only to satisfy the public, those cases will have to be publicly hashed out as well.

The larger issue brought up by the “trolley problem problem” is that we have no collective social understanding of how to judge the risks posed by self-driving cars, and what risks we should accept. On paper, if all of America moved to self-driving cars overnight and they started killing 100 people every single day (for context, about 37,000 Americans already die in car crashes every year, roughly a hundred a day) … America should rejoice, because the death rate from car crashes will have fallen!

In practice, however, he understated, it seems likely that America, or at least American media, will not rejoice. Rather the opposite.

When you step into a self-driving vehicle, you will be taking a risk, just as you do whenever you step into a human-driven vehicle. But it will be harder to measure this new risk, and even if/when we can, we won’t weigh it the same way that we do the old risk. Such is human nature. Liability alone will be a giant can of worms.

We have an entire infrastructure of regulation built around the old risk. It will change only slowly to manage this new risk, and it will have great difficulty sloughing off old preconceptions which no longer apply. Dream of cars with no steering wheels all you like, for example, but my guess is that in many jurisdictions, self-driving cars will have to include a legal driver among their passengers at all times.

When you consider the combination of the technological challenges, the social challenges, and the regulatory challenges, all of which are seriously nontrivial — it seems apparent that we are going to creep, rather than bound, into the self-driving future.

And so: self-driving vehicles will slowly, quietly, take over closed industrial / commercial settings. Waymo’s self-driving taxis, followed (apparently at some distance) by others, will very gradually expand their beachhead from Phoenix, bit by bit and clime by clime, with occasional setbacks. Personal cars will continue to increase their self-driving capabilities one situation at a time: parallel parking, stop-and-go highway traffic, parking garages, certain patches of quiet suburban territory.

This means there will almost certainly be no point at which you suddenly have a self-driving car. Self-driving isn’t a product, an event, or a feature; it’s an aspirational limit to which we will asymptotically approach. We’re collectively already on that curve — which is exciting! — but it seems apparent that its climb will be much more gradual than almost everyone, including me, thought not so long ago.

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods, or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by the New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in an October 13th TechCrunch opinion piece: that Facebook ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, it should fact-check them and/or restrict them to an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.

(Photo: Facebook CEO Mark Zuckerberg testifies before the House Financial Services Committee in Washington, D.C., on Wednesday, October 23, 2019. Photo by Aurora Samperio/NurPhoto via Getty Images)

Over 250 of Facebook’s 35,000 employees have signed the letter, which declares: “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.

Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers, including Representative Alexandria Ocasio-Cortez (D-NY). She left him tongue-tied during congressional testimony when she asked exactly what kinds of misinformation were allowed in ads.

But then on Friday, Facebook blocked an ad designed to test its limits by claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook also told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates until it has the resources to scan more. Ads fact-checked as false should receive an interstitial warning that blocks their content rather than just a “false” label. That could be paired with giving political ads a bigger disclaimer (without making the ads themselves look more prominent overall) and with allowing targeting only by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could level the playing field, and broad silence periods, especially while voting is underway, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which will act as a kind of supreme court for moderation decisions and policies.

Zuckerberg’s core argument for the policy is that, over time, history bends towards more speech, not censorship. But that succumbs to the utopian fallacy that technology advantages the honest and the dishonest equally. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and Facebook’s own employees have put forward, there are progressive approaches to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via the New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.

As I write this, massive fires are erupting all over California, and massive protests are erupting all over the world. Is the former a facet of the climate crisis? Is the latter a symptom of hyperpolarization caused by hyperconnectivity? Yes, I mean no, I mean it’s impossible to say. That’s what it means to live in a stochastic age.

This is an era of stochastic terrorism: “The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.” It is also an era of climate crisis as a stochastic disaster, causing a whole spectrum of ‘random’ natural disasters to become ever more probable and terrible.

Is ours also an era of stochastic political strife? Does the world’s increased connectivity, aided by social media’s inherent amplification of outrage, have second-, third-, or fourth-order effects which heat rhetoric and protest, triggering secession movements and massive rejection of the status quo? Is our hyperconnectivity the political equivalent of global warming?

If so, it would explain a lot. The baffling and horrifying rise of neo-Nazis and white supremacy around the world. The increasing political polarization of seemingly every polity. The growing dearth of anything like a political middle ground. The huge protests scattered across the globe, against almost every form of government.

But let’s not be too quick to diagnose this. This might be somehow periodic: terrorism and protests were both more common (per capita) in the late 60s and early 70s than they are today. It might just be a symptom of, and backlash against, a global trend of neoliberalism-morphing-towards-antidemocratic-oligarchy, which, sadly, is the recent economic / political history of much of the world.

The hypothesis is that this stochastic strife has something to do with technology and hyperconnectivity, that across the world we’re experiencing the political equivalent of global warming. Intriguing, but far from proven. How might we test or measure it?

The obvious test would be to introduce a control group and A/B test across a representative slice of the planet, but that seems pretty unlikely. I’m not aware of any reliable quantitative measures of political strife, and either way such a test suffers from the inevitable problem that it’s impossible to tease out just one of the myriad factors which accumulate (or not) into political fury and protest.

— At least it’s impossible at any given moment. But we do know that connectivity is likely to just keep increasing, especially across the developing world, and that averaged across nations it is likely to change faster than almost any other factor at play.

So if this hypothesis is correct, we ain’t seen nothin’ yet. Political outrage, massive protests, and secession movements will continue to grow worldwide, eventually at a pace which makes California wildfires seem leisurely.

Let’s hope that either the hypothesis is proved wrong, or that we find a new way, transcending traditional nation-states, to distribute political power … before all those eruptions turn into conflagrations.