Steve Thomas - IT Consultant

Wasting time every night debating with yourself or your partner about what to watch on Netflix is a drag. It burns people’s time and good will, robs great creators of attention, and leaves Netflix vulnerable to competitors who can solve discovery. Netflix itself says the average user spends 18 minutes per day deciding.

To date, Netflix’s solution has been its state-of-the-art artificial intelligence that offers personalized recommendations. But that algorithm is ignorant of how we’re feeling in the moment, what we’ve already seen elsewhere, and whether we’re also factoring in what someone else with us wants to watch.

Netflix is considering a Shuffle button. [Image Credit: AndroidPolice]

This week Netflix introduced one basic new approach to discovery: a shuffle button. Click on a show you like, such as The Office, and it will queue up a random episode. But that only works if you already know what you want to watch, if it’s not a movie, and if it’s not a serialized show you have to watch in order.

Here are three much more exciting, applicable, and lucrative ways for Netflix (or Hulu, Amazon Prime Video, or any of the major streaming services) to get us to stop browsing and start chilling:

Netflix Channels

For the history of broadcast television, people surfed their way to what to watch. They turned on the tube, flipped through a few favorite channels, and jumped in even if a show or movie had already started. They didn’t have to decide between infinite options, and they didn’t have to commit to starting from the beginning. We all have that guilty pleasure we’ll watch until the end whenever we stumble upon it.

Netflix could harness that laziness and repurpose the concept of channels so you could surf its on-demand catalog the same way. Imagine if Netflix created channels dedicated to cartoons, action, comedy, or history. It could curate non-stop streams of cherry-picked content, mixing classic episodes and films, new releases related to current events, thematically relevant seasonal video, and Netflix’s own Original titles it wants to promote.

For example, the comedy channel could run modern classic films like The 40-Year-Old Virgin and Van Wilder during the day, top episodes of Arrested Development and Parks And Recreation in the afternoon, a featured recent release film like The Lobster in primetime, and then off-kilter cult hits like Monty Python or its own show Big Mouth in the late-night slots. Users who finish one video could get turned on to the next, and those who might not start a personal favorite film from the beginning might happily jump in at the climax.

Short-Film Bundles

There’s a rapidly expanding demographic of post-college, pre-children people desperately seeking after-work entertainment. They’re too old or settled to go out every night, but aren’t so busy with kids that they lack downtime.

But one big shortcoming of Netflix is that it can be tough to get a satisfying dose of entertainment in a limited amount of time before you have to go to bed. A 30-minute TV show is too short. A lot of TV nowadays is serialized, so a single episode is either incomprehensible or too much of a cliffhanger, but sometimes you can’t stay up to binge. And movies are too long, so you end up exhausted if you manage to finish one in a single sitting.

Netflix could fill this gap by bundling three or so short films together into thematic collections that are approximately 45 minutes to an hour in total.

Netflix could commission Originals and mix them with the plethora of untapped existing shorts that have never had a mainstream distribution channel. They’re often too long or prestigious to live on the web, but too short for TV, and it’s annoying to have to go hunting for a new one every 15 minutes. The whole point here is to reduce browsing. Netflix could create collections related to different seasons, holidays, or world news moments, and rebundle the separate shorts on the fly to fit viewership trends or try different curational angles.

Often artful and conclusive, they’d provide a sense of culture and closure that a TV episode doesn’t. If you get sleepy you could save the last short, and there’s a feeling of low commitment since you could skip any short that doesn’t grab you.

The Nightly Water Cooler Pick

One thing we’ve lost with the rise of on-demand video is those zeitgeist moments where everyone watches the same thing on the same night and can then talk about it together the next day. We still get that with live sports, the occasional tentpole premiere like Game Of Thrones, or when a series drops for binge-watching like Stranger Things. But Netflix has the ubiquity to manufacture those moments that stimulate conversation and a sense of unity.

Netflix could choose one piece of programming per night per region, perhaps a movie, a short arc of TV episodes, or one of the short-film bundles I suggested above, and stick it prominently on the home page. This Netflix Zeitgeist choice would help override people’s picky preferences that get them stuck browsing by applying peer pressure like, “well, this is what everyone else will be watching.”

Netflix’s curators could pick content matched to an upcoming holiday, like a Passover TV episode; show a film whose reboot is about to debut, like Dune or Clueless; pick a classic from an actor who’s just passed away, like Luke Perry in the original Buffy movie; or show something tied to a big event, as Netflix is currently doing with Beyonce’s Coachella concert film. Netflix could even let brands or content studios pay to have their content promoted in the Zeitgeist slot.

As streaming service competition heats up and all the apps battle for the best back catalog, it’s not just exclusives but curation and discovery that will set them apart. These ideas could make Netflix the streaming app where you can just turn it on to find something great, be exposed to gorgeous shorts you’d have never known about, or get to participate in a shared societal experience. Entertainment shouldn’t have to be a chore.

Well, it was surreal while it lasted, by which I mean the 2017-18 cryptocurrency bubble. For a while there, Coinbase was #1 in the App Store, Bitcoin was above $10K, and there were more notional crypto zillionaires out there than you could shake a Merkle tree at.

Those were the crazy days. Now, though, a rude awakening has come. Now Bitcoin is down to $3200 and counting, other cryptocurrencies are down well over 90%, and worst of all, none of the billions of dollars which poured into cryptocurrencies during the bubble have led to anything even remotely like a killer app. Instead the crypto space remains a giant casino of penny stocks, with little to no utility outside of financial speculation. Don’t kid yourself — this is nothing like the dot-com crash.

What comes next? Not much, at least not soon. I am sorry to report that we have entered the crypto winter, as the estimable Michael Casey puts it, and, like the one in Game of Thrones, it’s likely to be a long one. Herein please find your guide to the icy landscape ahead, and some predictions of what we’ll find there:

The Business Side

We’re going to see sizable numbers of both cryptocurrencies, and the businesses built on them, simply collapse. In fact we’re seeing that already: Steemit has laid off 70% of its staff, and even mighty Consensys has cut 13%. Of the more than 2000 cryptocurrencies tracked by CoinMarketCap, hundreds upon hundreds will wither into disuse until their liquidity turns to ice and their price to zero. Meanwhile, many who run their own blockchains will find themselves increasingly vulnerable to 51% attacks. In the winter, only the strong survive; the weak are culled.

We’ll also see more infighting. The schism within a schism which has marked Bitcoin Cash of late is only the beginning. A rising tide has room for many ships, but they’ll have to fight to survive this ebb. Which blockchain will become the default for smart contracts — Ethereum, EOS, or Tezos? It’s hard to see all three remaining relevant. (My money’s on the first and last.) Which will be the privacy-preserving cryptocurrency: Monero, ZCash, or an upgraded Bitcoin? Here it’s easier to see room for all three, but it’s by no means guaranteed.

Meanwhile, as the winter leads to widespread losses, regulators will grow ever more intrusive, trying to minimize or stop future losses due to fraud or negligence. We’ll see more regulatory tightening, more fines, more bans, and, I predict, at least one case of serious criminal fraud by a major player in the cryptocurrency world. Will it be Tether? Will it be an exchange? Who can say? But I’d be extremely surprised if that didn’t happen.

Let’s look to the brighter side. I predict we’ll also see two welcome new developments: at least one interesting and unexpected use case for cryptocurrencies in the developing world, and at least one more from a major tech player. (Facebook would be a pretty good bet, but it’s not the only one.) These will not lead to a massive upswing in the whole space, though. Which is good, because the way all cryptocurrencies trade in lockstep is one of the most compelling proofs that they’re currently not even close to a real market.

That said, trading will continue to thrive, because traders love volatility — but exchanges will shrink their short-term aspirations as their fees plateau and/or decrease. What’s more, trading will increasingly focus on a smaller number of cryptocurrencies with real tech/biz differentiation, eg ZCash, Monero, Tezos, and Binance Coin. (Say what you like about Binance — I don’t like them much either — but their token, unlike almost all tokens, has an actual business model.)

While this all happens we’ll see increasing Bitcoin dominance, as a “flight to quality” continues; clearly, if only one cryptocurrency were to survive, it would be that one. Meanwhile, its hashrate will continue to decrease, which is good for the world, as that means less electricity consumption.

Businesses will not adopt private blockchains en masse, or really at all, because if you want replicated write-once-read-many databases whose contents are cryptographically signed, it’s easier to just … use replicated write-once-read-many databases whose contents are cryptographically signed, rather than a spectacularly inefficient blockchain. What makes blockchains interesting is their permissionlessness.
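To make that concrete, here is a minimal sketch (in Python) of the alternative being described: an append-only, hash-chained log whose entries are signed. Everything here is illustrative; the record format is invented, and an HMAC with a shared key stands in for real public-key signatures. But it shows how you get tamper-evidence without a blockchain:

```python
import hashlib
import hmac
import json

# Assumed for illustration: replicas share this signing key out of band.
KEY = b"replica-shared-secret"

def append(log, record):
    """Append a record, chaining it to the previous entry's signature."""
    prev = log[-1]["sig"] if log else ""
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"record": record, "prev": prev, "sig": sig})

def verify(log):
    """Recompute every signature in order; any edit anywhere fails the check."""
    prev = ""
    for entry in log:
        body = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
        expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["sig"]):
            return False
        prev = entry["sig"]
    return True

log = []
append(log, {"event": "shipment received"})
append(log, {"event": "shipment inspected"})
assert verify(log)

log[0]["record"]["event"] = "tampered"  # altering history breaks verification
assert not verify(log)
```

Any replica holding the key can verify the whole chain, and altering any past entry causes verification to fail; no consensus protocol, miners, or tokens required.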

Conversely, ether will continue to shrink in value until/unless a dapp actually takes off, which seems unlikely in the near future. I know this sounds harsh, and technically I’m a fan of Ethereum — my own pet crypto projects are built on it — but its value proposition is built around dapps, and no dapp hits means no value. Unlikely, but not impossible; we’re seeing green shoots of on-chain security tokens, the most likely near-term prospect for actual meaningful usage of Ethereum smart contracts. I predict that at some point during the crypto winter some bright startup will make its own equity, and its own cap table, into an on-chain Ethereum security token.

The Technical Side

Technically, the crypto winter will consist of a lot of grotty, important work being done underneath the snowbanks: infrastructure, scaling, privacy, usability, identity, etcetera. I predict that Ethereum’s transition to Proof-of-Stake will be slow and hesitant: it’s essentially a whole new consensus algorithm, and one which is substantially more complex (and therefore has a broader attack surface) than Proof-of-Work. I also predict that even the most interesting and useful dapps (eg FOAM, Grid+, and Augur) will see slow if any growth until their fundamental usability issues are solved.

I do think that will sort of happen — that a de facto, painful, hard-to-use but viable “crypto suite” of tools for true believers, especially digital nomads, will arise. This will include a “sovereign identity” protocol, a social network, a decentralized exchange which includes peer-to-peer fiat-to-crypto, data storage, maybe even email — all decentralized, all relatively hard to use, but adopted by a tiny hardcore minority. I furthermore predict that this suite will be roughly evenly split between “built on Ethereum” and “built on Blockstack.”

I also believe there’ll be a great deal of technically fascinating cross-chain (eg Cosmos, Polkadot) and second-layer or off-chain (eg Lightning, Plasma, Celer) work done, laying the groundwork for future connectivity and scalability. This will happen along with decentralized work which is not actually crypto-related, eg Scuttlebutt and IPFS, and that which is only tangentially related, eg Blockstack. In general there will be a great and welcome increase in projects’ code-to-prose ratio now that empty prose is no longer rewarded by lucrative ICOs.

And, my final prediction: cryptocurrencies will become seen as a weird alternative space for the 1% of hardcore traders, believers and techies, like Linux desktop users … until we finally emerge from the crypto winter. When will that happen? Not next year, and probably not the year after that. What will cause that emergence to happen? Here’s my most outré prediction of all: something entirely new, something so weird and unexpected that we can hardly even imagine it right now.

Atlassian’s JIRA began life as a bug-tracking tool. Today, though, it has become an agile planning suite, “to plan, track, and release great software.” In many organizations it has become the primary map of software projects, the hub of all development, the infamous “source of truth.”

It is a truism that the map is not the territory. Alas, this seems especially true of JIRA. Its genesis as a bug tracker, and its resulting use of “tickets” as its fundamental, defining unit, have made its maps especially difficult to follow. JIRA[1] is all too often used in a way which makes it, inadvertently, an industry-wide “antipattern,” i.e. “a common response to a recurring problem that is usually ineffective and risks being highly counterproductive.”

One thing that writing elegant software has in common with art: its crafters should remain cognizant of the overall macro vision of the project, at the same time they are working on its smallest micro details. JIRA, alas, implicitly teaches everyone to ignore the larger vision while focusing on details. There is no whole. At best there is an “Epic” — but the whole point of an Epic is to be decomposed into smaller pieces to be worked on independently. JIRA encourages the disintegration of the macro vision.

What’s more, feature-driven JIRA does not easily support the concept of project-wide infrastructure which does not map to individual features. A data model used across the project. A complex component used across multiple pages. A caching layer for a third-party interface. A background service providing real-time data used across multiple screens. Sure, you can wedge those into JIRA’s ticket paradigm … but the spiderweb of dependencies which result don’t help anyone.

Worst of all, though, is the endless implicit pressure for tickets to be marked finished, to be passed on to the next phase. Tickets, in the JIRA mindset, are taken on, focused on until complete, and then passed on, never to be seen again. They have a one-way lifecycle: specification; design; development; testing; release. Doesn’t that sound a little … um … waterfall-y? Isn’t agile development supposed to be fundamentally different from waterfall development, rather than simply replacing one big waterfall with a thousand little ones?

Here’s an analogy. Imagine a city-planning tool which makes it easy to design city maps which include towers, residential districts, parks, malls, and roads … but which doesn’t easily support things like waterworks, sewers, subway tunnels, the electrical grid, etc., which can only be wedged in through awkward hacks, if at all.

Now imagine this tool is used as a blueprint for construction, with the implicit baked-in assumption that a) the neighborhood is the fundamental unit of city construction, and b) cities are built one neighborhood at a time, and neighborhoods one block at a time. What’s more, one is incentivized to proceed to the next only when the last is absolutely complete, right down to the flowers growing in the median strips.

Now imagine that the city’s developers, engineers, and construction workers are asked to estimate and report progress purely in terms of how many neighborhoods and blocks have been fully completed, and how far along each one is. Does that strike you as a particularly effective model of urban planning? Do you think you would like to live in its result? Or, in practice, do you think that the best way to grow a city might be just a little more organic?

Let’s extend that metaphor. Suppose you began to build the city more organically, so that, at a certain significant point, you have a downtown full of a mix of temporary and permanent buildings; the skyscrapers’ foundations laid (i.e. technical uncertainty resolved); much of the core infrastructure built out; a few clusters of initial structures in the central neighborhoods, and shantytowns in the outskirts; a dirt airstrip where the airport will be; and traffic going back and forth among all these places. In other words, you have built a crude but functioning city-in-the-making, its skeleton constructed, ready to be fleshed out. Well done!

But if measured by how many blocks and neighborhoods are absolutely finished, according to the urban planners’ artistic renditions, what is your progress? By that measure, your progress is zero.

That is not how JIRA incentivizes you to work. In JIRA, that would look like a huge column of in-progress tickets, and zero complete ones. That would look beyond terrible. Instead JIRA incentivizes you to complete an entire block, and then the next; an entire neighborhood, and then the next; to kill off as many different tickets as possible, to mark them complete and pass them on, even if splicing them together after the fact is more difficult than building them to work together in the first place.

(If you prefer a smaller-scale model, just transpose: city → condo building, neighborhood → floor, block → unit, etc.)

And so people take tickets, implement them as written, pass them off to whoever is next in the workflow, consider their job well done, even if working on scattered groups of them in parallel might be much more effective … and without ever considering the larger goal. “Implement the Upload button” says the ticket; so that is all that is done. The ticket does not explain that the larger goal of the Upload button is to let users back up their work. Perhaps it would actually be technically easier to automatically upload every state change, such that the user gets automatic buttonless backups plus a complete undo/redo stack. But all the ticket says is: “Implement the Upload button.” So that is all that is done.

All too often, the only time anyone worries about the vision of the project as a whole is at the very beginning, when the overworked project manager(s) initially deal(s) with the thankless task of decomposing the entire project into a forest of tickets. But the whole point of agile development is to accept that the project will always be changing over time, and — albeit to a lesser extent — for multiple people, everyone on the team, to help contribute to that change. JIRA has become a tool which actually works against this.

(And don’t even get me started on asking engineers to estimate a project that someone else has broken down, into subcomponents whose partitioning feels unnatural, by giving them about thirty seconds per feature during a planning meeting, and then basing the entire project plan on those hand-waved un-researched off-the-top-of-the-head half-blind guesses, without ever revisiting them or providing time for more thoughtful analysis. That antipattern is not JIRA’s fault … exactly. But JIRA’s structure contributes to it.)

I’m not saying JIRA has no place. It’s very good when you’re at the point where breaking things down into small pieces and finishing them sequentially does make sense. And, unsurprisingly given its history, it’s exceedingly good at issue tracking.

Let me reiterate: to write elegant software, you must keep both the macro and the micro vision in your mind simultaneously while working. JIRA is good at managing micro pieces. But you need something else for the macro. (And no, a clickable prototype isn’t enough; those are important, but they too require descriptive context.)

Allow me to propose something shocking and revolutionary: prose. Yes, that’s right; words in a row; thoughtfully written paragraphs. I’m not talking about huge requirements documents. I’m talking about maybe a ten-page overview describing the vision for the entire project in detail, and a six-page architectural document explaining the software infrastructure — where the city’s water, sewage, power, subways, and airports are located, and how they work, to extend the metaphor. When Amazon can, famously, require six-page memos in order to call meetings, this really doesn’t seem like too much to ask.

Simply ceasing to treat JIRA as the primary map and model of project completion undercuts a great deal of its implicit antipatternness. Use it for tracking iterative development and bug fixes, by all means. It’s very good at that. But it is a tool deeply ill-suited to be the map of a project’s overall vision or infrastructure, and it is never the source of truth — the source of truth is always the running code. In software, as in art, the micro work and the macro vision should always be informed by one another. Let JIRA map the micro work; but let good old-fashioned plain language describe the macro vision, and try to pay more attention to it.


[1] Atlassian seems to have decapitalized JIRA between versions 7.9 and 7.10, but descriptively, all-caps still seems more common.

In 1990, Kleiner Perkins rejected 99.4% of the proposals it received, while investing in 12 new companies a year. Those investees made Kleiner Perkins “the most successful financial institution in the history of the world,” boasting “returns of about 40 percent per year, compounded, for coming up on thirty years.”

Nowadays the Valley’s VC poster child is Y Combinator, who invest in more like 250 companies annually. They’re famously selective, accepting something like 1.5% of applicants, but still noticeably less selective than Kleiner Perkins in its heyday. They invest less money (though not necessarily that much less; KP bought 25% of Netscape for a mere $5 million back in 1994) in more companies.

In 1995, three networks controlled essentially all American television, and made only enough programming to fill the week; nowadays there is so much TV that you could binge-watch a new scripted series every day of the year. In 1995, the top ten movies of the year were responsible for 14% of the total box office. So far in 2018, the top ten have claimed a full 25% of the total gross. Something similar happened in publishing; the so-called “midlist” was largely replaced by a “bestseller or bust” attitude.

In 1995, if you were a journalist, your readership was dictated almost entirely by who published you. No matter how compelling your piece in the Halifax Daily News may have been, the same number of people would glance at your headline as at the others in that issue, and that number would be drastically smaller than that of any article, no matter how buried, in the New York Times. Now, the relative readership of any article, both between and within publications, is determined mostly by social media sharing, and inevitably follows a power-law curve, such that a surprisingly small number of pieces attract the lion’s share of readers.

What do these fields have in common? The number of “hits” has remained relatively constant, while their value has grown, and the number of “swings” has grown to the point where it is difficult for any person, or even any group, to pay close attention to them all. And the outcomes inevitably follow a power law. So it doesn’t make sense to focus on individual outcomes any more; instead you focus on cohorts, and you think stochastically.
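A toy simulation makes the cohort-level mindset concrete. The numbers below are pure assumptions chosen for illustration (a Pareto-distributed payoff and a 250-company cohort, loosely echoing the YC figure above), not data about any real portfolio:

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

# Assumption: each "swing" (a startup, film, or article) pays out a
# heavy-tailed, Pareto-distributed amount, so a few hits dominate.
def payoff():
    return random.paretovariate(1.2)

cohort = [payoff() for _ in range(250)]  # one year's worth of swings
cohort.sort(reverse=True)

# Under a power law, the top handful capture far more than their
# proportional (10/250 = 4%) share of total returns.
top10_share = sum(cohort[:10]) / sum(cohort)
print(f"top 10 of 250 swings capture {top10_share:.0%} of total returns")
```

No individual swing is predictable, but the shape of the cohort's outcome is: that is what it means to stop handicapping single bets and think stochastically.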

“Stochastic” means “randomly determined,” and your initial inclination may be to recoil — of course producers and investors and publishers aren’t acting randomly! They put enormous amounts of analysis, effort, and intelligence into what they do! Which is true. But I put it to you that as gatekeepers’ power has diminished, and the number of would-be directors, CEOs, and pundits has skyrocketed, while the costs of trying have shrunk — randomness has become a more and more important factor.

It’s easy to cite anecdotes. What if Excite had bought Google when it was offered to them for $1 million? How far were we, really, from a world in which Picplz succeeded and Instagram failed? Any honest success story will include elements of luck, which is, in this context, another word for randomness. My contention is that the world’s larger trends — greater interconnectedness, faster speed, democratized access to technology — make randomness an ever-more-important factor.

This is not automatically a good thing. People talk about “stochastic terrorism,” a.k.a. “The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.” Think of killers who dedicate their attack to ISIS after the fact despite no previous communication with them, or, more generally and contentiously, political violence promoted by broadcasting hatred and extremism.

And it seems that climate change is increasingly a stochastic disaster. Warmer weather means more energy in the atmosphere, which means more volatile behavior, which means more catastrophes like droughts, wildfires, hurricanes. Does climate change cause those things? Not directly. It increases the probability of them happening. It means both more and bigger hits, if you will.

This doesn’t apply to every field of human endeavor. But it seems to apply to essentially every field driven by unusual, extreme successes or failures — to Extremistan, to use Nassim Taleb’s term. Extremistan seems to be growing more extreme everywhere, and there’s no end in sight.

What can we learn from DETECTIVE CHINATOWN 2? Quite a lot, actually. The 11th biggest box-office hit of the year, it vastly outgrossed the likes of SOLO: A STAR WARS STORY, A STAR IS BORN, and CRAZY RICH ASIANS. You may never have heard of it, though; like OPERATION RED SEA, the 10th biggest hit of the year, it made all of its money in China.

What can a slapstick-meets-Sherlock-Holmes comedy tell us about technology? Quite a lot, if we read its subtext. One striking thing: it’s hard to think of any big recent American movie in which smartphone apps are so woven into the plot. The movie’s characters are brought together by, and constantly reference, one smartphone app; when imprisoned, our young genius Chinese detective laments, most of all, the loss of his phone; and when the blue-haired hacker asks the protagonist, “add me on WeChat?” early on, it seems like a cute throwaway line, but their WeChat conversations are fundamental to the plot as the movie progresses.

It’s also striking that Western tech has become so hegemonic that it actually seems slightly jarring to see characters using a chat app which is not iMessage / FB Messenger / WhatsApp. The Internet proper may be globally pervasive, at the TCP/IP level, but we live in two different online worlds: one of Facebook / Google / Amazon, and one of Tencent / Baidu / Alibaba, with Apple as the only company which seems to bridge both worlds. This extends to payments, too; I was in China earlier this year, and was struck that in so many places — including McDonald’s! — Visa and Mastercard were useless, and the only viable Western-origin payment method was Apple Pay.

It’s easy and obviously somewhat correct to blame the Great Firewall for this. (Although the West is not without its own firewalls; I’m in Paris as I write this, and my attempt to read some back-home San Francisco news was just met with a “451 Unavailable for Legal Reasons” response, presumably courtesy of the GDPR.) But Chinese apps, and Chinese hardware, have long since transcended being knockoff copies of Western technology; they do their own things now, and they often do them better.

It’s been a remarkable rise. Back in 2011, while traveling in Ethiopia, I noticed with surprised curiosity that my hotel’s wi-fi was all run by a stack of Chinese hardware and software; seven years later, here in Paris, I keep passing glittering posters twenty feet tall extolling Huawei’s latest smartphone.

As China rises in power, it and America seem to increasingly see one another as a threat. (Again, it’s just a movie, but you can learn a lot about default cultural assumptions from movies, and OPERATION RED SEA, which is basically “BLACK HAWK DOWN meets chest-beating hagiography of the Chinese military,” ends with a cliffhanger standoff between the Chinese and American navies.) And certainly China’s government does horrifying things beyond its infamous censorship, such as interning an estimated million Muslims essentially because they are Muslim.

But from a purely technical point of view, Western online technology has — unexpectedly — become so hegemonic itself that the rampant growth of a whole different stack of apps and services is an interesting development in and of itself. For now, China’s parallel universe exists primarily within China, and doesn’t affect the rest of the world — but that’s already beginning to change, as this WeChat-in-San-Francisco-politics story indicates. As China engages more and more with the West, we’re going to see its tech overlap with ours in curious ways. May you live in interesting times, indeed.

Two weeks from now, the Swahilipot Hub, a hackerspace / makerspace / center for techies and artists in Mombasa, Kenya, is hosting a Pwani Innovation Week, “to stimulate the innovation ecosystem in the Pwani Region.” Some of its organizers showed me around Mombasa’s cable landing site some years ago; they’re impressive people. The idea of the Hub and its forthcoming event fills me with unalloyed enthusiasm and optimism … and a bleak realization that it’s been a while since I’ve felt this way about a tech initiative.

What happened? How did we go from predictions that the tech industry would replace the hidebound status quo with a new democratized openness, power to the people, now that we all carry a networked supercomputer in our pocket … to widespread, metastasizing accusations of abuse of power? To cite just a few recent examples: Facebook being associated with genocide and weaponized disinformation; Google with sexual harassment and nonconsensual use of patients’ medical data; and Amazon’s search for a new headquarters called “shameful — it should be illegal” by The Atlantic.

To an extent some of this was inevitable. The more powerful you become, the less publicly acceptable it is to throw your increasing weight around like Amazon has done. I’m sure that to Google, subsuming DeepMind is a natural, inevitable corporate progression, a mere structural reshuffling, and it’s not their fault that the medical providers they’re working with never got explicit consent from their patients to share the provided data. Facebook didn’t know it was going to be a breeding ground for massive disinformation campaigns; it was, and remains, a colossal social experiment in which we are all participating, despite the growing impression that its negatives may outweigh its positives. And at both the individual and corporate levels, as a company grows more powerful, “power corrupts” remains an inescapable truism.

But let’s not kid ourselves. There’s more going on here than mischance and the natural side effects of growth, and this is particularly true for Facebook and Twitter. When we talk about loss of faith in tech, most of the time, I think, we mean loss of faith in social media. It’s true that we don’t want them to become censors. The problem is that they already are, as a side effect, via their algorithms which show posts and tweets with high “engagement” — i.e. how vehemently users respond. The de facto outcome is to amplify outrage, and hence disinformation.

It may well be true, in a neutral environment, that the best answer to bad speech is more speech. The problem is that Facebook and Twitter are anything but neutral environments. Their optimization for “engagement” is a Brobdingnagian thumb on their scales, tilting their playing fields into whole Himalayas of advantages for bad faith, misinformation, disinformation, outrage and hate.

This optimization isn’t even necessary for their businesses to be somewhat successful. In 2014, Twitter had a strict chronological timeline, and recorded a $100 million profit before stock-based compensation — with relatively primitive advertising infrastructure, compared to today. Twitter and Facebook could kill the disinformation problem tomorrow, with ease, by switching from an algorithmic, engagement-based timeline back to a strict chronological one.
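The difference between the two timelines is easy to see in miniature. Here is a toy Python sketch with invented post data; no platform’s actual ranking code looks like this, and real engagement ranking weighs many more signals, but the structural point survives the simplification:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    hour_posted: int    # hours since midnight; higher means more recent
    engagement: float   # likes + shares + replies; vehemence counts the same as approval

posts = [
    Post("calm_news", hour_posted=9, engagement=40.0),
    Post("outrage_bait", hour_posted=1, engagement=900.0),
    Post("friend_update", hour_posted=11, engagement=5.0),
]

# Chronological: newest first, no editorial thumb on the scale.
chronological = sorted(posts, key=lambda p: -p.hour_posted)

# Engagement-ranked: whatever provokes the strongest reaction rises,
# regardless of age or accuracy.
engagement_ranked = sorted(posts, key=lambda p: -p.engagement)

print([p.author for p in chronological])     # recency order
print([p.author for p in engagement_ranked]) # the outrage floats to the top
```

Same posts, same users; the only change is the sort key, and the most inflammatory item moves from last to first.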

Never going to happen, of course. It would hurt their profits and their stock price too much. Just like Google was never going to consider itself bound to DeepMind’s cofounder’s assurance two years ago that “DeepMind operates autonomously from Google.” Just like Amazon was never going to consider whether siphoning money from local governments at its new so-called “co-headquarters” was actually going to be good for its new homes. Because while technology has benefited individuals, enormously, it’s really benefited technology’s megacorporations, and they’re going to follow their incentives, not ours.

Mark Zuckerberg’s latest post begins: “Many of us got into technology because we believe it can be a democratizing force for putting power in people’s hands.” I agree with that statement. Many of us did. But, looking back, were we correct? Is it really what the available evidence shows us? Has it, perhaps, put some power in people’s hands — but delivered substantially more to corporations and governments?

I fear that the available evidence seems to confirm, instead, the words of tech philosopher-king Maciej Ceglowski. His most relevant rant begins with a much simpler, punchier phrase: “Technology concentrates power.” Today it seems harder than ever to argue with that.

How’s this for eyebrow-raising? In London, for the last year and a half, a team of lawyers, cryptographers, software engineers, and/or former military consultants have been brewing a bizarre and/or brilliant plan for a bridge between the blockchain and the real world — a system whose success is directly proportional to the extent to which it achieves legal title over every physical object in the world.

Wait. Let me explain. Their name is Mattereum, and they are not Bond villains seeking to conquer the planet. Rather, they are trying to bridge the gap between programmable blockchain “smart contracts” and actual legal contracts. As you might imagine, that gap consists of an almost infinitely knotty tangle of legal precedents, gray areas, and jurisdictions.

Mattereum has come up with a remarkable way to sever this Gordian knot. A legal concept universal across almost all jurisdictions is that assets have owners, who can decide (within legal limits) how to dispose of them. If registrars of the Mattereum network are granted legal title over assets, the thinking goes, they can then establish on-chain smart contracts with which physical assets can be programmatically bought, sold, rented, assigned, and partitioned — and use their ownership of these assets to resolve disputes and enforce those resolutions.

That all sounds pretty abstract. Let’s talk about some real-world examples. Their flagship object right now is a Stradivarius violin valued at $9 million. Assigning legal title over that violin to one of Mattereum’s registrars (say, Mattereum itself) which then licenses control via a set of smart contracts (say, on the Ethereum blockchain) means the violin instantly becomes not just a physical asset but a digital one, which can now be programmatically tokenized and sold to multiple investors — and also means that other contractual restrictions regarding its use can be required and enforced, such as that it be played for the public X times a year in Y countries, rather than locked perpetually into a vault.
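In miniature, the mechanism reads like the following toy Python sketch. Everything here is invented for illustration (the registrar name, the share count, the ledger-as-dictionary); only the $9 million Stradivarius comes from the example above, and a real Mattereum deployment would express this as on-chain smart contracts backed by the registrar’s legal title, not a Python class:

```python
# Toy model of fractionalizing a physical asset's legal title into
# transferable shares. The registrar starts with full title and sells
# stakes down; in the real system, transfers would be smart-contract
# transactions, enforced off-chain by the registrar's legal ownership.

class TokenizedAsset:
    def __init__(self, name, valuation_usd, total_shares):
        self.name = name
        self.valuation_usd = valuation_usd
        self.total_shares = total_shares
        self.ledger = {"registrar": total_shares}  # full title at the start

    def transfer(self, seller, buyer, shares):
        if self.ledger.get(seller, 0) < shares:
            raise ValueError("seller lacks sufficient shares")
        self.ledger[seller] -= shares
        self.ledger[buyer] = self.ledger.get(buyer, 0) + shares

    def share_price(self):
        return self.valuation_usd / self.total_shares

violin = TokenizedAsset("Stradivarius", 9_000_000, 1_000)
violin.transfer("registrar", "investor_a", 250)  # a 25% stake
violin.transfer("registrar", "investor_b", 100)

print(violin.share_price())  # 9000.0 per share at the cited valuation
print(violin.ledger)
```

The interesting part is not the bookkeeping, which is trivial, but that the registrar’s legal title is what makes the bookkeeping enforceable against the physical object.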

Similarly, artists could sell their work to consortiums of investors with enforceable requirements: that their art be displayed in galleries open to the public for at least 25% of every year, with built-in digital interfaces for galleries and museums to book that time, income percentages allocated to foundations and to the artists themselves, and so forth. And, of course, in passing, the art’s provenance is maintained, too.

Could you do all this already, via a whole lot of legal paperwork followed by a whole lot of glacially-paced lawsuits when disputes inevitably arose? Sure. But to quote their summary white paper (PDF), which is well worth reading:

Clichés like “data is the new oil” conceal a fundamental truth: efficient discovery of availability and price radically changes the value of assets. Auctions on eBay gave value to enormous seas of illiquid assets … Some classes of illiquid assets, such as idle cars and briefly empty flats, found markets through Uber and Airbnb, liberating billions in value … the pattern is very clear: assets with a proper digital interface and history are more valuable than assets without them.

Mattereum boasts an impressive team led by Vinay Gupta, the mad-or-visionary-depending-on-who-you-talk-to global resilience guru turned Ethereum launch coordinator turned CEO, and including Ian Grigg, inventor of the Ricardian contract. And, of course, a lawyer or three. Obviously whole clouds of question marks still hover around them, notably regarding just how these smart contracts will intersect with the existing legal world(s), and how to establish trust in asset registrars to not abuse their legal title, or neglect their responsibilities — a trust which will need to rival or exceed that which accountholders have in banks.

But, like Ethereum itself, this is still a wildly ambitious, genuinely innovative, and deeply weird project; there’s much to admire. Restructuring the concept of physical asset ownership to include programmatic contracts may seem like a subtle change, but it’s one which could conceivably have massive structural repercussions and unlock enormous amounts of currently untapped potential. It’s early baby-iterative-steps yet, of course, with a panoply of pitfalls on all sides; but it’s big and bold and hopeful, and it just might do a lot of good.

Last week more than 20,000 Google employees walked out of their workplace to protest, and demand major changes in, how the company handles harassment and discrimination. Mass employee organization, demands made of management — doesn’t that all vaguely remind you of some kind of old-fashioned twentieth-century concept? What was it called again? The name’s on the tip of my tongue, I swear…

A participant said, “Just the threat of us walking out was enough for the company to remove DeVaul from the payroll,” referring to Richard DeVaul, the Google executive who resigned this week amid a swirling storm of accusations of sexual harassment and worse. Meanwhile, an organizer of the walkout said, plaintively, “I hope I still have a career in Silicon Valley after this” … while other organizers declined to go on the record.

If only there were some formal, structured way in which tech employees could bring grievances to their management, and negotiate with them as a group, via, say, elected representatives, for whom protection from retaliation could be established. Surely the disruptors and out-of-the-box thinkers of Silicon Valley could come up with some revolutionary new system for that. Imagine — and I know this sounds like science fiction but bear with me — one day, such a structure might even achieve some kind of special legal recognition and protections.

But what would they call it? I don’t know. Maybe, and I’m just spitballing here, they could call it a “union” or something crazy like that. After the set operation, get it? They like math entendres down at Google, after all…

I am not necessarily saying such a concept is all benefit and no cost. I’m not even saying I think it would necessarily be a net good idea. But it is striking to me how nobody seems to have even publicly considered the possibility, under the circumstances.

I get that the Valley idea of a union is basically that of a terrible hidebound boa constrictor which chokes off all hope of speed and/or innovation. That might well be true, if one were to simply try to apply twentieth-century union notions, ones which tried to control hours worked and physical conditions, because those were the primary concerns of the manufacturing and/or otherwise physical laborers of that era. But it surprises me greatly that nobody seems to even be talking about a “Union 2.0” concept, one built for 21st-century software engineers rather than 20th-century auto workers, one which didn’t necessarily sacrifice speed, flexibility, or openness to experimentation.

Not least because such a structure could have important repercussions beyond the companies in question. Big tech companies have become among the most powerful entities in the world, at least indirectly. An argument against government regulation of tech is often that it is both heavy-handed and so slow that by the time it finally happens it applies to the tech of several years ago and is already essentially obsolete. This is often a fair criticism, and often applies to legal restitution too.

So what are the checks and balances for Big Tech? What forces and people can keep them honest? The obvious answer is: their employees. Google and Facebook and company fear unrest among their tens of thousands of employees far more than they do the anger of a few hundred million users around the world. It’s by no means a perfect lever of influence, but it’s better than nothing. Whether you want to call it a union or not, I can’t help but think that a more formal structure for grievances, collective negotiation, and protection from reprisal, among employees of major tech companies, might just be a pretty good idea for everyone.

Building web services and smartphone apps, which is most of what I’ve been doing professionally at HappyFunCorp1 for the last decade or so, used to be pretty straightforward. Not easy, but straightforward, especially when the client was a consumer startup, which so many of them were.

The more we did the better we got at it. Design and write two native apps, usually iOS first and Android second. Don’t skimp on the design. Connect them to a JSON API, usually written in Ruby on Rails, which also powered the web site. There’s always a web site; consumers might only see the side which is a minimal billboard for the app, but there’s essentially always also an admin site, to control features and aspects of the app.

Design isn’t as important for the admin site, so you can build that in something crude but effective like ActiveAdmin; why roll your own? Similarly, authentication is tricky and easy to get wrong, so use something like Devise, which comes with built-in hooks to Facebook and Twitter login. Design your database carefully. Use jQuery for dynamic in-browser manipulation since raw Javascript is such a nightmare. Argue about whether to use Rspec or Minitest for your server tests.

All there? OK, roll it out to your Heroku scaling environment, so you can simply “git push” to push to staging and production, with various levels of Postgres support, autoscaling, pipelines, Redis caching, Resque worker jobs, and so forth. If it’s a startup, keep them on Heroku to see if they catch on, if they find the fabled product-market fit, not least because it helps you iterate faster. If so, at some point you have to graduate them to AWS, because Heroku only scales so far and it does so very expensively. If not, well, “fail fast,” right?

Those were the days, my friends, those halcyon, long-gone days of (checks notes) five years ago. The days of a lot of grief, sure, but very little decision complexity. The smartphone boom was on, and the web boom was settling down, and everyone was still surfing those two tidal waves.

Today? Well, today we still are, neither of those waves have broken, per se, software is still eating the world, but things are … different. More of the world is being eaten, but it’s also happening more slowly, like growing 50% a year from a $1 billion base rather than 500% from $1 million. There are fewer starry-eyed founders with an app idea that they’re sure will change the world and funding enough to give it a shot. Those are still out there, sure, and more power to them, but the landscape is more complex, now.

Instead we see more big businesses, media and industrial and retail alike, realizing they must adapt and be devoured, experimenting with new tech projects with a combination of excitement and trepidation. Or requisitioning custom apps for very specific — but very useful — purposes, and requiring them to interface with their awkward pre-existing custom middleware just so. Or tech companies, even big household-name ones, outsourcing ancillary tools and projects in order to focus their in-house teams purely on their core competencies and business models. Our mix of clients has definitely shifted more towards enterprise in the last few years.

Which is not to say that startups don’t still come through our doors with bright ideas and inspiring PowerPoints on a fairly regular basis. As do super starry-eyed blockchain founders (granted, I’m sometimes a bit starry-eyed about blockchains myself) replacing the consumer-app founders of yore. I doubt we’re alone in having had a spate of blockchain startup projects late last year and early this, which has diminished to only a couple active at the moment. (Not least because the tooling is still so crude it reminds me of 90s command-line hacking.) But I strongly doubt that sphere is going away.

We haven’t dealt with as many AI projects as I would have expected by now, probably partly because AI talent is still so scarce and highly valued, and partly because it turns out a lot of seeming “AI” work can be done with simple linear regressions rather than by building and training and tuning deep-learning neural networks… although if you do those linear regressions with TensorFlow, it’s still “AI” buzzword-compliant, right? Right?
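To make that point concrete, here’s what the “simple linear regression” alternative actually amounts to, on synthetic data. It’s a few lines of arithmetic; dress the same math up in TensorFlow and it’s suddenly buzzword-compliant:

```python
import random

# Synthetic "user behavior" data: y is 3x + 2 plus a little noise.
random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [3.0 * x + 2.0 + random.gauss(0, 0.1) for x in xs]

# Ordinary least squares, by hand: no layers, no training loop, no GPU.
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)

slope = cov_xy / var_x
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # recovers roughly 3.0 and 2.0
```

For a lot of business prediction problems, something shaped like this is the honest baseline, and often the final answer.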

Most of all, though, the tools we use have changed. Nowadays when you want to build an app, you have to ask yourself: really native? (Java or Kotlin? Objective-C or Swift?) Or React Native? Or Xamarin? Or Google’s new Flutter thing? When you want to build a web site, you have to think: traditional? Or single-page, with React or Angular or Vue? As for the server — Go is a lot faster than Rails, you know, and oh, that elegant concurrency handling, but, oh, where is my map/filter/reduce? Javascript is still a clumsy language, but there are certain advantages to having one language across the stack, and Node is powerful and package-rich these days. And of course you’ll want it all containerized, because while Docker definitely adds another layer or two of configuration complexity, it’s usually worth it.

Unless you want to go fully “serverless,” at least for aspects, with Amazon Lambda or Google Firebase? Even if you don’t use Firebase for a datastore, how about for authentication, huh? And if you’re all containerized, and Kubernetized if/as appropriate, though maybe let’s not go the many-microservices route until you’re sure your product-market fit justifies it, then where do you want to roll it out, AWS or Azure or Google Cloud or Digital Ocean? Or do you want to use one of their PaaS services, like App Engine or Beanstalk, which, like Heroku, sorta kinda live between “serverless” and “bare metal virtual machines”?

I oversimplify, but you get my point. We’ve never had more options, as developers, more tools available to us … and we’ve never had to struggle more with analysis paralysis, because it’s awfully hard to determine which of the possible toolsets is the best one for any particular situation. Sometimes — often — we have to be happy with just selecting a good one. And that selection problem doesn’t look like it’s going to get easier anytime soon, I’m afraid. It’s a strange time to be a coder. We live and work all tangled up in an embarrassment of riches.


1Yes, that’s really our name. No, this TC column isn’t a full-time gig. (Which is something people frequently assume, because it’s so much more visible and to some people writing a column every week sounds like a lot of work, but no, I’m really a CTO.)

Snapchat needs a sugar daddy. Its cash reserves are dwindling from giant quarterly losses. Poor morale from a battered share price and cost-cutting measures saps momentum. And intense competition from Facebook is preventing rapid growth. With just $1.4 billion in assets remaining at the end of a brutal Q3 2018 and analysts estimating it will lose $1.5 billion in 2019 alone, Snapchat could run out of money well before it’s projected to break even in 2020 or 2021.

So what are Snap’s options?

A long and lonely road

Snap’s big hope is to tell a business turnaround story like Twitter’s, whose stock jumped 14 percent this week, despite losing monthly active users, on deepening daily user engagement and its first profits. But without some change that massively increases daily time spent while reducing costs, it could take years for Snap to reach profitability. The company already laid off 120 employees in March, or 7 percent of its workforce. And 40 percent of the remaining 3,000 employees plan to leave — up 11 percentage points from Q1 2018, according to internal survey data obtained by Cheddar’s Alex Heath.

Snapchat is relying on the Project Mushroom engineering overhaul of its Android app to speed up performance, and thereby accelerate user growth and retention. Snap neglected the developing world’s Android market for years as it focused on iPhone-toting US teens. Given Snapchat is all about quick videos, slow load times made it nearly unusable, especially in markets with slower network connections and older phones.

Looking at the competitive landscape, WhatsApp’s Snapchat Stories clone Status has grown to 450 million daily users while Instagram Stories has reached 400 million dailies — much of that coming in the developing world, thereby blocking Snap’s growth abroad as I predicted when Insta Stories launched. Snap actually lost 3 million daily users in Q2 2018. Snap Map hasn’t become ubiquitous, Snap’s Original Shows still aren’t premium enough to drag in tons of new users, Discover is a clickbait-overloaded mess, and Instagram has already copied the best parts of its ephemeral messaging.

SAN FRANCISCO, CA – SEPTEMBER 09: Evan Spiegel of Snapchat attends TechCrunch Disrupt SF 2013 at San Francisco Design Center on September 9, 2013 in San Francisco, California. (Photo by Steve Jennings/Getty Images for TechCrunch)

As BTIG’s Rich Greenfield points out, CEO Evan Spiegel claims Snapchat is the fastest way to communicate, but it’s not for text messaging, and the default that chats disappear makes it unreliable for utilitarian chat. And if WhatsApp were to add an ephemeral messaging feature of its own, growth for Snapchat could get even tougher. Snap will have to hope it can hold on to its existing users and squeeze more cash out of them to keep reducing losses.

All those product missteps and market neglect have metastasized into a serious growth problem for Snapchat. It lost another 2 million users this quarter, and expects to sink further in Q4. Even with the Android rebuild, Spiegel’s assurances of renewed user growth in 2019 seem spurious. That means it’s highly unlikely that Snapchat will achieve Spiegel’s goal of hitting profitability in 2019. It needs either an investor or acquirer to come to its aid.

A bailout check

Snap could sell more equity to raise money. $500 million to $1 billion would probably give it the runway necessary to get into the black. But from where? With all the scrutiny on Saudi Arabia, Snap might avoid taking money from the kingdom. Saudi Arabia’s Prince Alwaleed bin Talal already invested $250 million to buy 2.5 percent of Snap on the open market.

Snap’s best bet might be to take more money from Chinese internet giant Tencent. The massive corporation already spent around $2 billion to buy a 12 percent stake in Snap from the open market. The WeChat owner has plenty of synergies with Snapchat, especially since it runs a massive gaming business and Snap is planning to launch a third-party developer gaming platform.

Tencent could still be a potential acquirer for Snap, but given President Trump’s trade war with China, he might push regulators to block a sale. The state of American social networks like Twitter and Facebook that are under siege by foreign election interference, trolls, and hackers might make the US government understandably concerned about a Chinese giant owning one of the top teen apps.

Regardless of who would invest, they’d likely demand real voting rights — something Snap has denied investors through a governance structure. Spiegel and his co-founder Bobby Murphy both get 10 votes per share. That’s estimated to amount to 89 percent of the voting rights. Shares issued in the IPO came with zero voting rights.

Evan Spiegel and Bobby Murphy, developers of Snapchat (Photo by J. Emilio Flores/Corbis via Getty Images)

But that surely wouldn’t sit well with any investor willing to pour hundreds of millions of dollars into the beleaguered company. Spiegel has taken responsibility for pushing the disastrous redesign early this year that coincided with a significant drop in its download rank. It also inspired a tweet from mega-celebrity Kylie Jenner bashing the app that shaved $1.3 billion off the company’s market cap.

Between the redesign flop, stagnant product innovation, and Spiegel laughing off Facebook’s competition only to be crushed by it, the CEO no longer has the sterling reputation that allowed him to secure total voting control for the co-founders. That means investors will want assurance that if they inject a ton of cash, they’ll have some recourse if Spiegel mismanages it. He may need to swallow his pride, issue voting shares, and commit to milestones he’s required to hit to retain his role as chief executive.

A soft landing somewhere else

Snap could alternatively surrender as an independent company and be acquired by a deep-pocketed tech giant. Without having to worry about finances or short-term goals, Snap could invest in improving its features and app performance for the long-term. Social networks are tough to kill entirely, so despite competition, Snap could become lucrative if aided through this rough spot.

Again, the biggest barrier to this path is Spiegel. Combine totalitarian voting control with the $637 million bonus Spiegel got for taking Snap public, and he has little financial incentive or shareholder pressure compelling him to sell. Even if the company was bleeding out much worse than it is already, Spiegel could ride it into the ground. The only way to get a deal done might be to make Spiegel perceive it as a win.

Selling to Disney could be spun as such a win. It hasn’t really figured out mobile amid distractions from superheroes and Star Wars. Its core tween audience is addicted to YouTube and Snapchat even if they shouldn’t be on them. They’re both LA companies. And Disney already ponied up $350 million to buy the kids’ desktop social networking game Club Penguin. Becoming head of mobile, or something like that, for the most iconic entertainment company ever could be an exalted-enough position to entice Spiegel. I could see him being a Disney CEO candidate one day.

What about walking in the footsteps of Steve Jobs? Apple isn’t social. It failed so badly with efforts like its Ping music social network that it’s basically abdicated the whole market. iMessage and its cutesy Animoji are its only stakes. Meanwhile, it’s getting tougher and tougher to differentiate with mobile hardware. Each new iPhone seems closer to the last. Apple has resorted to questionable decisions like ditching the oft-missed headphone jack and reliable TouchID to keep the industrial design in flux.

Increasingly, Apple must rely on its iOS software to compete for customers with Android handsets. But you know who’s great at making interesting software? Snapchat. You know who has a great relationship with the next generation of phone owners? Snapchat. And do you know whose CEO could probably smile earnestly beside Tim Cook announcing a brighter future for social media unlocked by two privacy-focused companies joining forces? Snapchat. Plus, think of all the fun Snapple jokes!

There’s a chance to take revenge on Facebook if Snapchat wanted to team up with Mark Zuckerberg’s old arch nemesis, Google. After Zuck declared “Carthage must be destroyed,” Google+ flopped and its messaging apps became a fragmented mess. Alphabet has since leaned away from social networking. Of course it still has the juggernaut that is YouTube — a perennial teen favorite alongside Snapchat and Instagram. And it’s got the perfect complement to Snap’s ephemerality in the form of Google Photos, the best-in-class permanent photo archiving tool. With the consumer side of Google+ shutting down after accidentally exposing user data, Google still lacks a traditional social network where being a friend comes before being a fan.

What Google does have is a reputation for delivering the future. From Waymo’s self-driving cars to Calico’s plan to make you live forever, Google is an inventive place where big ideas come to fruition. Spiegel could frame Google as aligned with Snap’s philosophy of creating new ways to organize and consume information that adapt to human behavior. He surely wouldn’t mind being lumped in with Internet visionaries like Larry Page and Sergey Brin. Google’s Android expertise could reinvigorate Snap in emerging markets. And together they could take a stronger swing at Facebook.

But there are problems with all of these options. Buying Snap would be a massive bet for Disney, and Snap’s lingering bad rap as a sexting app might dissuade Mickey Mouse’s overlords. Apple rarely buys such late-stage public companies. CEO Tim Cook has been able to take the moral high ground because Apple makes its money from hardware rather than from ad targeting against personal info. If Apple owned Snap, it’d be in the data exploitation business just like everyone else.

And Google’s existing dominance in software might draw the attention of regulators. The prevailing sentiment is that it was a massive mistake to let Facebook acquire Instagram and WhatsApp, as it centralized power and created a social empire. With Google already owning YouTube, the government might see problems with it buying one of the other most popular teen apps.

That’s why I think Netflix could be a great acquirer for Snap. They’re both video entertainment companies at the vanguard of cultural relevance, yet have no overlap in products. Netflix already showed its appreciation for Snapchat’s innovation by adopting a Stories-like vertical video clip format for discovering and previewing what you could watch. The two could partner to promote Netflix Originals and subscriptions inside of Snapchat. Netflix could teach Snap how to win at exclusive content while gaining a place to distribute video that’s under 20 minutes long.

With a $130 billion market cap, Netflix could certainly afford it. Though since Netflix already has $6 billion in debt from financing Originals, it would have to either sell more debt or issue Netflix shares to Snapchat’s owners. But given Netflix’s high-flying performance, massive market share, and cultural primacy, the big question is whether Snap would drag it down.

So how much would it potentially cost? Snap’s market cap is hovering around $8.8 billion with a $6.28 share price. That’s around its all-time low and just over a quarter of its IPO pop share price high. Acquiring Snap would surely require paying a premium above the market cap. Remember, Google already reportedly offered to acquire Snap for $30 billion prior to its final funding round and IPO. But that was before Snap’s growth rate sunk and it started losing the Stories War to Facebook. A much smaller offer could look a lot prettier now.
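Back-of-envelope, using only the figures above; the 30 percent premium is a generic illustrative assumption for acquisitions of this kind, not a prediction for any actual offer:

```python
market_cap = 8.8e9   # Snap's market cap cited above
share_price = 6.28   # share price cited above

# Implied shares outstanding, purely from the two figures above.
implied_shares = market_cap / share_price  # roughly 1.4 billion shares

# Hypothetical acquisition premium; real premiums vary deal by deal.
premium = 0.30
deal_price = market_cap * (1 + premium)

print(f"~{implied_shares / 1e9:.2f}B shares, deal ~${deal_price / 1e9:.1f}B")
```

Even with a healthy premium, a deal in the low tens of billions would sit far below the $30 billion Google reportedly offered pre-IPO.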

Social networks are hard to kill. If Snap can cut costs, fix its product, improve revenue per user, and score some outside investment, it could survive and slowly climb. If Twitter is any indication, aging social networks can flower again into lucrative businesses given enough time and product care. But if Snapchat wants to play in the big leagues and continue having a major influence on the mobile future, it may have to snap out of the idea that it can win on its own.

Some years ago an investor I met at a TechCrunch event invited me out for a coffee. This happens a lot; as a weekly columnist here I am deemed an official Media Influencer, and people in turn want to influence me, until they realize I’m just going to ignore them and write about whatever weird idea comes into my head instead. I accepted this invitation, though, because this guy’s job was unusually interesting, in a bad way — he represented a venture fund affiliated with the Kremlin.

This was before Russia was the democracy-manipulating enemy it is today, but just after Russia passed its “anti-gay law,” so angry anti-Russian sentiment was exceptionally strong. It was fascinating to me watching this man squirm around the topic: I’m a Bay Area guy, he told me, I’m pro gay rights, pro gay marriage, but we have to accept that every country becomes enlightened at its own speed and its own way, and the best way for us to encourage that, to promote our values, is to engage with them, to show them the right way of doing things.

Needless to say this is a column about Saudi Arabia.

It’s kind of amazing that it’s taken the murder of Jamal Khashoggi to wake people up to that nation’s brutality. For three years now Saudi Arabia has been slaughtering thousands of Yemenis in a needless conflict wherein, to quote Bloomberg quoting the UN, “especially a Saudi Arabian-led coalition and the Yemeni government it backs, have shown a disregard for civilian life possibly amounting to war crimes.” It has long been a totalitarian absolute monarchy allied with what was once a radical interpretation of Islam, Wahhabism, which T.E. Lawrence described a hundred years ago as an obscure “fanatical heresy” — and which has since been mainstreamed with disastrous global consequences as a result of this alliance.

And, of course, it has long been an intimate international ally and partner of the United States. America’s financial / military / consulting / industrial / oil complexes have been in bed with the Saudis for a very, very long time, as have its politicians. Let’s not pretend that Saudi money in the tech industry is in any way exceptionally bad or different. Bad, yes, but as bad as, well, the rest of American society. For a long time the US attitude towards Saudi Arabia seems to have been: “sure, they’re an oppressive dictatorship, but they’re our oppressive dictatorship, and their royal family is very nice and very generous and they control so much oil.”

Now, though, at long last, that attitude seems to be changing. Not that the US is going to stop buying oil from them. Not that the US is going to stop selling weapons to them. But, despite occasional hesitant steps into the twentieth (but definitely not the twenty-first) century, nobody is going to pretend Saudi Arabia is anything other than a brutally oppressive state from here on in. (Shout-out to my homeland for being ahead of the curve on this one.) Which is progress, I guess, of a sort?

You can make a realpolitik case for continuing to engage with Saudi Arabia. Just like my coffee companion five years ago did for continuing to engage with Russia. See how well that turned out, how since then Russia has become so much more enlightened, so progressive, such a glorious contributor to the commonwealth of nations? …Oh. Saudi Arabia is different, yes, but in a worse way; it’s so sensitive to criticism, overreacts so wildly and violently, because it is fundamentally a fragile state. Nassim Taleb, who predicted the collapse of Syria and its civil war before it happened, has predicted a similar fate for Saudi Arabia.

I don’t think the Trump administration is continuing its support for Saudi Arabia’s new and erratic leadership out of fear of the human or economic consequences of doing otherwise. “Trump’s razor:” the stupidest reason is most likely to be correct. Here, that means the administration doesn’t want to walk back its Saudi support because it thinks that would make it look weak. Similarly, who are we kidding, VCs who take money from Saudi LPs aren’t doing so in order to help prop up the Pax Americana; it’s purely because they want the money, and nobody else is prepared to throw around $45 billion in cash.

Right now, though, and for the foreseeable future, sovereign Saudi money is tainted, poisoned, blood money. If you accept it you have to consider the consequences of publicly contravening our new, post-Khashoggi social morality, and the angry criticism which will follow. Will that last? Who can say? Even if it doesn’t, though, you’ll have to consider the consequences of privately contravening your own ethics, if you have any. That was also true last year, and it will still be true next year, no matter how much money we’re talking about.

Hacked Facebook users still don’t know which 15 recent searches and 10 latest checkins were exposed in the massive breach the company detailed last week. Facebook merely noted that those were among the data sets stolen by the attackers. That creates uncertainty for users about how sensitive or embarrassing the scraped data is, and whether it could be used to blackmail or stalk them.

Much of the scraped data from the 14 million most-impacted users out of 30 million total people hit by the breach was biographical and therefore relatively static, such as their birth date, religion or hometown. While still problematic because it could be used for unconsented ad targeting, scams, hacking attempts or social engineering attacks, at least users likely know what was illicitly grabbed.

Thankfully, some of the most sensitive data fields, such as sexual orientation, were not accessed, Facebook confirms to me. But the exposure of recent searches and checkins could threaten users in different ways.

Given the attack was so broad and impacted a wide variety of users, unlike say the targeted attack on the Democratic National Committee, there’s no evidence that blackmailing or stalking individual users was the purpose of the hack. For the average user hit by the breach, the likelihood of this kind of follow-up attack may be low.

But given that public figures, including Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg, were victims of the attack, as well as many reporters (myself included), there remains a risk that the perpetrators paw through the data seeking high-profile people to exploit.

Stolen data on “the 15 most recent searches you’ve entered into the Facebook search bar” could contain embarrassing or controversial topics, competitive business research or potential infidelity. Many users might be mortified if their searches for racy content, niche political viewpoints or their ex-lovers were published in association with their real name. Hackers could potentially target victims with blackmail scams threatening to reveal this info to the world, especially since the hack included user contact info, including phone numbers and email addresses.

Scraped checkins could power real-world stalking or attacks. Users’ exact GPS coordinates were not accessible to the hackers, but they did grab 14 million people’s “10 most recent locations you’ve checked in to or been tagged in. These locations are determined by the places named in the posts, such as a landmark or restaurant, not location data from a device,” Facebook writes. If users checked in to nearby coffee shops, their place of work or even their home if they’ve given it a cheeky name as some urban millennials do, their history of visiting those locations is now in dangerous hands.

If users at least knew which of their searches or checkins were stolen, they could decide whether and how to modify their behavior or better protect themselves. That’s why, among Facebook’s warnings to users about whether they were hacked and what types of data were accessed, it should also consider giving those users the option to see the specific searches or checkins that were snatched.

When asked by TechCrunch, a Facebook spokesperson declined to comment on its plans here. It is understandable that the company might be concerned that disclosing the particular searches and checkins could unnecessarily increase fear and doubt. But if it’s just trying to limit the backlash, it forfeited that right when it prioritized growth and speed over security.

As Facebook tries to recover from the breach and regain the trust of its audience of 2.2 billion, it should err on the side of transparency. If hackers know this information, shouldn’t the hacked users too?