
It’s never a good sign when, in order to discuss the near future of technology, you first have to talk about epidemiology–but I’m afraid that’s where we’re at. A week ago I wrote “A pandemic is coming.” I am sorry to report, in case you hadn’t heard, events since have not exactly proved me wrong.

The best current estimates are that, absent draconian measures like China’s, the virus will infect 40-70% of the world’s adults over the next year or so. (To be extra clear, though, a very sizable majority of cases will be mild or asymptomatic.)

This obviously leads to many questions. The most important is not “can we stop it from spreading?” The answer to that is already, clearly, no. The most important is “will its spread be fast or slow?” The difference is hugely important. To re-up this tweet/graph from last week:

A curve which looks like a dramatic spike risks overloading health care systems, and making everything much worse, even though only a small percentage of the infected will need medical care. Fortunately, it seems likely (to me, at least) that nations with good health systems, strong social cohesion, and competent leadership will be able to push the curve down into a manageable “hill” distribution instead.
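
To make the spike-versus-hill point concrete, here is a toy SIR simulation I put together purely as an illustration. The parameters are invented rather than fitted to Covid-19, but it shows how cutting the transmission rate (roughly what social distancing does) lowers and delays the peak of simultaneous infections:

```python
# Toy SIR model. Illustrative only; parameter values are invented, not fitted.
def peak_infected(beta, gamma=0.1, days=365, dt=1.0):
    """Return the peak fraction of the population infected at the same time."""
    s, i, r = 0.999, 0.001, 0.0          # susceptible, infected, recovered
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt       # new infections this step
        new_rec = gamma * i * dt          # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# "Spike" vs. "hill": halving the transmission rate dramatically lowers the
# simultaneous caseload that health care systems would have to absorb.
print(f"peak infected at beta=0.40: {peak_infected(0.40):.1%}")   # roughly 40%
print(f"peak infected at beta=0.20: {peak_infected(0.20):.1%}")   # roughly 15%
```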

Unfortunately, if (like me) you happen to live in the richest country in the world, none of those three conditions apply. But let’s optimistically assume America’s sheer wealth helps it dodge the bad-case scenarios. What then?

Then we’re looking at a period measured in months during which the global supply chain is sputtering, and a significant fraction of the population is self-isolating. The former is already happening:

It’s hard to imagine us avoiding a recession in the face of simultaneous supply and demand shocks. (Furthermore, if the stock markets keep dropping a couple percent every time there’s another report of spreading Covid-19, we’ll be at Dow 300 and FTSE 75 in a month or two–I expect a steady, daily drip-feed of such news for some time. Presumably traders will eventually figure that out.) So what happens to technology, and the tech industry, then?

Some obvious conclusions: technology which aids and enables remote work / collaboration will see growth. Biotech and health tech will receive new attention. More generally, though, might this accelerate the pace of technological change around the world?

A little over a year ago I wrote a piece entitled “Here comes the downturn” (predicting “Late 2019 or early 2020, says the smart money.”) To quote, er, myself:

The theory goes: every industry is becoming a technology industry, and downturns only accelerate the process. It’s plausible. It’s uncomfortable, given how much real human suffering and dismay is implicit in the economic disruption from which we often benefit. And on the macro scale, in the long run, it’s even probably true. Every downturn is a meteor that hits the dinosaurs hardest, while we software-powered mammals escape the brunt.

Even if so, though, what’s good for the industry as a whole is going to be bad for a whole lot of individual companies. Enterprises will tighten their belts, and experimental initiatives with potential long-term value but no immediate bottom-line benefit will be among the first on the chopping block. Consumers will guard their wallets more carefully, and will be ever less likely to pay for your app and/or click on your ad. And everyone will deleverage and/or hoard their cash reserves like dragons, just in case.

None of that seems significantly less true of a recession caused by a physical shock rather than a mere economic one. My guess is it will be relatively short and sharp, and this time next year both pandemic and recession will essentially be behind us. In the interim, though, it seems very much as if we’re looking at one of the most disconcertingly interesting years in a very long time. Let’s hope it doesn’t get too much more so.

What happens if a Covid-19 coronavirus pandemic hits? It’s time to at least start asking that question. What will the repercussions be, if the virus spreads worldwide? How will it change how we live, work, socialize, and travel?

Don’t get all disaster-movie here. Some people seem to have the notion that a pandemic will mean shutting down borders, building walls, canceling all air travel, and quarantining entire nations, indefinitely. That is entirely incorrect. Containment attempts can slow down an outbreak and buy time to prepare, but if a pandemic hits, by definition, containment has failed, and further attempts will be pointless if not counterproductive. Rather:

The focus will switch from containment to mitigation, i.e. slowing down how fast the virus spreads through a population in which it has taken root. Mitigation can occur via individual measures, such as frequent hand washing, and collective measures, such as “social distancing” — cancellations of mass events, closures, adopting remote work and remote education wherever possible, and so forth.

The slower the pandemic moves, the smoother the demands on health-care systems will be; the less risk those systems will have of becoming overloaded; the more they can learn about how best to treat the virus; and the greater the number of people who may ultimately benefit from a vaccine, if one is developed. I recommend the whole thread above this instructive graph:

An important question for those of us in the media is: how do we report on Covid-19, in this time of great flux and uncertainty? Let me direct you to this excellent Scientific American piece by Harvard’s Bill Hanage and Marc Lipsitch: “How to Report on the COVID-19 Outbreak Responsibly.” (Disclosure / disclaimer: Bill is a personal friend.)

We think reporting should distinguish between at least three levels of information: (A) what we know is true; (B) what we think is true—fact-based assessments that also depend on inference, extrapolation or educated interpretation of facts that reflect an individual’s view of what is most likely to be going on; and (C) opinions and speculation […] facts about this epidemic that have lasted a few days are far more reliable than the latest “facts” that have just come out, which may be erroneous or unrepresentative and thus misleading. […] Distinguish between whether something ever happens and whether it is happening at a frequency that matters.

Read the whole thing. As an opinion columnist, I’m on pretty safe ground, in that everything I write is definitionally (C) in the above taxonomy … but basically everything I’m citing counts as (B).

Which includes the following statement: when I say “if a” in the first paragraph above, I really mean “when the.” A pandemic is coming; the question is at what scale. I recognize that may sound like irresponsible doomsaying. I strongly encourage you to be skeptical, to read widely, and to draw your own conclusions. But the clamor of expert voices is growing too loud for me to ignore. Here’s an entire Twitter thread linking to epidemiologists at Harvard, Johns Hopkins, and the Universities of Basel and Bern, saying so with very little ambiguity:

Don’t panic. There is a great deal we can and will do to limit and mitigate this pandemic. It’s all too easy to imagine fear becoming far more dangerous than the virus itself. Don’t let that happen. It’s also worth noting that its mortality rate is likely significantly lower than the headline 2%, not least because that doesn’t include mild undiagnosed cases:

Furthermore, the rate seems much lower yet for anyone under 60 years old, and enormously lower for anyone under 50. Some more context regarding mitigation:

Unless all of those people cited above are wrong — which seems unlikely — we will all spend the next weeks and months sharing the very strange collective experience of watching, through our laptops and phones, through Twitter and the mass media, the spread of this pandemic through much of the world, in what will seem like slow motion. Our day-to-day lives are ultimately likely to change somewhat. (If your office job isn’t remote-work-friendly today, I assure you, it will be this time next year.) But it will be very far from the end of the world. I suspect we’ll all be surprised by how soon it begins to feel almost normal.

Over the last year or so, much-to-most of the cryptocurrency world has pivoted from the failure of “fat tokens” and ICOs, and the faltering growth of “Layer 2” payments like Lightning and the late Plasma Network, to the new hotness known as “DeFi,” which this week was used to … hack? acquire? steal? It’s pretty ambiguous … a cool million dollars.

DeFi stands for Decentralized Finance. It’s supposed to be an entire alternative financial system. One day, its visionaries say, you will be able to use DeFi to borrow and lend, to buy and sell all kinds of exotic securities, and to acquire insurance and make claims, all via completely decentralized networks and protocols, no banks or brokers or trusted third parties required, just irrevocable and implacable software, “code as law,” with no human beings involved except for you and (maybe) your counterparties, while never having to fill out any paperwork or apply for permissions, and trusting your money to no entity except whoever holds your private key(s). One day.

Many people find this a stirring, inspiring vision. However, DeFi today is very few of those things. Today it allows you to borrow crypto using crypto as collateral; use that lending market to earn interest on your crypto holdings; trade crypto via decentralized exchanges, or DEXes; commit your crypto to liquidity pools, in exchange for a percentage of fees; insure yourself against hacks somewhat; and, well, that’s pretty much it.
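
To make the first of those concrete, here is a minimal sketch of over-collateralized borrowing. The numbers and the 150% collateral ratio are invented for illustration, not any particular protocol's actual parameters:

```python
# Sketch of over-collateralized crypto borrowing. All figures are invented.
def max_borrow(collateral_amount, collateral_price, collateral_ratio=1.5):
    """Largest stablecoin loan this collateral supports at a 150% ratio."""
    return collateral_amount * collateral_price / collateral_ratio

def is_liquidatable(debt, collateral_amount, collateral_price, collateral_ratio=1.5):
    """If the collateral's price falls far enough, the position can be liquidated."""
    return collateral_amount * collateral_price < debt * collateral_ratio

eth = 10.0
print(max_borrow(eth, collateral_price=200.0))               # ~1333 stablecoins
print(is_liquidatable(1300.0, eth, collateral_price=180.0))  # True: price dropped
```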

Some people also call stablecoins, prediction markets like Augur, and security tokens (aka stocks / real estate On The Blockchain) part of DeFi. The first two seem pretty separate to me, though, with the exception of the Dai stablecoin. Security tokens should be DeFi, but are currently an awkward fit because of their strict regulatory requirements, and anyway haven’t exactly taken the world by storm.

I should know; I spent some weeks eighteen months ago coding a security token. I’ve been writing about cryptocurrencies here for nine years. And I have followed the growth of DeFi with … well … eye-watering boredom, along with some dismay, until this week.

DeFi seems to me more like cosplaying a financial system than an actual viable alternative. I don’t see it crossing that divide any time soon, if ever. It even cosplays the De in its name, since very few of today’s DeFi offerings (beyond its base layers) are actually decentralized — as in, beyond the control of some kind of centralized administration — or have any real schedule for becoming so.

Technically it’s all pretty cool, I concede. But what is the point of “borrowing money using money as collateral” for the 99.9% of people who aren’t true-believer HODLers loath to even consider simply selling their crypto? Even if you accept the “floating cryptocurrencies are like gold, stablecoins are like money” analogy, this entire system only really benefits the vanishingly small number of whales who own sizable amounts of cryptocurrency already. Perhaps we shouldn’t be surprised that they who hold that gold have made the new rules, but it’s a bit much to ask that the rest of us genuflect in awe and call them the future.

Similarly, it’s nice that you can earn a little interest on your crypto holdings, but for floating cryptocurrencies, that trickle will be drowned out by the rogue-wave-like price swings in their valuations for the foreseeable future. (For instance, much of the credit for the “more than $1 billion locked into DeFi contracts,” much cited across the industry, should go to the recent rise in valuations rather than increasing participation.) Even for stablecoin collateral, no reasonable analyst would consider the interest rates commensurate with the risk —

— because, as the events of this week point out, that risk is immense. Credit where it’s due: those events were made possible because of a genuinely novel innovation, a “flash loan,” wherein an anonymous party can borrow an arbitrary amount of money — yes, you read that correctly — providing that they ensure it’s all paid back by the end of a single smart-contract transaction. Think of it as an ATM giving you all the money you want, but locking the door until you deposit it all back.
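
For the mechanically minded, here is a minimal sketch of that invariant. This is illustrative Python with invented names, not actual smart-contract code; the point is only that the borrowed funds either come back within the single transaction or the whole thing unwinds as if it never happened:

```python
# Sketch of the flash-loan invariant. Names and numbers are invented.
class RepaymentFailed(Exception):
    pass

def flash_loan(pool_balance, amount, steps):
    """Lend `amount` for the duration of one atomic transaction.

    `steps` is a callable that receives the borrowed funds and must hand back
    at least `amount`; otherwise the transaction reverts and, effectively, the
    loan never happened.
    """
    returned = steps(amount)       # arbitrary intermediate steps: trades, etc.
    if returned < amount:
        raise RepaymentFailed("reverted: funds never actually left the pool")
    return pool_balance + (returned - amount)   # pool keeps any surplus or fee

# The borrower can do anything in between, as long as the loan is whole again
# by the end of the transaction.
print(flash_loan(1_000_000, 500_000, steps=lambda funds: funds + 1_000))  # 1001000
```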

That may seem surreal and pointless, but the thing about DeFi is, a single transaction can include many different steps between the borrow and the payback. This week’s two hacks took advantage of that fact. The first used half the flash loan to short the price of bitcoin, and the other half to borrow a lot of bitcoin, which it sold to temporarily lower its price — then claimed the short profits. It also took advantage of a bug in a smart contract intended to catch such transactions.

The second used some of the loan to borrow a lot of a cryptocurrency, then the rest to bid that up in value, then used that increased value as collateral to borrow even more, then paid back the loan and kept the increased value. It didn’t appear to take advantage of any bugs at all. Combined, they reaped roughly a cool million dollars’ worth of cryptocurrency.

Were these thefts? Were these totally legitimate arbitrage plays, using the system(s) as programmed, and, at least in the second case, apparently as designed? You can at least make a reasonable case either way.

The risks certainly do not stop there. People have even floated compelling-sounding theories suggesting how a hacker could extract the entire reserves of MakerDAO, the system behind the Dai stablecoin, which represents more than half of the combined committed value of all DeFi. In fairness, the responsible people involved will cheerfully tell you that these are bleeding-edge systems with fairly broad attack surfaces, and you probably don’t want to commit money to them that you can’t afford to lose.

But all this cosplay, clever as it is, doesn’t help solve any of the hard problems preventing cryptocurrencies from mattering to most. The oracle problem: if you rely on third parties to tell the blockchain what to do, then why not just rely on third parties to manage your money? (While also offering valuable things like a help number and recourse in the case of erroneous transactions.) The identity problem: how can you implement decentralized identity and reputation, so that you can offer credit based on someone’s history and status, rather than current cryptocurrency holdings?

Working on those problems would actually help to “bank the unbanked,” something that many cryptocurrency people used to pretend to care about. They would actually reduce the power that gargantuan centralized financial establishments hold over ordinary people. They could lead to an actual decentralized financial system which, even if only 1% of the population actually use it, would keep the giants honest simply by providing a viable alternative in case they became too draconian.

Please don’t start talking about Venezuela or Zimbabwe. Unlike you, I actually spent time in Zimbabwe during hyperinflation. If we wanted to use cryptocurrencies to help the masses suffering under profligate governments using increasingly worthless fiat currencies — which I absolutely agree is a noble goal — we wouldn’t be spending our time, effort, and intellectual horsepower on the ability to use cryptocurrency A as collateral for loans denominated in cryptocurrency B. They are completely orthogonal.

Instead of tackling the hard problems, or bringing crypto to people who need it, DeFi today seems to be mostly about creating an alternative financial system which makes life mildly more convenient for those whales who happened to wind up holding a big bag of cryptocurrencies after the first few booms. And as this week’s events show, it may not even be good at that. Please can we get back to the important problems?

NASA’s Jet Propulsion Laboratory designs, builds, and operates billion-dollar spacecraft. That makes it a target. What the infosec world calls Advanced Persistent Threats — meaning, generally, nation-state adversaries — hover outside its online borders, constantly seeking access to its “ground data systems,” its networks on Earth, which in turn connect to the ground relay stations through which those spacecraft are operated.

Their presumptive goal is to exfiltrate secret data and proprietary technology, but the risk of sabotage of a billion-dollar mission also exists. Over the last few years, in the wake of multiple security breaches which included APTs infiltrating their systems for months on end, the JPL has begun to invest heavily in cybersecurity.

I talked to Arun Viswanathan, a key NASA cybersecurity researcher, about that work, which is a fascinating mix of “totally representative of infosec today” and “unique to the JPL’s highly unusual concerns.” The key message is firmly in the former category, though: information security has to be proactive, not reactive.

Each mission at JPL is like its own semi-independent startup, but their technical constraints tend to be very unlike those of Valley startups. Mission software, for instance, is usually homegrown and innovative, because its requirements are so much more stringent: you absolutely cannot have software going rogue and consuming 100% of CPU on a space probe.

Successful missions can last a very long time, so the JPL has many archaic systems, multiple decades old, which are no longer supported by anyone; they have to architect their security solutions around the limitations of that ancient software. Unlike most enterprises, they are open to the public, who tour the facilities by the hundred. Furthermore, they have many partners, such as other space agencies, with privileged access to their systems.

All that … while being very much the target of nation-state attackers. Theirs is, to say the least, an interesting threat model.

Viswanathan has focused largely on two key projects. One is the creation of a model of JPL’s ground data systems — all its heterogeneous networks, hosts, processes, applications, file servers, firewalls, etc. — and a reasoning engine on top of it. This then can be queried programmatically. (Interesting technical side note: the query language is Datalog, a non-Turing-complete offshoot of venerable Prolog which has had a resurgence of late.)

Before this model, no one person could confidently answer “what are the security risks of this ground data system?” As with many decades-old institutions, that knowledge was largely trapped in documents and brains.

With the model, ad hoc queries such as “could someone in the JPL cafeteria access mission-critical servers?” can be asked, and the reasoning engine will search out pathways, and itemize their services and configurations. Similarly, researchers can work backwards from attackers’ goals to construct “attack trees,” paths which attackers could use to conceivably reach their goal, and map those against the model, to identify mitigations to apply.
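
Purely to illustrate the flavor of such a query (these hosts are invented, and a simple reachability check here stands in for the real Datalog model, which I have not seen):

```python
# Toy network model and a reachability query over it. Hosts are invented.
links = {
    ("cafeteria_wifi", "guest_gateway"),
    ("guest_gateway", "corp_fileserver"),
    ("corp_fileserver", "ops_jumpbox"),
    ("ops_jumpbox", "mission_critical_server"),
}

def reachable(src, dst, links):
    """Transitive closure: is there any chain of links from src to dst?"""
    frontier, seen = {src}, {src}
    while frontier:
        frontier = {b for (a, b) in links if a in frontier and b not in seen}
        seen |= frontier
    return dst in seen

# "Could someone in the cafeteria reach a mission-critical server?"
print(reachable("cafeteria_wifi", "mission_critical_server", links))  # True
```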

His other major project is to increase the JPL’s “cyber situational awareness” — in other words, instrumenting their systems to collect and analyze data, in real time, to detect attacks and other anomalous behavior. For instance, a spike in CPU usage might indicate a compromised server being used for cryptocurrency mining.

In the bad old days, security was reactive: if someone had a problem and couldn’t access their machine, they’d call, but that was the extent of their observability. Nowadays, they can watch for malicious and anomalous patterns which range from the simple, such as a brute-force attack indicated by many failed logins followed by a successful one, to the much more complex, e.g. machine-learning based detection of a command system operating outside its usual baseline parameters.
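
The simple end of that spectrum is easy to sketch. Here is an illustrative toy detector for the many-failures-then-success pattern, with an invented threshold:

```python
# Toy brute-force detector: flag a source that fails many logins, then succeeds.
# The threshold is invented; a real system would tune it and add time windows.
from collections import defaultdict

def detect_bruteforce(events, threshold=10):
    """events: iterable of (source_ip, success) pairs in time order."""
    failures = defaultdict(int)
    alerts = []
    for source_ip, success in events:
        if success:
            if failures[source_ip] >= threshold:
                alerts.append(source_ip)
            failures[source_ip] = 0
        else:
            failures[source_ip] += 1
    return alerts

events = [("10.0.0.5", False)] * 12 + [("10.0.0.5", True)]
print(detect_bruteforce(events))  # ['10.0.0.5']
```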

Of course, sometimes it’s just an anomaly, not an attack. Conversely, this new observability is also helping to identify system inefficiencies, memory leakage, etcetera, proactively rather than reactively.

This may all seem fairly basic if you’re accustomed to, say, your Digital Ocean dashboard and its panoply of server analytics. But re-engineering an installed base of heterogeneous complex legacy systems for observability at scale is another story entirely. Looking at the borders and interfaces isn’t enough; you have to observe all the behavior inside the perimeter too, especially in light of partners with privileged access, who might abuse that access if compromised. (This was the root cause of the infamous 2018 attack on the JPL.)

While the JPL’s threat model is fairly unique, Viswanathan’s work is quite representative of our brave new world of cyberwarfare. Whether you’re a space agency, a big company, or a growing startup, your information security nowadays needs to be proactive. Ongoing monitoring of anomalous behavior is key, as is thinking like an attacker; reacting after you find out something bad happened is not enough. May your organization learn this the easy way, rather than joining the seemingly endless stream of headlines telling us all of breach after breach.

The 2019-nCoV coronavirus is a global public health emergency of significant concern. It is also, simultaneously, a fount of misinformation, wild conspiracy theories, and both over- and under-reactions. Whose fault is this? So glad you asked. I happen to have a little list.

Purveyors of misinformation. As archly observed by The Atlantic, that misleadingly-self-described Harvard epidemiologist who tweeted “HOLY MOTHER OF GOD” followed by math errors was … well … wrong.

However, he pales in comparison to the bioweapon theorists at Zero Hedge (who were banned from Twitter as a result, apparently for doxxing a Chinese scientist), and let’s not forget to shake a finger of blame at the people who posted or linked to the much-debunked, non-peer-reviewed paper claiming to find signs of HIV insertions in the coronavirus.

Science itself. Why would people link to that paper? Well, because non-peer-reviewed preprints are often mistaken by the general public for peer-reviewed science. Why are preprints so increasingly important? Because awful, predatory scientific publishers massively overcharge for access to scientific papers, often even when they’re funded by public money.

Social media. Not to belabor my dead horse here, but what you see on your social media is determined by algorithms optimized for engagement, which frequently means outrage. That viral HOLY MOTHER OF GOD tweet would have been more of a minor blip if Twitter still kept to strict chronological timelines. Note that this would also make “good” tweets far less viral. That would be the price we pay for abandoning the engagement algorithms, but it seems at least plausible that it would still lead to a better world.

General innumeracy: You remember how I mentioned people were underreacting too? I have seen so, so many self-identified galaxy-brain thinkers informing us that it’s silly to be so concerned about the coronavirus when the flu kills more people. I’ve even seen a handy Myths and Facts infographic wandering all over Facebook, ‘informing’ us all that “the common flu kills 60 times more people annually than Corona.”

People, the flu and 2019-nCoV are not comparable. It’s apples to zebras. We know exactly what to expect from the flu: we don’t yet know what to expect from this new virus. That’s why it’s of concern. You especially cannot compare annual death rates, because we don’t know what this new virus’s annual death rate is; it has only existed in humans for two months. Sheesh.

Human nature. This is arguably the big one. On some level, everyone loves an apocalypse, in that it’s a narrative they completely understand, one they can envision and have envisioned for themselves. So anything in the real world associated with an apocalypse gets clicks, commentary, and reshares.

I should know: when not writing for TechCrunch I happen to be the director of the GitHub Archive Program, which includes a whole bunch of present-day archiving, as well as very-long-term 1,000-year storage which is primarily intended for historical or recovering-abandoned-technologies usage … and yet everyone’s mind, whenever I talk about it, immediately jumps to “A Canticle for Leibowitz”-style postapocalyptic scenarios, and stays there.

Which is fine! I mean, I appreciate that everyone’s interested in the project and has ideas about it, just as I appreciate that the coronavirus is a global public health emergency, and people should be paying close attention to it. But our collective fondness for apocalyptic narratives, combined with the other contributors above, may, if we’re not careful, transmute that attention into belief in wacky conspiracy theories and misinformation. Please stop to think, before you believe, and before you share.

Facebook’s internal “Supreme Court” can’t set precedents, can’t make decisions about Facebook Dating or Marketplace, and can’t oversee WhatsApp, Oculus, or any messaging feature, according to the bylaws Facebook proposed today for its Oversight Board. It’s designed to provide an independent appeals process for content moderation rulings. But it will only be able to challenge content taken down, not left up, until at least later this year, so it likely won’t be able to remove misinformation in political ads allowed by Facebook’s controversial policy before the 2020 election.

Oh, and this Board can’t change its own bylaws without Facebook’s approval.

The result is an Oversight Board that does not have deep or broad power to impact Facebook’s ongoing policies — only to clean up a specific instance of a botched decision. It will allow Facebook to point to an external decision maker when it gets in hot water for potential censorship, deferring responsibility.

That said, it’s better than nothing. Currently Facebook simply makes these decisions internally with little recourse for victims. It will also force Facebook to be a little more transparent about its content moderation rule-making, since it will have to publish explanations for why it does or doesn’t adopt the policy change recommendations.

But for Facebook to go to so much work consulting 2,200 people in 88 countries for feedback on its plans to create the Oversight Board, then propose bylaws that keep its powers laughably narrow, feels like an emblem of Facebook’s biggest criticisms: that it talks a big game about privacy and safety, but its actions serve to predominantly protect its power and control over social networking.

For starters, the Board is funded for six years with an irrevocable $130 million from Facebook, but Facebook could let that funding expire after that. Decisions can take up to 90 days to reach, and another 7 days to implement once made, so the Board isn’t designed for rapid response to viral issues.

One major issue is that Facebook is choosing the co-chairs of the Board who will then pick the 40 initial Board members who’ll choose the future members. Facebook has already picked these co-chairs but won’t reveal them until next month. A controversial or biased co-chair could influence all future decisions of the Board by choosing its membership. We also don’t know if Facebook asked candidates for the co-chair positions about their views on issues like misinformation in political ads and if that influenced who was offered the position.

You can expect a lot of backlash if Facebook chooses an overtly liberal or conservative co-chair, or one firmly aligned with or opposed to the current presidential administration. That backlash will be justified, considering that a single co-chair more motivated by politics than by what’s right for Facebook’s 2 billion users could have disastrous implications for its content policies.

In one of the most worrying quotes I’ve ever seen from a Facebook executive in 10 years of reporting on the company, VP of global policy Nick Clegg told Wired’s Steven Levy that “We know that the initial reaction to the Oversight Board and its members will basically be one of cynicism—because basically, the reaction to pretty well anything new that Facebook does is cynical.”

So Clegg has essentially disarmed all criticism of Facebook and crystallized the company’s defensive stance…which is one of pure cynicism. He’s essentially saying “Why should we listen to anyone? They hate anything we do.” That’s a dangerous path, and again one that embodies exactly what society is so concerned about: that one of the most powerful companies in the world, in charge of fundamental communications utilities, actually doesn’t care what the public has to say.

Clegg also emphatically told Wired that the Board won’t approach the urgent issue of misinformation in political ads before the 2020 election because the Board needs time to “find its feet”. Not only does that mean the Board can’t rule on perhaps the most important and controversial of Facebook’s policy decisions until the damage from campaign lies is done. It also implies Clegg and Facebook have the ability to influence what cases the Board doesn’t look at, which is exactly what the purported autonomy of the Board is meant to prevent.

In the end, while the Oversight Board’s decisions on a specific piece of content are binding, Facebook has leeway when deciding whether to apply them to similar existing pieces of content. And in what truly makes the Board toothless, Facebook only has to take the Board’s guidance on changes to policy going forward under consideration. The Board can’t set precedents. Recommendations will go through Facebook’s “policy development process” and receive “thorough analysis”, but Facebook can then just say ‘nope’.

I hope the chosen co-chairs and eventual members refuse to ratify this set of bylaws without changes. Facebook’s initial intention for the Oversight Board seemed sound. Some decisions about how information flows in our society should be bigger than Mark Zuckerberg. But the devil in the details says he still gets the final say.

The interesting thing about the technology business is that, most of the time, it’s not the technology that matters. What matters is how people react to it, and what new social norms they form. This is especially true in today’s era, well past the midpoint of the deployment age of smartphones and the Internet.

People — smart, thoughtful people, with relevant backgrounds and domain knowledge — thought that Airbnb and Uber were doomed to failure, because obviously no one would want to stay in a stranger’s home or ride in a stranger’s car. People thought the iPhone would flop because users would “detest the touch screen interface.” People thought enterprise software-as-a-service would never fly because executives would insist on keeping servers in-house at all costs.

These people were so, so, so wrong; but note that they weren’t wrong about the technology. (Nobody really argued about the technology.) Instead they were dead wrong about other people, and how their own society and culture would respond to this new stimulus. They were anthropologically incorrect.

This, of course, is why every major VC firm, and every large tech company, keeps a crack team of elite anthropologists on call at all times, with big budgets and carte blanche, reporting directly to the leadership team, right? (Looks around.) Oh. Instead they’re doing focus groups and user interviews, asking people in deeply artificial settings to project their usage of an alien technology in an unknown context, and calling that their anthropological, I’m sorry, their market research? Oh.

I kid, I kid. Sort of, at least, in that I’m not sure a crack team of elite anthropologists would be all that much more effective. It’s hard enough getting an accurate answer of how a person would use a new technology when that’s the only variable. When they live in a constantly shifting and evolving world of other new technologies, when the ones which take root and spread have a positive-feedback-loop effect on the culture and mindset towards new technologies, and when every one of your first twenty interactions with new tech changes your feelings about it … it’s basically impossible.

And so: painful trial and error, on all sides. Uber and Lyft didn’t think people would happily ride in strangers’ cars either; that’s why Uber started as what is now Uber Black, basically a phone-summoned limo service, and Lyft used to have that painfully cringeworthy “ride in the front seat, fist-bump your driver” policy. Those are the success stories. The graveyard of companies whose anthropological guesses were too wrong to pivot to rightness, or who couldn’t / wouldn’t do so fast enough, is full to bursting with tombstones.

That’s why VCs and Y Combinator have been much more secure businesses than startups; they get to run dozens or hundreds of anthropological experiments in parallel, while startups get to run one, maybe two, three if they’re really fast and flexible, and then they die.

This applies to enterprise businesses too, of course. Zoom was an anthropological bet that corporate cultures could make video conferencing big and successful if it actually worked reliably. It’s easy to imagine the mood among CEOs instead being “we need in-person meetings to encourage those Moments of Serendipity,” which you’ll notice is the same argument that biased so many big companies against remote work and in favor of huge corporate campuses … an attitude which looks quaint, old-fashioned, and outmoded, now.

This doesn’t just apply to the deployment phase of technologies. The irruption phase has its own anthropology. But irruption affects smaller sectors of the economy, whose participants are mostly technologists themselves, so it’s more anthropologically reasonable for techies to extrapolate from their own views and project how that society will change.

The meta-anthropological theory held by many is that what the highly technical do today, the less technical will do tomorrow. That’s a belief held throughout the tiny, wildly non-representative cryptocurrency community, for instance. But even if it was true once, is it still? Or is a shift away from that pattern itself another, larger social change? I don’t know, but I can tell you how we’re going to find out: painful trial and error.

Time is supposed to make technology better. The idea is simple: With more time, humans make newer, better technology and our lives improve. Except for when the opposite happens.

Google is a good example of this. I’ve been harping on the matter for a while now. Google mobile search, in case you haven’t used it lately, is bad. It often returns bloated garbage that looks like a cross between new Yahoo and original Bing.

Here’s how it butchered a search query for “Metallica” this morning:

Remember when that interface was simpler, and easier to use, and didn’t try to do literally every possible thing for every possible user at once?

It’s not just Google’s mobile search interface that makes me want to claw my eyes out and learn how to talk to trees. Everyone now knows that Mountain View has effectively given up on trying to distinguish ads from organic results (Does the company view them as interchangeable? Probably?). TechCrunch’s Natasha Lomas covered the company’s recent search result design changes today, calling them “user-hostile,” going on to summarize the choices as its “latest dark pattern.”

Google, once fanatical about super-clean, fast results, is now trying to help you way too much on mobile and fool you on Chrome.

Chrome itself kinda sucks and is getting worse. But we all know that. Of course, that all this is shaking out around the same time that the company’s founders left is, you know, not shocking.

I’d also throw TweetDeck into the mix. It’s garbage slow and lags and sucks RAM. Twitter has effectively decided that its power users are idiots who don’t deserve good code. Oh, and Twitter is deprecating some cool analytics features it used to give out to users about their followers.

Chrome and TweetDeck are joined by apps like Slack that are also slowing down over time. It appears that as every developer writes code on a computer with 64,000 gigs of RAM, they presume that they can waste everyone else’s. God forbid if you have the piddling 16 gigs of RAM that my work machine has. Your computer is going to lag and often crash. Great work, everyone!

Also, fuck mobile apps. I have two phones now because that’s how 2020 works and I have more apps than I know what to do with, not to mention two different password managers, Okta and more. I’m so kitted out I can’t breathe. I have so many tools available to me I mostly just want to put them all down. Leave me alone! Or only show me the thing I need — not everything at once!

Anyhoo video games are still pretty good as long as you avoid most Battle Royale titles, micropayments, and EA. Kinda.

If I watch a Story cross-posted from Instagram to Facebook on either of the apps, it should appear as “watched” at the back of the Stories row on the other app. Why waste my time showing me Stories I already saw?

It’s been over two years since Instagram Stories launched cross-posting to Stories. Countless hours have been squandered by each feature’s 500 million daily users viewing repeats. Facebook and Messenger already synchronize the watched/unwatched state of Stories. It’s long past time that this was expanded to encompass Instagram.
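
The fix seems conceptually simple. Here is a toy sketch of the idea, emphatically not Facebook's actual implementation: key watched state on a canonical ID shared by both copies of a cross-posted Story, so a view in either app marks it seen in both:

```python
# Illustrative sketch only; IDs and the mapping are invented.
watched = set()                       # per-viewer store of canonical story IDs
cross_post_map = {                    # (app, app-specific ID) -> canonical ID
    ("instagram", "ig_123"): "story_abc",
    ("facebook", "fb_789"): "story_abc",
}

def canonical(app, story_id):
    return cross_post_map.get((app, story_id), (app, story_id))

def mark_watched(app, story_id):
    watched.add(canonical(app, story_id))

def is_watched(app, story_id):
    return canonical(app, story_id) in watched

mark_watched("instagram", "ig_123")       # viewer watches the Story on Instagram
print(is_watched("facebook", "fb_789"))   # True: shown as watched on Facebook too
```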

I asked Facebook and Instagram if there were plans for this. A company spokesperson told me that it built cross-posting to make it easier to share to people’s different audiences on Facebook and Instagram, and that it’s continuing to explore ways to simplify and improve Stories. But they gave no indication that Facebook realizes how annoying this is or that a solution is in the works.

The end result if this gets fixed? Users would spend more time watching new content, more creators would feel seen, and Facebook’s choice to jam Stories into all its apps would feel less redundant and invasive. If I send a reply to a Story on one app, I’m not going to send it again, or something different, when I see the same Story on the other app a few minutes or hours later. Repeated content leads to more passive viewing and less interactive communication with friends, despite Facebook and Instagram stressing that it’s this zombie consumption that’s unhealthy.

The only possible downside to changing this could be fewer Stories ad impressions, if secondary viewings of people’s best friends’ Stories keep them watching more than new content does. But prioritizing making money over the user experience is, again, exactly what Mark Zuckerberg has emphasized is not Facebook’s strategy.

There’s no need to belabor the point any further. Give us back our time. Stop the reruns.

The most interesting thing I saw online this week was Venkatesh Rao’s “Internet of Beefs” essay. I don’t agree with all of it. I’m not even sure I agree with most of it. But it’s a sharp, perceptive, well-argued piece which offers an explanation for why online public spaces have almost all become battlefields, or, as he puts it:

“are now being slowly taken over by beef-only thinkers … Anything that is not an expression of pure, unqualified support for whatever they are doing or saying is received as a mark of disrespect, and a provocation … as the global culture wars evolve into a stable, endemic, background societal condition of continuous conflict.” He goes on to taxonomize the online knights and mooks who fight in this conflict, in incisive detail.

I agree this continuous conflict exists. (There exists another theory arguing that it’s really mostly bots and disinformation ops. Maybe, I guess, but that claim seems increasingly unconvincing.) I think this seething tire-fire conflict is part of something larger: the transition of the marketplace of ideas from a stock market into a weapons market.

Once, the idea was, there existed a “marketplace of ideas,” wherein people from across the political spectrum — generally the highly educated, but with some room for notions bubbling up from the grassroots — would introduce ideas for initiatives, actions, programs, and/or laws. These ideas would be considered, contrasted, debated, honed, amended, and weighed, and over time, in the same way stock markets identify the best companies, the marketplace of ideas would identify the finest concepts. These in turn would then see actual implementation, courtesy of those in power — i.e. the rich and the elected — for the greater good of all.

This was the world of think tanks, of policy documents, of presentations at important conferences, of reporting breathlessly on major speeches, of trial-balloon op-eds, of congressional and parliamentary testimony, of councils and summits and studies that produced lavishly bound reports with the expectation that they would be seriously and judiciously considered by all sides of a debate. It was a world where new ideas might climb the hierarchy of the so-called great and good until they rose high enough that it was seen fit to actually implement them.

I don’t know if you’ve noticed, but if we ever lived in a world anything like that, well, we don’t any more. Some reject it on the (correct) grounds that this so-called marketplace of ideas, shockingly, always seemed to favor entrenching the interests of those “great and good,” the rich and the elected, the councilors and the presenters, rather than the larger population. Others simply want more for themselves and less for everyone else, rather than aiming for any kind of Pareto-optimal ideal outcome for all.

Nowadays the primary goal is to win the conflict, and other outcomes are at best secondary. Policy documents and statistical analyses are not taken for serious across-the-board consideration; they are simply weapons, or fig leaves, to serve as defenses or pretexts for decisions which have already been made.

This may seem so self-evident that it’s not even worth writing about — you probably need only consider your local national politics — but the strange thing is that so many of the participants in the whole apparatus, the policy analysts and think tankers and speechgivers and presenters, don’t seem to realize that nowadays their output is used as weapons and pretexts, rather than ideas to compete with other ideas in a rational marketplace.

Let’s pick a few relatively apolitical/acultural ones, to minimize the chance of your own ingrained conflict responses kicking in. Consider NIMBYism in Bay Area real estate: the opposition to building more housing on the grounds that this could not possibly lower housing prices. It’s a perfect example of a low-level constant conflict in which all participants have long since decided on their sides. There is no point in bringing conflicting data to a NIMBY (and, of course, they would say the same about a YIMBY like myself) as they will find a way to dismiss or ignore it. You can lead a horse to data, but you can’t make it think.

A couple more low-politics examples from my own online spaces: in the cryptocurrency world, most participants are so incentivized to believe in their One Truth that nearly every idea or proposal leads to an angry chorus denouncing all other truths. Or consider advocates of greater law enforcement “lawful access” to all encrypted messaging, vs. my own side, that of privacy advocates devoutly opposed to such. Neither side seems particularly interested in actually seriously considering any new data or new idea which might support the other side’s arguments. The dispute is more fundamental than that.

There exist a few remaining genuine marketplaces of ideas. Engineering standards and protocols, for one. (Yes, politics and personal hobbyhorses / vendettas get everywhere, even there, but relatively speaking.) The law, for another, albeit seemingly decreasingly so. But increasingly, academic papers, policy analyses, cross-sectional studies, closely argued op-eds, center-stage presentations, etc., are all artifacts of a world which no longer exists, if it ever really did. Nowadays these artifacts are largely just used to add a veneer of respectability to pre-existing tribal beliefs.

This isn’t true of every politician, CEO, billionaire, or other decisionmaker. And it’s certainly more true of one side than the other. But the increasingly irrelevant nature of our so-called marketplace of ideas seems hard to ignore. Perhaps, when it comes to the tangible impact of these ceaseless online coal-fire conflicts, that old joke at the expense of academia applies: the discourse is so vicious because the stakes are so small.

A strange new sensation has settled across the tech industry, one so foreign, so alien, it’s almost hard to recognize. A sense that some great expectations are being radically revised downwards; that someone has turned down a previously unquenchable money spigot; that unit economics can matter even when you’re in growth mode. Could it be … thrift?

Well, OK, let’s not go that crazy. But we are witnessing a remarkable confluence of (relatively) parsimonious events. Last year’s high-profile tech IPOs are far from high-fliers: Uber, Lyft, Slack, Pinterest, and Peloton are all down from their IPO prices as I write this, some of them significantly so, even while the overall market has climbed to all-time highs. Those who expected immediate massive wealth six months later, even relatively recent employees, have been surprised.

Meanwhile, not-yet-public companies are tightening their belts, or taking their chances. We have seen recent waves of layoffs at a spectrum of tech unicorns. Others, such as Casper and One Medical, just filed for IPOs to general criticism if not outright derision of the numbers in their S-1s.

The less said about the WeWork debacle, the better, but we can’t not talk about it, as the repercussions have been significant. Both directly — SoftBank is pulling back significantly, including walking away from term sheets, prompting more layoffs — and indirectly, in that they seem to have swung the Valley’s overall mood from greed towards fear.

Towards fear, please note, not to fear; there’s a big difference. Even in the absence of SoftBank there is still a whole lot of venture money sloshing around out there … although it seems possible that its investors are beginning to find it a little harder to spend it responsibly. VCs, correctly, are generally still extremely optimistic about the overall future of the tech industry, and still tend to focus on growth first, revenue a distant second, cash flow third, and profits maybe someday eventually depending on a lot of factors.

That said, the once-pervasive sense that everything tech touches immediately turns to gold is much diminished. It’s worth noting that many pure software companies, and their IPOs, are still very successful: Zoom, Docusign, Datadog, and a lot of other companies you’ve never heard of unless you’re an enterprise software fetishist are doing quite nicely, thanks. It’s only consumer tech which seems to be either currently disappointing or previously overvalued, depending on your point of view. Software is continuing to eat the world.

But there seems to be a growing recognition that the world is a forest, not a pizza, and there is a big difference between low-hanging fruit and eggs hidden in the high branches. Just because you use some custom software doesn’t make you a software company; it just means you’re paying today’s table stakes. So if you’re not a software company, and you’re not a hardware company … then how exactly are you a tech company?

By that rubric, which seems like a pretty reasonable one, WeWork isn’t a tech company, and never was. Casper isn’t a tech company. One Medical isn’t a tech company. (This is admittedly highly anecdotal, but judging from my own household’s recent experiences, One Medical’s new software systems seem to have degraded rather than improved their level of care.) They’ve been dressed up like tech companies to adopt the tech halo, but it looks awfully unconvincing on them — and they’ve done so just as that halo has begun to slip.

Maybe this multi-market malaise is temporary, a hangover from a few overhyped IPOs and last year’s SoftBank madness. Maybe the tech wheat will be separated from the wannabe chaff soon enough, and the former will continue to prosper. Or maybe, just maybe, we’re beginning to see the end of the golden days of low-hanging fruit, and increasingly only hard science or hard software will be the paths to tech success. It’s a little unclear which way to hope.

Look around: what is happening? Australia, AI, Ghosn, Google, Suleimani, Starlink, Trump, TikTok. The world is an eruptive flux of frequently toxic emergent behavior, and every unexpected event is laced with subtle interconnected nuances. Stephen Hawking predicted this would be “the century of complexity.” He was talking about theoretical physics, but he was dead right about technology, societies, and geopolitics too.

Let’s try to define terms. How can we measure complexity? Seth Lloyd of MIT, in a paper which drily begins “The world has grown more complex recently, and the number of ways of measuring complexity has grown even faster,” proposed three key categories: difficulty of description, difficulty of creation, and degree of organization. Using those three criteria, it seems apparent at a glance that both our societies and our technologies are far more complex than they ever have been, and rapidly growing even more so.

The thing is, complexity is the enemy. Ask any engineer … especially a security engineer. Ask the ghost of Steve Jobs. Adding complexity to solve a problem may bring a short-term benefit, but it invariably comes with an ever-accumulating long-term cost. Any human mind can only encompass so much complexity before it gives up and starts making slashing oversimplifications with an accompanying risk of terrible mistakes.

You may have noted that those human minds empowered to make major decisions are often those least suited to grappling with nuanced complexity. This itself is arguably a lingering effect of growing complexity. Even the simple concept of democracy has grown highly complex — party registration, primaries, fundraising, misinformation, gerrymandering, voter rolls, hanging chads, voting machines — and mapping a single vote for a representative to dozens if not hundreds of complex issues is impossible, even if you’re willing to consider all those issues in depth, which most people aren’t.

Complexity theory is a rich field, but it’s unclear how it can help ordinary people trying to make sense of their world. In practice, people deal with complexity by coming up with simplified models close enough to the complex reality to be workable. These models can be dangerous — “everyone just needs to learn to code,” “software does the same thing every time it is run,” “democracies are benevolent” — but they have been useful enough to make fitful progress possible.

In software, we at least recognize this as a problem. We pay lip service to the glories of erasing code, of simplifying functions, of eliminating side effects and state, of deprecating complex APIs, of attempting to scythe back the growing thickets of complexity. We call complexity “technical debt” and realize that at least in principle it needs to be paid down someday.

“Globalization should be conceptualized as a series of adapting and co-evolving global systems, each characterized by unpredictability, irreversibility and co-evolution. Such systems lack finalized ‘equilibrium’ or ‘order’; and the many pools of order heighten overall disorder,” to quote the late John Urry. Interestingly, software could be viewed that way as well, interpreting, say, “the Internet” and “browsers” and “operating systems” and “machine learning” as global software systems.

Software is also something of a best possible case for making complex things simpler. It is rapidly distributed worldwide. It is relatively devoid of emotional or political axegrinding. (I know, I know. I said “relatively.”) There are reasonably objective measures of performance and simplicity. And we’re all at least theoretically incentivized to simplify it.

So if we can make software simpler — both its tools and dependencies, and its actual end products — then that suggests we have at least some hope of keeping the world simple enough such that crude mental models will continue to be vaguely useful. Conversely, if we can’t, then it seems likely that our reality will just keep growing more complex and unpredictable, and we will increasingly live in a world of whole flocks of black swans. I’m not sure whether to be optimistic or not. My mental model, it seems, is failing me.