Everything You Think About the Future Is Wrong

Dumbify Podcast

Listen to the Dumbify Podcast on Spotify or Apple.


👋 Hey dumdums,

In December of 1999, I flew my wife to the Cayman Islands to watch civilization collapse. That was my actual plan. I was in my twenties, living in New York, absolutely convinced that Y2K was going to shut everything down. The grid. The banks. The planes. My reasoning was that if the world was ending, I should at least be wearing shorts on a Caribbean beach when it happened. I thought this made me a strategic thinker.

Then midnight hit. My cell phone still worked. The lights stayed on. My friend Josh Harris, a bona fide New York tech visionary, was partying in an underground bunker in lower Manhattan he'd built for the apocalypse. Both of us fully committed to the same cultural prediction. Both of us completely wrong.

And while we were cosplaying the end of the world, a nineteen-year-old named Shawn Fanning was in his dorm room building Napster, the thing that ACTUALLY changed everything. A tech visionary built a bunker. I booked a tropical vacation. And the real future was a teenager with a laptop who nobody was watching.


Culture is the worst prediction machine ever built.

Not because culture lacks imagination, but because culture has too much of it, and all of it is pointed at the wrong door.

We've been telling stories about robots killing us since Frankenstein. Thousands of stories about machines that turn evil. Essentially zero about machines that help you write a better email or find a recipe for Tuesday night dinner.

Meanwhile, millions of adults are paying a monthly subscription to watch a stranger silently organize a refrigerator on YouTube. We spent fifty years warning each other about Skynet, and the actual future of technology was a woman in Portland putting pickles into matching containers while two million people whispered 'so calming' in the comments.

❝ The thing that will matter the most in the future is most likely the thing nobody's paying attention to right now. ❞


Nobody Paid Attention to Texting

In the 1980s, engineers working on the GSM cellular standard noticed some spare bandwidth in the signaling channel. They figured they could squeeze in short text messages. Maybe useful for service alerts. Boring technical stuff. Telecom executives thought it was pointless. Why would anyone laboriously type on a phone keypad when you could just call? Matti Makkonen, the Finnish engineer often credited as the father of SMS, couldn't even get his colleagues excited about it. By 2012, humans were sending 8 trillion texts per year. The most transformative communication technology since the telephone was a feature that nobody wanted to build, nobody wanted to market, nobody was paying attention to, and nobody could imagine anyone actually using.


Nobody Paid Attention to Random People Posting "Facts" Online Until It Became Wikipedia

In January 2001, Jimmy Wales launched a website with a premise so obviously stupid that professional encyclopedists treated it like a joke. Let anyone, anywhere, edit any entry about anything. No credentials required. Robert McHenry, a former editor-in-chief of Encyclopedia Britannica, wrote a famous takedown comparing Wikipedia to a public restroom. Then in 2005, Nature published a study comparing 42 science articles from both sources. The average Wikipedia article had four errors. The average Britannica article had three. Britannica's 233 years of editorial expertise and 110 Nobel laureates lost, essentially, to a website where someone's username is TacoLord69. The professionals couldn't imagine this outcome because the professionals had already imagined what an encyclopedia was supposed to look like.


But Everybody is Paying Attention to Our Current Freakout: "The AI Panic"

In March 2023, the Future of Life Institute published an open letter calling for every AI lab on earth to pause development for six months. Thirty thousand signatures. Elon Musk. Steve Wozniak. Yuval Noah Harari. A second letter, just one sentence long, compared AI to nuclear weapons and pandemics. Nobody paused anything. The companies whose people signed the letters kept building. Meanwhile, AI researcher Timnit Gebru called it what it was. Fear-mongering that promotes a "futuristic, dystopian sci-fi scenario" while ignoring the actual problems. Bias in hiring algorithms. Surveillance. Misinformation. Boring, unglamorous, utterly real problems that don't make for good movies.


And look, I get it. The AI thing freaks me out too.

But we're standing in the middle of the panic storm, and the performance is deafening. While everyone was signing open letters about extinction, Google DeepMind's AlphaFold quietly predicted the structure of over 200 million proteins, covering nearly every protein known to science. That's not a movie plot. That's how you find cures for diseases nobody can treat yet. Google's flood forecasting AI is sending early warnings to communities across 80 countries, giving people days to evacuate instead of hours. Nobody's making a thriller about that. Accurate flood alerts don't trend. But they do save actual human lives, which I'm told is the point.


Why Do We Do This?

In 2015, Pascal Boyer at Washington University in St. Louis ran five studies on the "competence effect of threat information." Participants read two descriptions of the same situation. One included a potential danger, one didn't. Consistently, people rated the threat-sharing source as more competent. Not just more interesting. More competent. The person who tells you something might kill you sounds smarter than the person who tells you everything's probably fine.

Boyer and Blaine followed up in 2017 with transmission chain experiments. People passed information along a chain, like telephone. Threat-related information survived far better than neutral or even generally negative information. People chose to pass along the scary stuff, not because they were scared, but because sharing it made them feel like they were contributing something valuable. Culture doesn't predict the future. Culture performs fear for social credit. And we mistake that performance for wisdom.


Dumb Word of the Day

Prolepsis


Prolepsis (pro-LEP-sis) is a rhetorical and literary device where you represent a future event as if it's already happened, or already inevitable. From the Greek prolambanein, meaning "to anticipate." Every Terminator movie is prolepsis. Every "AI will destroy us" op-ed is prolepsis. We narrate the future as if it's already been written, which is a neat trick, because the actual future has a strict policy of not giving us the script in advance. Perfect for this week because our entire culture is one giant prolepsis machine, pre-writing futures that never arrive.

Let's use it in a sentence: "I've been practicing prolepsis my entire life. In 1999 I flew to the Cayman Islands because I'd already decided the apocalypse was happening. The only thing that actually collapsed was my ability to take myself seriously."


Your Weekly Dumb Challenge

The Anti-Prediction Log

Every morning for a week, write down three things you think will happen that day. At night, write down the three most interesting things that actually happened. Compare the lists. I guarantee the real list is weirder, smaller, and more meaningful than anything you predicted.


Thanks for getting dumb with me today

Stay curious, stay unprepared, and remember that the real future is too strange to be a movie. That's how you know it's real.

David 🎉


How did you like today's newsletter?


Want to tell me more? Drop a note below.