This episode is sponsored by Brilliant
When determining the biggest threats to ourselves, quite often you only need to find a mirror.
Doomsday scenarios for humanity are a common topic of discussion, and one we’ve looked
at here on SFIA before too, but so often those doomsday scenarios really only apply to a
humanity exclusively on Earth and at our technological level or lower.
As we’ve noted before, while an asteroid impact like what probably killed the dinosaurs
is a terrifying scenario, it’s only terrifying to a civilization lacking space travel.
To those with it, an asteroid approaching Earth is a cause for celebration, not worry,
as it represents a handy piece of matter we can mine or otherwise make use of without
paying the fuel bill to bring it here or pull it out of Earth’s gravity well.
When you can detect and predict an asteroid coming near Earth years in advance, and when
you’ve got a thriving orbital economy, you’re still going to race out to deal with the matter,
it’s just that the race is to get to it early so you can nudge it into a good, stable
orbit with the least fuel and everyone can start suing each other over who has dibs on
this new mountain of money.
Natural disasters just aren’t plausible threats to spacefaring colonial civilizations,
not in a grand apocalyptic sense.
You don’t have your whole population on just one world, or even necessarily one star system.
At the same time, not all disasters are natural, and a civilization doesn’t have to be exterminated
to be knocked over and halted from further expansion.
Hurricanes, floods, droughts, and earthquakes have toppled, often literally, many a civilization,
even if most of its inhabitants survived to rebuild, or their neighbors moved in to fill
the void once the dust settled.
So today we’ll go through the list of disasters, including some new ones we’ll have to worry
about, and place a loose timeline or technological breakthrough on each that either mitigates
or eliminates it, or that creates it.
After all many threats, like relativistic kill missiles or artificial intelligence,
are only a threat to you after you create them, or after someone does anyway.
Though we’ll mostly bypass alien threats today, as that’s a very lopsided threat
in general; if you’d like to learn more about that, check out our episode Invasive
Aliens from last week.
Let’s begin with some of those threats to Earth we have right now.
We already mentioned asteroids, and while I dislike ever predicting specific times,
we can consider that threat eliminated within one or two generations of whenever humanity’s
Gross Domestic Product, GDP, has to be broadened to a System Domestic Product, or SDP.
There are a ton of technologies that by themselves or in combination with one or two others suddenly
allow us to start producing stuff off Earth at a profit, everything from cheaper rockets
or smarter automation to better power sources like Fusion, our topic for next week.
But as soon as that industry and infrastructure reaches the point that economists start feeling
like Earth’s economy is not basically identical to humanity’s economy, it implies you’ve
developed to the point that stuff like asteroids is no longer a concern, both because you
can handle that threat to Earth and because threats to Earth are no longer synonymous
with threats to all humanity, as many people no longer live on or in orbit of Earth.
But at that point, you’ve moved on to creating new threats.
For instance if you have that much mining and building capacity off Earth, it’s a
pretty trivial exercise to set up solar shades, mirrors, and power satellites to deal with
issues like greenhouse gases and energy shortages.
Though you’d then have to worry about other problems from that build up, like Kessler
Syndrome, a cloud of orbital debris around Earth, or potentially too much waste heat
from sheer numbers if you began having trillions of folks living on Earth as an Ecumenopolis.
That level of energy use is also known as a Kardashev-1 civilization, one which uses all
the energy of a planet, and both Kessler Syndrome and heat buildup have their Kardashev-2
versions, a total englobement of a star, known as a Dyson Swarm, followed by a Kardashev-3
version, total englobement of every star in the galaxy.
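As a side note on scale, Carl Sagan proposed a continuous interpolation of the Kardashev scale, K = (log10 P − 6) / 10 with P in watts; a minimal sketch in Python, where the power figures are rough, assumed order-of-magnitude values, not precise data:

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous Kardashev interpolation: K = (log10(P) - 6) / 10,
    so 10^16 W is roughly K1, 10^26 W is K2, and 10^36 W is K3."""
    return (math.log10(power_watts) - 6) / 10

# Illustrative, assumed power levels:
humanity_today = 2e13      # ~20 TW of primary power use
earth_sunlight = 1.7e17    # total sunlight intercepted by Earth
sun_output     = 3.8e26    # solar luminosity, the Dyson Swarm budget
milky_way      = 4e37      # rough total stellar output of the galaxy

for name, p in [("today", humanity_today), ("planetary", earth_sunlight),
                ("Dyson Swarm", sun_output), ("galactic", milky_way)]:
    print(f"{name}: K = {kardashev_rating(p):.2f}")
```

By this formula humanity today sits around K 0.7, which is why the scale is usually treated as a smooth climb rather than three discrete club memberships.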
Dyson Swarms are probably prone to packing in as tight as they can while still getting rid
of heat, to minimize travel and signal lag, and indeed a Kardashev-3 civilization might try
to do the same, as we mentioned in Fleet of Stars, but heat is more of an impediment to
building an interstellar civilization than a disaster that topples one, as it controls how
tight and dense you can make things.
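To see why heat is the binding constraint, here is a back-of-envelope Stefan-Boltzmann sketch; the ecumenopolis figures (ten trillion people at ten kilowatts apiece) are purely illustrative assumptions:

```python
# Back-of-envelope estimate of how waste heat shifts a planet's
# equilibrium temperature, via the Stefan-Boltzmann law.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
EARTH_AREA = 5.1e14      # Earth's surface area, m^2
SOLAR_ABSORBED = 1.2e17  # sunlight absorbed after albedo, W (rough)

def equilibrium_temp(total_power_w: float) -> float:
    """Effective blackbody temperature radiating total_power_w
    from Earth's whole surface: T = (P / (sigma * A)) ** 0.25."""
    return (total_power_w / (SIGMA * EARTH_AREA)) ** 0.25

baseline = equilibrium_temp(SOLAR_ABSORBED)  # ~254 K, the classic figure
# Hypothetical ecumenopolis: 10 trillion people at 10 kW apiece = 1e17 W
crowded = equilibrium_temp(SOLAR_ABSORBED + 1e13 * 1e4)
print(f"baseline: {baseline:.0f} K, with waste heat: {crowded:.0f} K")
```

The fourth-root in the law is the point: waste heat comparable to absorbed sunlight only warms the planet by a few tens of kelvin, but that is already ruinous, and it grows relentlessly with population and power use.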
That attempt at packing in tight, same as a city, does leave you very vulnerable to
disasters though, and debris floating around hitting things and generating more debris
could not only close off a planet for a time, or wreak havoc in a solar system, but could
close down interstellar space lanes.
Similarly, a supernova or gamma ray burst is unlikely to kill off an entire system even
from a close neighbor, and there are ways to protect against those we’ve looked
at before, but they could cripple a system’s economy, or a whole region of space, for decades.
Fundamentally you are a lot less vulnerable to natural disasters with more technology
and when spread out to more places, but it’s worth remembering we can also be made more
vulnerable to natural disasters in some ways too.
A Coronal Mass Ejection from the Sun is no threat to a pre-electronic civilization for
instance, but is to one with electronics, and while you can definitely shield your worlds
and space habitats from those, each layer of shielding comes at a cost.
It costs more to make, more to maintain, and denies you those building materials for other uses.
So being out in space with a decent portion of population and industry hardly eliminates
natural threats, though it does mostly eliminate human-ending ones.
Add in all the technology and resources for managing and preventing such problems, like
say a global pandemic, and your vulnerability is basically gone.
It’s hard to transmit diseases in space, as everything has to pass through airlocks
and filtered environments and places which get visitors fairly infrequently, so even
a disease so virulent it infected and killed every human on Earth – which is basically
impossible for a natural virus – not only wouldn’t get all those little colonies off-world,
it quite likely wouldn’t get any of them at all, or only a couple.
To do that you’d have to tailor a virus to be ultra-infectious but not make anyone
sick until everyone was infected.
That’s quite a bio-engineering feat, certainly nothing natural, and such tactics leave you
very vulnerable to detection during implementation.
Someone is likely to notice a weird new virus in a few people, even a dormant one, and raise
an alarm, even if you didn’t get caught by other means, including a change of heart by
any of your many operatives, who would need to operate over many years to carry that
infection to every space colony, not just Earth.
Of course they probably wouldn’t get Earth either; humans are hygiene obsessed and likely
to only get more so, and I really wouldn’t be surprised – especially if anyone got
caught trying to make a super-virus – if we started building controlled artificial
environments down here on Earth a lot like we do in space.
And not entirely out of paranoia about pathogens.
For instance, there’s a lot of carbon dioxide in your home, and it’s not from fossil fuels,
it’s from you breathing, and it does make you dumber and more sluggish at levels that
aren’t too uncommon to find, especially in the winter time when people have their
windows shut and burn stuff for heat.
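A quick sketch of why that matters, using typical textbook figures (assumed here, not measured) for exhaled CO2 in a small sealed bedroom:

```python
# Rough model of CO2 accumulating in a closed, unventilated room.
# A resting adult exhales roughly 0.015 m^3 of CO2 per hour; real
# rooms leak, so this is a worst-case sketch, not a prediction.
CO2_PER_PERSON_M3_PER_HR = 0.015
OUTDOOR_PPM = 420  # approximate outdoor baseline

def co2_after_hours(room_m3: float, people: int, hours: float) -> float:
    """CO2 concentration in ppm, ignoring leaks and absorption."""
    added_fraction = (people * CO2_PER_PERSON_M3_PER_HR * hours) / room_m3
    return OUTDOOR_PPM + added_fraction * 1e6

# One sleeper in a 30 m^3 bedroom with the window shut:
for h in (0, 2, 4, 8):
    print(f"{h} h: {co2_after_hours(30, 1, h):.0f} ppm")
```

Even with generous leakage a shut room can plausibly sit in the range where studies have reported measurable cognitive effects, which is the case for treating home air handling more like a space station’s.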
I’d wager that inside the next couple decades we’ll start seeing a lot more air filtration
in homes targeting not just allergens or radon or carbon monoxide but carbon dioxide too,
same as a space station would have.
I’d also bet on a lot more ultraviolet light sources being introduced into homes for their
sterilizing properties, and homes in general getting bigger, with more entryway and lobby
features, more like an airlock.
That would be even more the case, for disease monitoring and sterilization, if people were
seriously tinkering with viruses and bacteria, and there are other motives for that too,
like someplace for the delivery drones to drop off your new stuff safely and securely.
I’d also bet on more and more folks carrying hand sanitizer around with them, and an increase
in health monitoring apps and hardware that not only gave you alerts the moment you showed
any symptoms of infection but plotted those geographically, so doctors and communities were
seeing outbreaks as soon as a few cases of sniffles popped up, not when several people
came in to see the doctor, let alone arrived at the morgue.
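That kind of community-level early warning doesn’t require anything exotic; it could be as simple as a z-score check on daily symptom counts. A toy sketch, with made-up data and an arbitrary threshold:

```python
from statistics import mean, stdev

def outbreak_alert(history: list[int], today: int, z_cutoff: float = 3.0) -> bool:
    """Flag today's count if it exceeds the historical mean by
    z_cutoff standard deviations (a classic z-score detector)."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against a flat history
    return (today - mu) / sigma > z_cutoff

# Daily counts of 'sniffles' reports from one district's wearables:
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(outbreak_alert(baseline, 6))   # an ordinary day
print(outbreak_alert(baseline, 19))  # a spike worth investigating
```

Real syndromic surveillance systems are far more sophisticated, but the core idea is exactly this: enough baseline data makes even a handful of extra sniffles statistically loud.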
Note that we’re not even assuming any advanced technology yet.
Just very natural extensions of where things are headed.
A decade from now most people are going to be very used to being able to pull up a big
log of what their heart rate, blood pressure, respiration, temperature and so on have been
for any given minute of the year, or their whole life.
It’s going to start having features like noticing when you coughed or sneezed or were
stuffy or clearing your throat; that’s not exactly advanced sensor capability.
It’s going to have all that data for billions of people, and that’s the kind of sample
set that lets you make some very accurate predictions, often about rather surprising things.
It’s also very creepy, so we often avoid thinking about it, but this approach to prevention
is fairly critical for seeing the arsenal available to advanced civilizations for predicting
and preventing threats, not just viruses.
Humans are paranoid survival machines; we will generally perpetually move to lower risk
where there’s not a compelling motive not to, and knowledge is critical to that.
Not just scientific knowledge but patterns of behavior and logs of individual data.
All that data and analysis capability is a temptation, not just to bad actors and would-be
dictators, but to us, exactly because societies and individuals can benefit so much from
such powerful predictive capability.
The key thing is that when a civilization is spread out over big distances and cautious by
nature, asteroids, supernovae, and viruses – even tailored ones – are NOT the big risk.
Rather it’s stuff we voluntarily create and implement, like accidentally turning yourself
into a civilization that would make a dystopian police state shudder at the lack of
personal privacy.
Or the engines used to protect privacy while taking advantage of the good aspects of
monitoring, like some all-seeing artificial intelligence that is even more dangerous than
the typical super-intelligent AI, as it was specifically engineered to be good at monitoring
and predicting us.
Your protection can become your new threat.
Trying to deploy a terrorist device big enough to get all of humanity, or numerous,
coordinated, and covert enough to simultaneously hit every colony, is not very realistic,
particularly as most techs that make that easier also make defense easier; it doesn’t
matter if you can gene-tailor a super-lethal virus if every local hospital has the identical
tech to whip up countermeasures.
What does matter for spread-out civilizations like this is what they not only willingly
permit but actively demand.
It’s a lot easier to spread a virus, literal or metaphorical, if people not only let you
inject them, but offer to pay you for it and get angry their community is last on the list.
This is not limited to stuff like privacy, incidentally, any more than to viruses; that’s
just the easy example.
Folks ask me a lot what sort of society I think we’ll have in the future and I tend
to say just about all of them.
It’s not just that I try to avoid endorsing X or Y sociopolitical system on the channel,
or that I think that as we get more numerous, prosperous, and spread out we’d be able to
experiment with many different systems at once; it’s because that sort of diversification
is your best protection against global threats, or galactic ones.
Something like global warming is like asteroids: not a threat to all of humanity past the
point where a modest chunk of humanity is no longer on a single planet, but you could still
get scenarios where either could threaten an entire system.
For instance, there is a lot of junk in our outer system, and an occasional single rock
might come in and threaten a world, but something big passing through that region could hurl
millions of asteroids and comets into the inner system, shotgunning the whole place and
exceeding the capacity of defenses tailored to the occasional lone asteroid.
That’s not terribly improbable either; the galaxy is full of rogue planets and dead
stars meandering around that could pass through the halo of debris most solar systems have
and cause that cataclysm, and indeed it probably happens a lot.
Similarly, while humanity would survive a climatic ruin on Earth if we had other colonies,
and could easily manage that problem with the solar shades and mirrors that colony-building
infrastructure makes possible anyway, if our primary approach to settling space is
terraforming planets, then each one of those is vulnerable to potential disaster or
sabotage if it relies on a standardized process.
The Death Star’s silly weakness of an exhaust port is legendary, but as a lot of folks have
noted, you can’t just cover an exhaust port over or cram stuff in it to act as protection,
such is the nature of an exhaust port, things clogging or kinking the shaft either make
it back up and explode or get expelled like a bullet.
And while that was a bit silly, those are exactly the kinds of ‘oops’ weaknesses
complex things have, as proven by the vast number of tech bugs and crashes we get all the time.
If you’ve got some standardized process for terraforming planets, you’ve got yourself
some hole in there that could be exploited to disproportionately screw them up.
Of course a non-standard process probably has even more holes, but they are going to be
different and hard to exploit en masse.
Terraformed planets are not natural; they will need constant maintenance, and cylinder
habitats the same, and a standard process of manufacture or maintenance – and we’ll
talk about this more in a couple weeks – risks creating a jugular vein, a weakness everyone
knows about and everyone has.
This gives you two major survival strategies.
First you can constantly seek to improve and fortify those weaknesses, which is certainly
a good idea but can eliminate a lot of the advantages of standardization if you’re
devoting huge resources to covering over that weakness.
Second, of course, is diversification, and the two are not necessarily exclusive, particularly
in a very big civilization, where you can have a hundred different models, like a car,
each enjoying a lot of the advantages of standardization.
I tend to think diversification will be a preferred strategy though because I think
we’ll naturally tend to drift that way, everybody trying to do their own thing.
This does give you a new extraterrestrial threat though.
I mentioned the notion of causing system-wide or even galaxy-wide Kessler Syndrome, and
also that a rogue planet could cause a deluge of asteroids at a solar system, but obviously
so could a colony living beyond that region who just hated everybody else.
Not that they’d try that trick – they’d be caught before it was implemented, as nudging
asteroids around isn’t even vaguely subtle or covert, let alone nudging a rogue planet –
but they could use RKMs, Relativistic Kill Missiles, which also aren’t super-stealthy
but are a lot more so than asteroids.
As a reminder, by the way, since stealth in space – or rather the lack thereof – comes
up a lot: it is NOT the weapon moving through space that gives it away, though it can’t
be completely hidden.
It’s the launch of said weapon, or any attempt to alter its course.
An RKM is virtually invisible while cruising, as is a micro black hole, and an RKM the
size of a grain silo is quite capable of delivering orders of magnitude more punch than our
entire modern atomic arsenal.
It is not invisible, but it is darn hard to see, except for when it launches, since you
have to expend at least as much energy as it will deliver on impact to accelerate it.
And that is obviously very visible, likely from light-years away.
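To put rough numbers on that claim, here is a sketch of the relativistic kinetic energy involved; the 100-tonne mass, 90%-of-lightspeed cruise, and 2,500-megaton arsenal figure are all illustrative assumptions:

```python
import math

C = 299_792_458.0         # speed of light, m/s
MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT

def kinetic_energy_j(mass_kg: float, speed_fraction_c: float) -> float:
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_c ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# A grain-silo-sized slug: assume 100 tonnes at 90% of light speed.
ke = kinetic_energy_j(1e5, 0.9)
arsenal = 2500 * MEGATON_TNT_J  # rough global nuclear arsenal yield
print(f"RKM: {ke / MEGATON_TNT_J:.2e} Mt TNT, "
      f"~{ke / arsenal:.0f}x the assumed arsenal")
```

Under these assumptions the single slug carries millions of megatons, roughly a thousand times the assumed arsenal, and by energy conservation the launcher had to shine at least that brightly to get it up to speed.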
As to micro black holes, as we noted in Weaponizing Black Holes a couple months back, small
black holes are no threat to any planet or station, as they will fly through most anything
like a ghost… except another black hole.
Two colliding together is a devastating thing.
Which makes them a minimal threat to any civilization not using black holes for power
generation, which unfortunately would not include anyone you’d be using them against
anyway, since any civilization running on solar or fusion is going to lose a war against
one using black holes simply because they have so much more power than you, and could as
easily use that to power RKMs instead, or just their industries.
If all your civilization lives around black holes, natural or artificial, for power generation
or for artificial gravity, you are incredibly powerful, but also vulnerable to those black
holes being attacked.
They are, again, a weapon which is only a threat to advanced civilizations, but many
of our examples today are the same.
Only a high-tech civilization can make custom-designed super-viruses, and the technology for that
also provides the pathway to defense against it.
High-tech civilizations might use a lot of information warfare, propaganda, and brainwashing
too, but are likely to also gain defenses from the same technologies and techniques being
developed, though again, diversification can help with that.
And also, again, it can breed new enemies.
All those distant eggs in other baskets aren’t hatching out new chickens; new, different
stuff will be popping out, because it wasn’t the chicken or the egg that came first, it
was some common bird ancestor emerging, or some even more distant ancestor that laid
the first external egg.
This of course brings us to the most obvious threat, things which are not human but which
we made and which are intelligent.
This is not limited to classic computer artificial intelligence; indeed as we’ve discussed
before, that is really a rather vague and useless term in most futuristic discussions.
Intelligent products of humanity might include cyborgs, transhumans, genetically engineered
supermen, uploaded human intelligences, computers that were modeled on the human mind and consider
themselves human, ones that learned on their own skynet style, hive minds, cloned minds,
distributed intelligences, networked intelligences, uplifted animals – super smart chimpanzees
or dogs – paperclip maximizers, grey goo or terraforming machines gone sentient and
rogue, and every possible combination thereof.
The default concern is a Singleton, an individual and specific mind that is just unopposable
by everyone else, though realistically that’s probably more of a variation of the
Frankenstein Complex, associated with Moore’s Law and Technological Singularity concerns;
see that episode for why that’s probably not as big a concern as portrayed.
This isn’t limited to hyper-intelligent computers though; the Mule from Isaac Asimov's
Foundation series, who could control people’s minds, would be a type of limited Singleton,
as would a mega-corporation with a monopoly on some critical resource that acted with
one voice and could cut off access.
The Spacing Guild from Frank Herbert’s Dune would be an example, with a monopoly on space
travel, or later in that same series, the Fremen who controlled access to the Spice
Melange that permitted space travel and life extension, or either Paul Atreides or his
son Leto II who had that control plus could predict the future.
Short form though, you’re unlikely to have a single thing like that emerge in a vacuum,
natural or technological, except as a Black Swan, where nobody could see it coming even
if it was obvious in hindsight; but super-intelligent artificial intelligence is not a
Black Swan, you can prepare for it, and it isn’t likely to be truly singular either.
If one gets loose and is far smarter than any normal human, it’s a threat, but you’ve
got all the other improvements lying around too, which might not be individually its equal
but probably collectively could take it on.
Google goes all Skynet on us but gets dogpiled by all the other cyborgs, hive minds, defense
computers and superintelligent dogs.
Particularly as we’re not stupid and would keep a lot of watchdogs on a leash somewhere
against the eventuality.
This is essentially the same logic for why one-on-one alien invasion scenarios don’t
work, as we looked at last week; there are too many other actors in play who won’t just
sit on the sidelines.
So the Singleton threat, one giant against everyone else, only works in very specialized
scenarios where it can emerge and grow to be a Singleton too quickly or inevitably to
stop or be rivaled by anything else growing at the same time.
Of course such a group, even if indifferent or benevolent toward normal people, is still
a threat to classic humanity, as when there are Giants in the Playground, even if they
don’t accidentally or intentionally crush you, they can crush your will to live and
sense of purpose.
It’s interesting that we mentioned earlier that few disasters could wipe out our species
even now but could easily topple civilizations, whereas in the future you could get things
that wiped out our species but not our civilization.
We’re not Greek Gods, we don’t eat our kids, and the future isn’t likely to see
humans wiped out by cyborgs or genetically engineered supermen, or the two fighting each
other for dominance among our ashes… for one thing, the cyborgs would probably win quite handily.
Rather you’d expect whole ranges of degrees and types of both to start popping up, folks
with a little cyborging or gene tweaking or a lot or a lot of both even, or many of the
other alternatives we mentioned.
Fundamentally though, it’s not the big obvious cataclysms that threaten us going forward,
but more of the existential ones, like how we adapt to the emergence of a lot of other
not-quite human or not even vaguely human intelligences, or how we manage privacy concerns
while taking advantage of the data, or what a super-prosperous society with lots of robots
doing the labor does to feel like it has a sense of purpose, or if it decides free will
is an illusion.
Or the reverse: how we breed new problems in attempts to avoid or control going down such paths.
A civilization afraid of reward-hacking, like alterations to the brain that let you produce
feelings like happiness with the flip of a switch, might crack down on that like it was
a drug, and maybe cultivate a society that frowned on any easy life: no safety gear when
mountain climbing, because it lets you experience the accomplishment without the risk.
No cyborgs or genetically engineered people, so no prosthetics for amputees, and even
minor mutations are sterilized.
There are plenty of examples of going overboard in either direction in science fiction of
course; hopefully we show better judgment, though it does highlight what we all know
already: the biggest danger to humanity, now and probably in the future too, is humanity itself.
In order to solve problems facing us now and in the future, you need to understand them,
and the science behind them, and be practiced at problem-solving.
This is true whether you’re trying to fix a leaky pipe or prevent an asteroid from
hitting your planet.
The more you know and the more practiced you are at applying it to new problems, the more
versatile you are at all problem solving.
It’s also a lot of fun, because that’s how we learn best, and that’s where our
friends at Brilliant excel.
Their online courses and daily challenges let you enhance your knowledge of math and
science with easy to learn interactive methods from the comfort of your own home, at your
own pace, and have fun while you’re doing it.
To make it even easier, Brilliant now lets you download any of their dozens of interactive
courses through the mobile app, and you'll be able to solve fascinating problems in math,
science, and computer science no matter where you are, or how spotty your internet connection.
If you’d like to learn more science, math, and computer science, go to brilliant.org/IsaacArthur
and sign up for free.
And also, the first 200 people that go to that link will get 20% off the annual Premium
subscription, so you can solve all the daily challenges in the archives and access dozens
of problem solving courses.
So as mentioned, next week we’ll be looking at Fusion Power, a technology that if we get
it working will open a lot of promising new doors and slam the door shut on many threats.
We’ll discuss the problems and proposed solutions in getting fusion working, and look
at some of the doors it opens, like cheap space travel and megastructures we could only
dream of building otherwise.
The week after that we’ll discuss the hidden underside of all those wonderful megastructures
we look at on the channel, which is how you go about cleaning, repairing, and maintaining
your habitats and space travel lanes, in Space Janitors and Megastructure Maintenance.
For alerts when those and other episodes come out, make sure to subscribe to the channel
and hit the notifications bell.
You can also support future episodes by donating to the channel on Patreon, or our website.
Until next time, thanks for watching, and have a great week!