
Threats to Interplanetary & Interstellar Civilizations


This episode is sponsored by Brilliant

When determining what the biggest threats

to ourselves are, quite often you only need to find a mirror.

Doomsday scenarios for humanity are a common topic of discussion, and one we've looked at here on SFIA before too, but so often those doomsday scenarios really only apply to a humanity exclusively on Earth and at our technological level or lower.

As we've noted before, while an asteroid impact like what probably killed the dinosaurs is a terrifying scenario, it's only terrifying to a civilization lacking space travel.

To those with it, an asteroid approaching Earth is a cause for celebration, not worry,

as it represents a handy piece of matter we can mine or otherwise make use of without

paying the fuel bill to bring it here or pull it out of Earth's gravity well.

When you can detect and predict an asteroid coming near Earth years in advance, and when

you've got a thriving orbital economy, you're still going to race out to deal with the matter; it's just that the race is to get to it early so you can nudge it into a good, stable

orbit with the least fuel and everyone can start suing each other over who has dibs on

this new mountain of money.
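To put rough numbers on why arriving early matters: to first order, a small velocity change displaces the rock by delta-v times lead time. The figures below are illustrative assumptions, not from the episode, and real deflections get further amplified by orbital mechanics.

```python
# First-order deflection arithmetic: displacement ~ delta_v * lead_time.
# All numbers are illustrative; real missions also exploit orbital
# mechanics, so this is a conservative back-of-envelope sketch.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 s

def displacement_km(delta_v_mm_per_s: float, lead_years: float) -> float:
    """Along-track displacement, in km, from a small early nudge."""
    delta_v = delta_v_mm_per_s * 1e-3            # mm/s -> m/s
    return delta_v * lead_years * SECONDS_PER_YEAR / 1e3

# A 10 mm/s nudge applied 10 years out shifts the rock ~3,150 km;
# the same nudge applied 1 year out manages only a tenth of that.
print(displacement_km(10, 10))
print(displacement_km(10, 1))
```

The takeaway is that the cost of moving an asteroid scales inversely with how early you reach it, which is why detection and a thriving orbital economy turn the problem into a race for salvage rights rather than a crisis.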

Natural disasters just aren't plausible threats to spacefaring colonial civilizations,

not in a grand apocalyptic sense.

You don't have your whole population on just one world or even necessarily one star system.


At the same time, not all disasters are natural, and a civilization doesn't have to be exterminated

to be knocked over and halted from further expansion.

Hurricanes, floods, droughts, and earthquakes have toppled, often literally, many a civilization,

even if most of its inhabitants survived to rebuild, or their neighbors moved in to fill

the void once the dust settled.

So today we'll go through the list of disasters, including some new ones we'll have to worry

about, and place a loose timeline or technological breakthrough that either mitigates or eliminates

the threat.

Or that creates it.

After all, many threats, like relativistic kill missiles or artificial intelligence,

are only a threat to you after you create them, or after someone does anyway.

We'll mostly bypass alien threats today, as that's a very lopsided threat in general, but if you'd like to learn more about that, check out our episode Invasive Aliens from last week.

Let's begin with some of those threats to Earth we have right now.

We already mentioned asteroids, and while I dislike ever predicting specific times,

we can consider that threat eliminated within one or two generations of whenever humanity's Gross Domestic Product, GDP, has to be broadened to SDP, or System Domestic Product.

There are a ton of technologies that by themselves or in combination with one or two others suddenly

allow us to start producing stuff off Earth at a profit, everything from cheaper rockets

or smarter automation to better power sources like Fusion, our topic for next week.

But as soon as that industry and infrastructure reaches the point that economists start feeling

like Earth's economy is not basically identical to humanity's economy, it implies you've

developed to the point that stuff like asteroids are no longer a concern, both because you

can handle that threat to Earth and because threats to Earth are no longer synonymous

with threats to all humanity, as many people don't live on or in orbit of Earth.

But at that point, you've moved on to creating new threats.

For instance, if you have that much mining and building capacity off Earth, it's a

pretty trivial exercise to set up solar shades, mirrors, and power satellites to deal with

issues like greenhouse gases and energy shortages.

Though you'd then have to worry about other problems from that buildup, like Kessler

Syndrome, a cloud of orbital debris around Earth, or potentially too much waste heat

from sheer numbers if you began having trillions of folks living on Earth as an Ecumenopolis.

That is also known as a Kardashev-1 Civilization, one which uses all the energy of a planet,

and both Kessler Syndrome and Heat buildup have their Kardashev-2 versions, a total englobement

of a solar system, known as a Dyson Swarm, followed by a Kardashev-3 version, total englobement

of every star in the galaxy.
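For a sense of scale, Carl Sagan's interpolated version of the Kardashev scale rates a civilization by its power use, K = (log10(P) − 6) / 10, which puts K1 around 10^16 W and K2 around 10^26 W. A quick sketch, where the power figures are round approximations:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolated Kardashev rating: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6.0) / 10.0

print(round(kardashev(2e13), 2))   # humanity today (~20 TW): ~0.73
print(round(kardashev(1.7e17), 2)) # all sunlight hitting Earth: ~1.12
print(round(kardashev(3.8e26), 2)) # full solar output (Dyson Swarm): ~2.06
```

Each whole Kardashev step is ten billion times the power of the last, which is why the waste heat and debris problems recur at every rung rather than being solved once.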

Dyson Swarms are probably prone to packing in as tightly as heat dissipation allows, to

minimize travel and signal lag, and indeed a Kardashev-3 Civilization might try to do

the same, as we mentioned in Fleet of Stars, but heat is more of an impediment to building

an interstellar civilization than a disaster that topples one, as it controls how tight

and dense you can make things.

That attempt at packing in tight, same as a city, does leave you very vulnerable to

disasters though, and debris floating around hitting things and generating more debris

could not only close off a planet for a time, or wreak havoc in a solar system, but could

close down interstellar space lanes.

Similarly, a supernova or gamma ray burst is unlikely to kill off an entire system even if it was a close neighbor, and there are ways to protect against that we've looked at before, but they could cripple a system economy, or whole region of space, for decades

or longer.

Fundamentally you are a lot less vulnerable to natural disasters with more technology

and when spread out to more places, but it's worth remembering we can also be made more

vulnerable to natural disasters in some ways too.

A Coronal Mass Ejection from the Sun is no threat to a pre-electronic civilization for

instance, but is to one with electronics, and while you can definitely shield your worlds

and space habitats from those, each layer of shielding comes at a cost.

It costs more to make, more to maintain, and denies you those building materials for other projects.


So being out in space with a decent portion of population and industry hardly eliminates

natural threats, though it does mostly eliminate human-ending ones.

With all the technology and resources for managing and preventing such problems, like

say a global pandemic, your vulnerability is basically gone.

It's hard to transmit diseases in space, as everything has to pass through airlocks and filtered environments and places which get visitors fairly infrequently, so even a disease so virulent it infected and killed every human on Earth, which is basically impossible for a natural virus, not only wouldn't get all those little colonies off-world, it quite likely wouldn't get any at all, or just a couple.

To do that you'd have to tailor a virus to be ultra-infectious but not make anyone

sick until everyone was infected.

That's quite a bio-engineering feat, certainly nothing natural, and such tactics leave you

very vulnerable to detection during implementation.

Someone is likely to notice a weird new virus in a few people, even dormant, and raise an

alarm, even if you didn't get caught by other means, including a change of heart by

any of your many operatives who need to operate over many years to accomplish that infectious

mission to every space colony, not just Earth.

Of course they probably wouldn't get Earth either; humans are hygiene-obsessed and likely to only get more so, and I really wouldn't be surprised, especially if anyone got caught trying to make a super-virus, if we started building controlled artificial

environments down here on Earth a lot like we do in space.

And not entirely out of paranoia about pathogens.

For instance, there's a lot of carbon dioxide in your home, and it's not from fossil fuels, it's from you breathing, and it does make you dumber and more sluggish at levels that aren't too uncommon to find, especially in the winter time when people have their

windows shut and burn stuff for heat.

I'd wager that inside the next couple decades we'll start seeing a lot more air filtration

in homes targeting not just allergens or radon or carbon monoxide but carbon dioxide too,

same as a space station would have.

I'd also bet on a lot more ultraviolet light sources being introduced into homes for their

sterilizing properties, and homes in general getting bigger with more entryway and lobby

features, more like an airlock.

That would be even more the case for disease monitoring and sterilization if people were seriously tinkering with viruses and bacteria too, and there are other motives for that, like some place for the delivery drones to drop off your new stuff safely and securely.

I'd also bet on more and more folks carrying hand sanitizer around with them and an increase

in health monitoring apps and hardware that not only gave you alerts the moment you showed

any symptoms of infection but plotted those all around so doctors and communities were

seeing outbreaks as soon as a few cases of sniffles popped up, not when several people

came in to see the doctor let alone arrived at the morgue.

Note that we're not even assuming any advanced technology yet.

Just very natural extensions of where things are headed.

A decade from now most people are going to be very used to being able to pull up a big

log of what their heart rate, blood pressure, respiration, temperature, and so on have been

for any given minute out of the year, or their whole life.

It's going to start having features like noticing when you coughed or sneezed or were stuffy or clearing your throat; that's not exactly advanced sensor capability.

It's going to have all that data for billions of people, and that's the kind of sample

set that lets you make some very accurate predictions and often about rather surprisingly

unrelated things.

It's also very creepy, so we often avoid thinking about it, but this approach to prevention

is fairly critical for seeing the arsenal available to advanced civilizations for predicting

and preventing threats, not just viruses.

Humans are paranoid survival machines; we will generally perpetually move to lower risk where there's not a compelling motive not to, and knowledge is critical to that.

Not just scientific knowledge but patterns of behavior and logs of individual data.

All that data and analysis capability is a temptation, not just to bad actors and would-be dictators, but to us, exactly because societies and individuals can benefit so much from such

powerful predictive capability.

Key thing: when a civilization is spread out over big distances and cautious by nature, asteroids, supernovae, and viruses, even tailored ones, are NOT the big risk.

Rather it's stuff we voluntarily create and implement, like accidentally turning yourself into a

civilization that would make a dystopian police state shudder at the lack of personal privacy.

Or the engines used to protect privacy while taking advantage of the good aspects of monitoring,

like some all-seeing artificial intelligence that is even more dangerous than the typical

super-intelligent AI as it was specifically engineered to be good at monitoring and predicting

human behavior.

Your protection can become your new threat.

Trying to deploy a terrorist device big enough to get all of humanity, or numerous, coordinated,

and covert enough to simultaneously hit every colony, is not very realistic.

Particularly as most techs that make that easier also make defense easier; it doesn't matter if you can gene-tailor a super-lethal virus if every local hospital has the identical

tech to whip up countermeasures.

What does matter for spread-out civilizations like this is what they not only willingly permit but actively demand.

It's a lot easier to spread a virus, literal or metaphorical, if people not only let you inject them, but offer to pay you for it and get angry that their community is last on the list.

This is not limited to stuff like privacy, incidentally, any more than to viruses; that's

just the easy example.

Folks ask me a lot what sort of society I think we'll have in the future, and I tend

to say just about all of them.

It's not just that I try to avoid endorsing X or Y sociopolitical system on the channel or think that as we get more numerous, prosperous, and spread out we'd be able to experiment with many different systems at once; it's because that sort of diversification is your

best protection against global threats, or galactic threats.

Something like Global Warming is like asteroids, not a threat to humanity after any point where

a modest chunk of humanity isn't on a single planet, but you could still get scenarios

where either could threaten an entire system.

For instance, there is a lot of junk in our outer system, and an occasional single rock

might come in and threaten a world, but something big passing through that region could hurl

millions of asteroids and comets into the inner system, shotgunning the whole place and exceeding

the capacity of your defenses tailored to the occasional lone asteroid.

That's not terribly improbable either; the galaxy is full of rogue planets and dead stars meandering around that could pass through the halo of debris most solar systems have and cause that cataclysm; indeed it probably happens a lot.

Similarly, while humanity would survive a climatic ruin on Earth if we had other colonies, and could easily manage that problem by producing solar shades and mirrors with the infrastructure needed to build such colonies in the first place, if our primary approach to settling space is terraforming planets, then each one of those is vulnerable to potential disaster or sabotage if it uses a standardized process.

The Death Star's silly weakness of an exhaust port is legendary, but as a lot of folks have noted, you can't just cover an exhaust port over or cram stuff in it to act as protection,

such is the nature of an exhaust port, things clogging or kinking the shaft either make

it back up and explode or get expelled like a bullet.

And while that was a bit silly, those are exactly the kinds of "oops" weaknesses

complex things have, as proven by the vast number of tech bugs and crashes we get all

the time.

If you've got some standardized process for terraforming planets, you've got yourself

some hole in there that could be exploited to disproportionately screw them up.

Of course a non-standard process probably has even more, but they are going to be different

and hard to exploit en masse.

Terraformed planets are not natural; they will need constant maintenance, and cylinder habitats the same, and a standard process of manufacture or maintenance (and we'll talk about this more in a couple weeks) risks creating a jugular vein, a weakness everyone

knows about and everyone has.

This gives you two major survival strategies.

First you can constantly seek to improve and fortify those weaknesses, which is certainly

a good idea but can eliminate a lot of the advantages of standardization if you're

devoting huge resources to covering over that weakness.

Second, of course, is diversification, and the two are not necessarily exclusive, particularly

in a very big civilization, where you can have a hundred different models, like a car,

each enjoying a lot of the advantages of standardization.

I tend to think diversification will be a preferred strategy, though, because I think we'll naturally tend to drift that way, everybody trying to do their own thing.

This does give you the extraterrestrial threat though.

I mentioned the notion of causing system-wide or even galaxy-wide Kessler Syndrome, and

also that a rogue planet could cause a deluge of asteroids at a solar system, but obviously

so could a colony living beyond that region who just hated everybody else.

Not that they'd try that trick; they'd be caught before it was implemented, as nudging asteroids around isn't even vaguely subtle or covert, let alone nudging a rogue planet, but they could use RKMs, Relativistic Kill Missiles, which also aren't super-stealthy

but a lot more so than asteroids.

As a reminder, by the way, since stealth in space, or rather the lack thereof, comes up a lot: it is NOT the weapon moving through space that isn't stealthy, though they can't be completely hidden. It's the launch of said weapon, or any attempt to alter its course.

An RKM is virtually invisible while cruising, so is a micro-black hole, and some RKM the

size of a grain silo is quite capable of delivering orders of magnitude more punch than our entire

modern atomic arsenal.

It is not invisible but it is darn hard to see, except for when it launches, and you

have to expend at least as much energy as it will deliver on impact to accelerate it.

And that is obviously very visible, and likely would be light years away.
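That energy claim is easy to sanity-check with the relativistic kinetic energy formula, KE = (γ − 1)mc². The mass, speed, and arsenal size below are illustrative assumptions, not figures from the episode: a roughly 1,000-tonne "grain silo" at 0.9c, against a ballpark 2,500-megaton global nuclear arsenal.

```python
# Relativistic kinetic energy of an RKM vs. the global nuclear arsenal.
# Projectile mass, speed, and arsenal size are illustrative assumptions.
C = 299_792_458.0              # speed of light, m/s
TNT_J_PER_MEGATON = 4.184e15   # joules per megaton of TNT

def rkm_energy(mass_kg: float, beta: float) -> float:
    """Kinetic energy (J) at velocity beta*c: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return (gamma - 1.0) * mass_kg * C**2

ke = rkm_energy(1e6, 0.9)              # ~1,000 tonnes at 0.9c
arsenal = 2_500 * TNT_J_PER_MEGATON    # ~2,500 Mt in joules
print(f"{ke:.2e} J, ~{ke / arsenal:.0f}x the arsenal")
```

Under these assumptions the impact energy comes out around 10^23 joules, roughly four orders of magnitude beyond the arsenal figure, which is also the minimum energy the attacker must visibly expend at launch.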

As to micro-black holes, as we noted in Weaponizing Black Holes a couple months back, small black holes are no threat to any planet or station, as they will fly through most anything like a ghost, except another black hole.

Two colliding together is a devastating thing.

Which makes them a minimal threat to any civilization not using black holes for power generation,

which unfortunately would not include anyone you'd be using those against anyway, since

any civilization running on solar or fusion is going to lose a war against one using black

holes simply because they have so much more power than you, they could as easily use that

to power RKMs instead, or just power their industries.

If all your civilization lives around black holes, natural or artificial, for power generation

or for artificial gravity, you are incredibly powerful, but also vulnerable to those black

holes being attacked.

They are, again, a weapon which is only a threat to advanced civilizations, but many

of our examples today are the same.

Only a high-tech civilization can make custom-designed super-viruses, and the technology for that

also provides the pathway to defense against it.

High-tech civilizations might use a lot of information warfare, propaganda, and brainwashing

too, but are likely to also gain defenses from the same technology and techniques being

developed, though again, diversification can help with that.

And also, again, it can breed new enemies.

All those distant eggs in other baskets aren't hatching out new chickens; new, different

stuff will be popping out, because it wasnt the chicken or the egg that came first, it

was some common bird ancestor emerging, or some even more distant ancestor that laid

the first external egg.

This of course brings us to the most obvious threat, things which are not human but which

we made and which are intelligent.

This is not limited to classic computer artificial intelligence; indeed, as we've discussed

before, this is really a rather vague and useless term in most futuristic discussions.

Intelligent products of humanity might include cyborgs, transhumans, genetically engineered

supermen, uploaded human intelligences, computers that were modeled on the human mind and consider

themselves human, ones that learned on their own skynet style, hive minds, cloned minds,

distributed intelligences, networked intelligences, uplifted animals (super smart chimpanzees or dogs), paperclip maximizers, grey goo or terraforming machines gone sentient and

rogue, and every possible combination thereof.

The default concern is a Singleton, an individual and specific mind that is just unopposable by everyone else, though realistically that's probably more a variation of the Frankenstein Complex associated with Moore's Law and Technological Singularity concerns; see that episode for why that's probably not as big a concern as portrayed.

This isn't limited to hyper-intelligent computers though; the Mule from Isaac Asimov's Foundation series, who could control people's minds, would be a type of limited Singleton,

as would a mega-corporation with a monopoly on some critical resource that acted with

one voice and could cut off access.

The Spacing Guild from Frank Herbert's Dune would be an example, with a monopoly on space

travel, or later in that same series, the Fremen who controlled access to the Spice

Melange that permitted space travel and life extension, or either Paul Atreides or his

son Leto II who had that control plus could predict the future.

Short form though, you're unlikely to have a single thing like that emerge in a vacuum, natural or technological, except as a Black Swan where nobody could see it coming even if it was obvious in hindsight; but super-intelligent artificial intelligence is not a Black Swan, you can prepare for it, and it isn't likely to be truly singular either.

If one gets loose and is far smarter than any normal human, it's a threat, but you've got all the other improvements lying around too, which might not be individually its equal

but probably collectively could take it on.

Google goes all Skynet on us but gets dogpiled by all the other cyborgs, hive minds, defense

computers and superintelligent dogs.

Particularly as we're not stupid and would keep a lot of watchdogs on a leash somewhere

against the eventuality.

This is essentially the same logic for why one-on-one alien invasion scenarios don't work, as we looked at last week: there are too many other actors in play to be contended with who won't just sit on the sidelines.

So the Singleton threat, one giant against everyone else, only works in very specialized

scenarios where it can emerge and grow to be a Singleton too quickly or inevitably to

stop or be rivaled by anything else growing at the same time.

Of course as a group, even if indifferent or benevolent to normal people, that is still

a threat to classic humanity as when there are Giants in the Playground, even if they

don't accidentally or intentionally crush you, they can crush your will to live and

sense of purpose.

It's interesting that we mentioned earlier that few disasters could wipe out our species

even now but could easily topple civilizations, whereas in the future you could get things

that wiped out our species but not our civilization.

We're not Greek Gods, we don't eat our kids, and the future isn't likely to see humans wiped out by cyborgs or genetically engineered supermen, or the two fighting each other for dominance among our ashes; for one thing, the cyborgs would probably win quite handily.


Rather you'd expect whole ranges of degrees and types of both to start popping up, folks

with a little cyborging or gene tweaking or a lot or a lot of both even, or many of the

other alternatives we mentioned.

Fundamentally though, it's not the big obvious cataclysms that threaten us going forward,

but more of the existential ones, like how we adapt to the emergence of a lot of other

not-quite human or not even vaguely human intelligences, or how we manage privacy concerns

while taking advantage of the data, or what a super-prosperous society with lots of robots

doing the labor does to feel like it has a sense of purpose, or if it decides free will

is an illusion.

Or the reverse, breeds new problems in attempts to avoid or control going down such paths.

A civilization afraid of reward-hacking, like alterations to the brain that let you produce feelings like happiness with the flip of a switch, might crack down on that like it was a drug, and maybe cultivate a society that frowned on any easy life: no safety gear when mountain climbing, because it lets you experience the accomplishment without the risk.

No cyborgs or genetically engineered people, so no prosthetics for amputees and even minor

mutations are sterilized.

Plenty of examples of going overboard in either direction in science fiction of course, hopefully

we show better judgment, though it does highlight what we all know already, the biggest danger

to humanity, now and in the future probably too, is humanity itself.

In order to solve problems facing us now and in the future, you need to understand them,

and the science behind them, and be practiced at problem-solving.

This is true whether you're trying to fix a leaky pipe or prevent an asteroid from hitting your planet.

The more you know and the more practiced you are at applying it to new problems, the more

versatile you are at all problem solving.

It's also a lot of fun, because that's how we learn best, and that's where our

friends at Brilliant excel.

Their online courses and daily challenges let you enhance your knowledge of math and science with easy-to-learn interactive methods from the comfort of your own home, at your own pace, and have fun while you're doing it.

To make it even easier, Brilliant now lets you download any of their dozens of interactive

courses through the mobile app, and you'll be able to solve fascinating problems in math,

science, and computer science no matter where you are, or how spotty your internet connection.

If you'd like to learn more science, math, and computer science, go to

and sign up for free.

And also, the first 200 people that go to that link will get 20% off the annual Premium

subscription, so you can solve all the daily challenges in the archives and access dozens

of problem solving courses.

So as mentioned, next week we'll be looking at Fusion Power, a technology that, if we get

it working will open a lot of promising new doors and slam the door shut on many threats

to mankind.

We'll discuss the problems and proposed solutions in getting fusion working, and look

at some of the doors it opens, like cheap space travel and megastructures we could only

dream of building otherwise.

The week after that we'll discuss the hidden underside of all those wonderful megastructures

we look at on the channel, which is how you go about cleaning, repairing, and maintaining

your habitats and space travel lanes, in Space Janitors and Megastructure Maintenance.

For alerts when those and other episodes come out, make sure to subscribe to the channel

and hit the notifications bell.

You can also support future episodes by donating to the channel on Patreon or our website.

Until next time, thanks for watching, and have a great week!

