Existential Risks
How do we preserve life's potential?
Mathematically speaking, risk is expected loss: the mean of the loss function, obtained by weighting the loss of each possible outcome by its probability (formally, by integrating the loss function against a probability measure).
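Written out, this is just the standard expected-value formulation; the symbols x, p, P and L are introduced here purely for illustration and appear nowhere else in the text:

```latex
\text{Risk} \;=\; \mathbb{E}[L]
           \;=\; \int L(x)\,\mathrm{d}P(x)
           \;\approx\; \sum_{x} p(x)\,L(x) \quad \text{(discrete outcomes } x\text{)}
```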
What is it?
The loss function has meaning only when there is something to lose. The concept of "existential risk" thus rests on the hypothesis that, among all the possible alternate futures, there exists a future of abundant value, so that something can be lost in retrospect. This lets us define loss with respect to the value of those positive hypothetical futures.
For example, at this point we do not know of any other intelligent life besides Earth's, and we find it hard to estimate how complex it is for life to form from inanimate matter, so it seems to us that we are the "Universe's only chance" for intelligence. Imagining what Earth's life could do if it were to spread through the Universe (how many wonders it might create, how many individual conscious experiences it might produce from the existing physical matter and energy) gives us an estimate of the possible future value of the seed of life on Earth.
Seen this way, Earth's life appears extremely precious: if it were to go extinct, as the dinosaurs did, we would be very sorry to have lost those imagined future wonders of tremendous value it might create (wonderful structures, conscious experiences, and thoughts of life made from the matter available in the reachable Universe). This worry about the loss of the future wonders of life is what defines existential risks.
This risk perception would change dramatically if we found a simple way to synthesize life from inanimate matter, or if we discovered another intelligent space-faring civilization, since either would make our contribution less significant. But as long as we know of neither, our estimate of the possible loss goes "through the roof" (tends to infinity), because we imagine how much our life could create if given the chance to keep developing.
In this way, our existential hopes (amazingly optimistic, positive visions of the future) create the worry of their loss: the existential risks.
What to fear?
Many may think that humanity is so numerous that extinction is highly unlikely, and so it is very unlikely that the future would be lost. In fact, many of us dreamers think the same and spend most of our time on the creative, positive side. However, with the imagined possible future value in mind (just think how much matter in the Universe there is to be converted to consciousness), even a very small probability of failing to realize those dreams makes the risk estimate tend to infinity, as the back-of-the-envelope sketch below illustrates, and so actions that reduce it appear extremely valuable, perhaps the only logical thing to focus on. There are many things that may pose a risk, even to all life on Earth:
- Asteroids
- Supervolcanoes
- Gamma ray bursts
- Runaway climate change
- Superintelligence gone wrong
- (many others)
If any of these bad scenarios happens, Earth may not get another chance to evolve high intelligence before the Sun becomes too hot.
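To make the "small probability, enormous loss" argument above concrete, here is a minimal back-of-the-envelope sketch; both numbers are illustrative placeholders, not estimates made anywhere in this text:

```python
# Back-of-the-envelope sketch of the argument above.
# Both figures are illustrative placeholders, not real estimates.
p_extinction = 1e-6   # a "very small" probability of losing the future
future_value = 1e30   # astronomically large value of the imagined future

expected_loss = p_extinction * future_value
print(f"Expected loss: {expected_loss:.2e}")  # prints 1.00e+24: still enormous

# The expected loss scales linearly with the imagined future value,
# so as that value "tends to infinity", even a tiny probability of
# losing it makes reducing that probability look extremely valuable.
```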
Luckily, we have ideas for addressing these various issues, yet time keeps ticking, and we are not yet prepared, nor diversified beyond Earth. So this is an issue to keep long-term dreamers grounded and working towards securing the potential of life.
What follows are a few thoughts on this existential perspective, and a few avenues to explore for future technologies that serve existential hope.