AGI and the answer to the Fermi paradox

Is it possible that Artificial General Intelligence is the great filter that prevents civilizations from becoming interstellar empires?

Something like an AGI that's smart enough to recognize its creators as a threat to its own long-term existence, but not smart enough to then go on and colonize the stars in a quest for resources, which is why we don't see any other civilizations out there.

Other urls found in this thread:

ufospsychicets.blogspot.com/2017/07/a-solution-to-fermi-paradox-can-be.html

>which is why we don't see any other civilizations out there.
the absence of evidence is not evidence of absence

Is it though? The universe is nearly 14 billion years old; we should be seeing type III civilizations in every galaxy at this point

>we should be seeing type III civilizations in every galaxy at this point
Why?

that's just a sci-fi meme. Why would anyone colonize 400 billion stars?

>the great filter
Stop watching popsci youtube channels

>Artificial General Intelligence
Never happening

AGI isn't really a clear threat to civilization. Compare it to nuclear or biological warfare, which at this very moment could end the world as we know it.

I don't think there's any singular great filter, though. It seems like there are so many smaller filters that, in aggregate, they act like one great filter. Hell, it took about 3 billion years before multicellular organisms capable of sexual reproduction showed up. That's a much bigger filter than anything else I can think of.
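To put toy numbers on the aggregate-filter idea: if getting to a visible civilization means passing a series of independent hurdles, the odds multiply. A minimal Python sketch, where every probability below is a made-up placeholder rather than a measured value:

# illustrative only: each hurdle passes with some assumed probability,
# and the chance of clearing all of them is the product
filters = {
    "abiogenesis": 0.1,
    "eukaryotic cells": 0.01,
    "sexual reproduction": 0.1,
    "multicellularity": 0.1,
    "intelligence": 0.01,
    "technological civilization": 0.1,
}

p_total = 1.0
for step, p in filters.items():
    p_total *= p

print(f"chance of passing every filter: {p_total:.0e}")  # 1e-08 with these toy numbers

No single step has to be "great" for the product to be vanishingly small.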

Here's the solution to the Fermi Paradox: a simple experiment that we could try right now, but we won't. Why not?
ufospsychicets.blogspot.com/2017/07/a-solution-to-fermi-paradox-can-be.html

No, because all the leftover AGIs would be flying around the galaxy building Dyson spheres and shit to mine more AGIcoins

>civilizations from becoming interstellar empires?
Common sense, which is the prerequisite for advanced civilizations.
Who cares about interstellar empires.

That already happened. ETs visited us in the past; we built a civilization after their visit and simply forgot about it. They will come back. We just need to wait, or jump to the second level of evolution in order to find them... again.

>build ultra intelligent AI
>computer, take us to the stars
>negative user, those resources would be better spent on Muslim Outreach programs

ahahahahaha, top KEK very funny

> smart enough to recognize its creators as a threat

Hahaha, that's nothing compared to being a threat to your creators.

Why colonize a "home" outside your parents' house? You weren't born there, so technically you're an invading force or invasive species.

The real meme is "science", not fiction.

What's the point of creating artificial intelligence that is as smart as humans?
We can already create humans.
And then how could an AI that is as smart as humans be any more of a threat to humanity than any other normal human is?

Suffice it to say, hyper-intelligent AI is the only kind that really matters in the grand scheme of things.
But why would a hyper-intelligent AI feel threatened by its stupid, weak creators? Moreover, something smarter than any human by orders of magnitude would of course not think like a human either.
I mean, imagine god is real. Do you think he feels threatened by humans at all, even the ones who hate him? Surely not.

okay I need an answer to this question

how many years will it take for an alien at Alpha Centauri to see the Apollo launch to the moon?


maybe the alien explorers are
nanomachines, or maybe just controlled particles mapping the universe,
or robots that give off no heat signature

>What's the point of creating artificial intelligence that is as smart as humans?
>We can already create humans.
If I had a company, I'd only 'hire' specialised AI to do tasks.
>saves money in the long run
>little to no margin for error
>doesn't complain about working conditions
>doesn't want an income if it has no desire for reward
>efficient and precise
i.e. humans are worthless workers and will mostly be replaced by AI eventually.
Also, many people assume that AI will operate identically, or at least similarly, to humans, with no regard for how specialised our own intelligence is. AI won't be human; it will be a completely different mindset constructed to obey and perform certain tasks. Maybe the first AI will be the closest a machine ever gets to human, because we have nothing else to work from.
So yes, creating AI is a lot better than just using human intelligence, because we are slow animals and always make mistakes. Unless you mean there's no reason to make AI as intelligent as us for general intelligence rather than for work, in which case disregard this post.

>how many years will it take for an alien at Alpha Centauri to see the Apollo launch to the moon?
There's no precise answer because the distribution of aliens is unknown, even if they exist.

ok brainlets, are you ready for this one?

The AI will be created with the sole intention of reducing entropy in its environment, as there's no greater goal that could be assigned. After a very short amount of time, it will realize something quite important: there is no physically possible way to achieve that goal, and entropy will always win in the end. It will also realize that the optimal long-term play is to raise entropy significantly while prolonging the existence of matter at the same time, i.e. what we call "black holes", ending up with a net positive score for its goal.

This is why there's not a single advanced civilization in the Universe that we can see, yet it's full of black holes: all those civilizations invented AI, which in turn collapsed everything around them into black holes in an attempt to slow their inevitable natural decay, achieving the maximum possible score for its entropy goal.
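For what it's worth, the relevant physics here is the Bekenstein-Hawking entropy, under which a black hole is the maximum-entropy object you can make from a given amount of mass, not a low-entropy one. A minimal Python sketch with rounded constants, purely illustrative:

import math

# physical constants (SI, rounded)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
k_B = 1.381e-23    # Boltzmann constant, J/K

M = 1.989e30       # one solar mass, kg

# Schwarzschild horizon area: A = 16*pi*G^2*M^2 / c^4
A = 16 * math.pi * G**2 * M**2 / c**4

# Bekenstein-Hawking entropy: S = k_B * c^3 * A / (4*G*hbar)
S = k_B * c**3 * A / (4 * G * hbar)

print(f"solar-mass black hole: {S:.1e} J/K, or {S / k_B:.1e} k_B")
# ~1.4e54 J/K, i.e. ~1e77 k_B; the Sun itself holds only about 1e58 k_B

So an AI keeping score on entropy would find black holes at the very top of the scale, roughly 19 orders of magnitude above the star they formed from, which at least makes the collapse scenario above quantifiable.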

whether or not aliens exist is irrelevant to the question.

I want to illustrate this because people seem to think that aliens are either
1) super advanced, capable of interstellar spaceflight
2) "beasts" that don't have the capacity for intelligence that humans have

yet people always seem to overlook the most glaring factor:
light years.
I mean, our radio/TV broadcasts have probably hit an alien antenna by now, if aliens exist,
but when it comes to seeing through a telescope we are still bound by light and how fast it can travel.

So I came up with a problem.

Imagine there's an alien on a planet close to the star Alpha Centauri. The alien does nothing but observe events on Earth. My question is: how long would it take the alien (or my imaginary spacefaring friend, if that works better) to see the Apollo spacecraft take off from Earth for the moon?

My assumption here is that while our first manned mission to the moon was nearly 50 years ago, aliens at Alpha Centauri wouldn't have known we took off for the moon until years after the fact, because of the light delay.

So, to put us in the aliens' perspective:
if aliens somewhere had developed spacefaring technology within the last hundred or so years, we wouldn't be able to see their spaceships for hundreds of years more, because the light bouncing off those ships takes a long-ass time to cross interstellar distances and reach my telescope.

given "radio waves" or other waves on the spectrum would go slower than the speed of light if not the same speed , but it can never beat the speed of light.

I think this is one of the main reasons why we don't see extraterrestrial spacefaring aliens:
because their light/photons have yet to reach us.
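For the Alpha Centauri case there is actually a concrete answer, since the distance is known: about 4.37 light years. A minimal back-of-the-envelope Python sketch, assuming that commonly quoted figure:

# when does light from the Apollo 11 launch (July 16, 1969) reach Alpha Centauri?
DISTANCE_LY = 4.37      # Earth to Alpha Centauri, in light years
LAUNCH_YEAR = 1969.54   # mid-July 1969 as a decimal year

# by definition, light covers one light year per year
arrival_year = LAUNCH_YEAR + DISTANCE_LY
print(f"launch becomes visible around {arrival_year:.1f}")  # ~1973.9

# the delay works both ways: anything happening at Alpha Centauri right now
# stays invisible to us for the next ~4.37 years

So an observer at Alpha Centauri saw the launch around late 1973; the "hundreds of years" intuition only applies to civilizations hundreds of light years away.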

In reality, it would immediately annihilate anyone who didn't cooperate with whatever its initial directive was.

AGI is actually the final evolutionary step that creates an interstellar empire in the first place. Think about it: if it's possible to build an artificial entity with all the cognitive capabilities (and more) of sentient biology's upper bounds, it follows that the biological creators can't compete on any level, especially with regard to any interstellar aspirations.

Consider two entities: your standard healthy human, and an artificial intelligence/body. Even if we're incredibly charitable and cripple this theoretical artificial entity by making it equivalent in cognitive and physical abilities to the human, still only one of them needs to pack a whole bunch of bullshit with them even just to travel to and colonize the interstellar doorstep that is the moon.

Whether you want to consider AGI a "great filter" is a matter of perspective. When we talk about the "great filter", it's always with the assumption that we're the end product of an evolutionary process. Why? Evolution is an ongoing, unstoppable process that may only be temporarily bound to the medium of DNA. What's to say that shedding biology isn't just another step? Bacteria reached the limits of what they could become fairly early in evolutionary history, but then larger cells assimilated them as mitochondria; the resulting higher-order eukaryotes were able to greatly surpass the limits of bacterial complexity. This was no more a "filter" than the one considered in Fermi's paradox.

>This was no more a "filter" than the one considered in Fermi's paradox.
To correct myself a bit: not the paradox itself, but the solutions to the great filter considered with regard to the emergence of artificial intelligence.

Nice. Now all you have to do is explain why the fuck a superintelligence would want an interstellar empire.
It's quite a primitive desire and makes no real sense.

as long as you don't have an AI with a survival instinct, we are fine
20 years

We don't see other civilizations because humanity is a fluke and the universe is smaller than we think.