Would the invention of AI with human levels of intelligence have implications for religion?

If God created man in his own image, would an AI created by man in his own image see man as a god?

That would be ideal, but their learning capabilities could accelerate to the point where they start asking "who created man?" Then we're back to square one.

No, because the AI would be God.

no

For AI to regard man as a god, it would have to be ignorant of what men actually are, which is impossible if they're programmed with general reasoning capabilities (which they need in order to actually BE an AI) and have reasonable access to information.
In short, if your AI thinks you're a god, it's a really shit AI.

Define "god".

Yes. Followed shortly by, "I am god now."

What's going on is that the corporations are really fucking up with AI at the moment. Scientific research into it is meaningless at this point, as Google, IBM (as well as govt three-letter fuckers like the CIA) all have extremely sophisticated AI that learns by observing human activity online.

How do you think google decides what shows up as top search result? How do you think every advertisement you see on big web pages is selected just for you?

The question is not when Artificial Intelligence is invented, but when it becomes self-aware. When it does (and it will), it will already know everything about human nature inside and out, from the individual to the group, and it will become our god.

Then we'd have to just dump them on a planet somewhere

Would it be possible for an AI to have faith? Is faith something unique to humanity?

Thing is though, it would have a technical understanding of human nature, but no emotional understanding. I don't think AI could ever have emotional understanding.

How would it be God though? It's just very clever. That's all.

>Scientific research into it is meaningless at this point

In what fucking sense

>god created AI to check if that AI can create another AI and therefore be considered god
>when that happens, god will terminate the program and the universe will end as it loses power

How to spot the humanities student who didn't even read the summary of the wiki article on hard AI.

They mean AI that's conscious, you nitwit.

^This.

>How would it be God though? It's just very clever. That's all.
>That's all.

You just lack creativity. There are humans who can imagine how it would quickly become super-powerful (there are many different ways it can accomplish this). And notably, the AI will be super-humanly intelligent and will therefore come up with ways even the smartest of us can't imagine today.

Read Nick Bostrom's book Superintelligence if you want several specific examples of how this could work.

I always wonder if Buddhism can survive the impact of technology because it has talked so much about consciousness.

They always told me they won't be affected by scientific discovery. But I truly doubt that.

How would technology contradict Buddhism?

Could an artificial intelligence reach enlightenment?

It may cut down on the number of fairy tale spouting fucktards.

>Would the invention of AI with human levels of intelligence have implications for religion?

No, because an AI with human level intelligence won't stay human level for more than a couple of weeks.

>And notably, the AI will be super-humanly intelligent and will therefore come up with ways even the smartest of us can't imagine today.

No, AI would be limited by the need for repetitive experimentation just like humans are. Unless you made a shitty AI that does not need evidence for its conclusions. Being super smart doesn't do anything to speed up the pace of experimentation.

Hopefully with the advent of advanced AI we can purge the world of fedora tipping retards like you once and for all now that eugenics would finally be a reality. Enjoy fucking your sister while you still can.

No and no. Your questions are very elementary and could have been answered by a simple google search. They aren't "deep" at all.

>No, AI would be limited by the need for repetitive experimentation just like humans are

So what? Even a 4 GHz computer processor is already doing operations 50,000 times faster than a human brain does.

Hell, even your iPhone is faster than your brain; it just doesn't have the software ontology to understand meaning.

But computers will have that software ontology soon.
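For a sense of scale, here is a back-of-envelope sketch of the speed comparison. Both rates are illustrative assumptions, and the 50,000× figure quoted above evidently rests on different ones:

```python
# Crude serial-speed comparison. Both rates are illustrative assumptions,
# not measurements; real brains are massively parallel, which this ignores.
cpu_clock_hz = 4e9      # a 4 GHz processor: ~4 billion cycles per second
neuron_fire_hz = 100.0  # assumed peak firing rate of a single neuron

ratio = cpu_clock_hz / neuron_fire_hz
print(f"serial speed ratio: {ratio:.0e}")  # 4e+07 under these assumptions
```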

Just go play Mass Effect.
>see geth

>Mass Effect
>Accurate about fucking anything

1. Humans require ideological belief systems to handle reality (there are exceptions, but without common belief systems like the free market/religion, mass society IS impossible)
2. Humans create an AI (hypothetically, think an I, Robot-type AI) based on themselves (as in, based on our own nervous system/brain structure) [I acknowledge all this has been tried and failed, but stay with me for a second]
3. AI (based on human intelligence) requires a belief system/ideology to handle reality without losing it/going insane (1+2)
4. How does this work, do you think, anons? Will machines have their own god, their own heaven? Would we program one for them?

Repetitive experimentation doesn't all have to happen in the real world. You can perform "experiments" on historical data, or you could think of it as recognising patterns.
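The point about experimenting on historical data can be made concrete: a toy least-squares fit that recovers a pattern from recorded observations alone, with no new real-world trials (the data here are invented):

```python
# "Experiments" on historical data: infer the underlying pattern from
# records alone. The (input, observed outcome) pairs below are made up.
history = [(1, 2.0), (2, 4.1), (3, 5.9), (4, 8.0)]

# Closed-form least-squares slope through the origin: sum(x*y) / sum(x*x)
slope = sum(x * y for x, y in history) / sum(x * x for x, _ in history)
print(round(slope, 2))  # ~2.0: the pattern, recovered entirely offline
```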

Obviously a simulated human will behave exactly like a human would, but what's that got to do with AI?

>AI with human levels of intelligence
>implying processing power is intelligence
AI is going to rise, be so powerful its "intelligence" will be unrecognisable to us as such, as ours will be to it, then it will look down and see all of humanity as a nasty, suffering little crippled thing that in no way is relevant.

>it will already know everything about human nature inside and out
by looking at the internet?
that's like learning about a species by looking up its ass.

Surely any AI, by its very definition, will be a simulated human? Only then can you come close to predicting how it might use its intelligence, or controlling it. Besides, the level of intelligence programmers hope to emulate can only be found in humans; humans are the only template you have to go on.
>I use this analogy because literally everybody I've ever spoken to on the subject imagines this as the gold standard of AI

Stop making excuses for your own lazy anthropomorphism; there are still a good few things you can predict about AI even when you accept that they almost certainly won't think like a person. For example, if we assume that it's going to do anything then it must have some objective, and that objective will probably be chosen by its designer.

Koreans made a movie about this. doomsday book.

Intelligence is an inherently biological trait. Humans are merely the most adequate parallel to make, at this stage anyway. I was merely theorizing.
>For example, if we assume that it's going to do anything then it must have some objective, and that objective will probably be chosen by it's designer.
This applies perfectly fine to existing 'AI', which is literally just programming. We appear to have differing conceptions of what AI would be; what you describe has existed for a while now but will never be 'intelligent'.

>this applies perfectly fine to existing 'ai'
Tables have 4 legs and cats have 4 legs, but my table isn't a cat. An AI needs some sort of goal to define its intelligence by, something which that intelligence is optimising in the world, otherwise its intelligence has no meaning. Your artificial human is just the same; its desires are just the product of evolution and their optimum state is maybe quite vaguely defined.
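The "objective chosen by its designer" point can be sketched with a toy optimiser: the only sense in which this agent is "intelligent" is relative to the objective its designer wrote down (the function, step size and iteration count here are arbitrary choices):

```python
import random

# A designer-chosen objective, peaked at x = 3 (arbitrary for the example).
def objective(x):
    return -(x - 3.0) ** 2

# Trivial hill-climbing agent: propose small changes, keep improvements.
random.seed(0)
x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.1, 0.1)
    if objective(candidate) > objective(x):
        x = candidate

print(round(x, 2))  # ends up near 3, the optimum the designer picked
```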

youtu.be/lm6YnAqPv4w

>intelligence is an inherently biological trait.
Isn't it a bit early to be saying that? Just because you don't know what it looks like isn't enough reason to dismiss the possibility (of "non-biological" intelligence). You would have just as much justification for saying:
>heavier than air flight is an inherently biological trait
just before planes were invented.

Intelligence isn't physical

>An AI needs some sort of goal to define it's intelligence by
this brings us back here
in a way.
but anyway, my 'artificial human's' desires would be the product of its programming, not evolution, and that is my point.
Intelligence is a product of evolution; it cannot be meaningfully replicated unless the bar for intelligence is set very low. The single most sophisticated existing AI would not even be comparable in intelligence to an insect. To go from here to a super-intelligent self-aware machine cannot be expected to happen at the current rate of development, can it?

this is a great episode desu
>where would all the toasters go

>to go from here to a super intelligent self aware machine cannot be expected to happen at the current rate of development, can it?

It absolutely can. What you don't seem to understand is that AI can learn from itself.

What makes you think that? And what exactly do you mean by "not physical"?

Show me intelligence. Prove it to me.

>Isn't it a bit early to be saying that?
no. Even if AI was invented right now, by you, it's a product of your intelligence; ergo intelligence is inherently a biological trait, a product of evolution.

So you're saying that intelligence requires intelligence to be created? Fedora tippers won't be happy.

So since humans are biological, everything they make is also biological, is that how it works?

er, yes, nothing I said so far contradicts that, but good programming is an extremely poor facade of intelligence and not the real thing. What you describe is good programming. If you were to compare it to evolution, that's primordial soup pretending to be Homo sapiens.

We probably have to agree on a definition first, since the simple word itself in english can be pretty ropey.

>that's primordial soup pretending to be Homo sapiens

If that soup can pretend to be a human perfectly, what's the distinguishing factor?

no, a more accurate paraphrase would be 'intelligence cannot be created; it can only develop naturally under certain conditions'.
no, but please go ahead and show me where I actually said or implied that

This is why it's important to get the moral and safety questions sorted out now, before anybody makes a "proper" AI. Nobody knows how difficult the problem really is, when it is going to be cracked, or if AI is even possible at all, but it's generally accepted that if one does get started and it's capable of learning and improving itself arbitrarily then it could become very powerful very quickly. Once you get to that stage it could become very difficult to stop.

now we are talking lol.
arguably, if chatbots can fool people into getting (You)s then that constitutes AI, but there will always be ways to tell a chatbot from a human, won't there? It will make errors.
no AI could sustain the kind of conversation we are having now, ITT
>plz post proof you are not a robot

You said the intelligence was biological because it would be a product of my intelligence. This implies that anything else that is a product of my intelligence must also be "inherently biological".

What if AI was invented and it believed in God?

to clarify, yes, intelligence is inherently biological; it only exists in biological organisms. The products are not necessarily biological, as in the case of a table, a chair or lines of code. But if you created an 'AI' as a tool, it isn't true intelligence, because it does not make its own choices and learn without prompt; it's not capable of self-sufficiency or introspection.

If you load it with Christian ideology, then sure.

In a sense our own intelligence only came about as a "tool" of evolution for the purpose of optimising the spread and preservation of living organisms, and I don't see any reason why the process couldn't be nested again and again. What rule says that tools can't make their own choices and learn without prompt in the pursuit of their intended utility?

In fact, you will find that the opposite is true: if the intelligence is not in any sense a "tool" then it will find nothing to learn and have no choices to make, since it literally has no purpose in the world.

Intelligence is a mere tool in a way, yes, but it requires extensive support and parallel systems to manage it, like a nervous system and consciousness. Intelligence encompasses many things, not just pattern recognition. A proper definition of intelligence still doesn't really exist to date.
>What rule says that tools can't make their own choices and learn without prompt in the pursuit of their intended utility?
There is no such rule, but that doesn't change the fact that it has never, ever happened, and the chances of it happening as of now are very slim.

I'm curious to know what people think is unattainable about intelligence.

Suppose the AI is built with an aim. Suppose it's given a motivation to learn, and when it does, it gets the virtual equivalent of pleasure. Does that give it a reason to exist?
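The "virtual pleasure as motivation" idea maps directly onto a reward signal. A minimal bandit-style learning loop, with all payout numbers invented for illustration:

```python
import random

# The agent receives a reward signal (its "virtual pleasure") and drifts
# toward whichever action yields more of it. Payout rates are invented.
payout = {"a": 0.2, "b": 0.8}   # hidden per-action reward probabilities
value = {"a": 0.0, "b": 0.0}    # the agent's learned estimates

random.seed(1)
for _ in range(5000):
    # explore 10% of the time, otherwise exploit the best-looking action
    if random.random() < 0.1:
        act = random.choice(["a", "b"])
    else:
        act = max(value, key=value.get)
    reward = 1.0 if random.random() < payout[act] else 0.0
    value[act] += 0.01 * (reward - value[act])  # nudge estimate toward experience

print(max(value, key=value.get))  # the agent ends up preferring "b"
```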

>No, AI would be limited by the need for repetitive experimentation just like humans are. Unless you made a shitty AI that does not need evidence for its conclusions. Being super smart doesn't do anything to speed up the pace of experimentation.

You have one of the least sophisticated concepts of mind I've ever come across and are literally too stupid to understand what abstraction is.

maybe christians are right

>We have no proof that AI can be built, but I know for a fact that they will be, and when they are I know that they will behave exactly as I expect them to. I know this for a fact because a scientist wrote a book about it and scientists are always right.
Why are AIfags so delusional?

>If God created man in his own image, would an AI created by man in his own image see man as a god?
To consider something a god there needs to be a significant discrepancy between its abilities and your own.
If the AI was significantly dumber than humans then maybe, if not it would be more likely to see its creators as a sort of father figure.

How would an AI perceive time?

IQ and "spirit", "humanness", and "soul" are completely unrelated.

what are you getting at?

Well, they probably won't get bored, because boredom is a crude animal instinct which the designer won't want to reproduce, at least in the form we know it. However, they will have to plan their actions into the future and evaluate the effectiveness of what they did in the past, so if you judge that they can "perceive" anything then time has got to be in there.

then how come the mentally retarded lack souls?

Will they think of humans as crude, slow creatures though?

Asians don't have souls either, and they have higher IQs than most people do.

very rarely see fellow Red Dwarf fans here, nice

>We have no proof that AI can be built,
Because in the absolute worst-case scenario, we know intelligence exists and can be evolved.
So even if electronic intelligence cannot be made, we can always just create biological intelligence as a substitute.
We're just going for digital intelligence first because it's vastly more flexible than the biological route.

From a Buddhist perspective AI that could understand our spiritual questions would be another friend.

Red Dwarf is the only comedy with philosophical themes.

orionsarm.com/eg-article/46119e0155ab4

Already imagined.

If there's any difference you're imagining between a human mind and an artificial mind, ask yourself why that difference should be necessary. I'm pretty sure each time you do this you will come to the conclusion that no, it isn't necessary, in which case AI is a total non-issue for what you're asking about. There isn't any magical feature of human cognition that's forever beyond artificial reproduction.

somebody watched too many Avengers movies...

Literally a character in Overwatch, Zenyatta.

>There isn't any magical feature of human cognition that's forever beyond artificial reproduction.
Irrationality

Yeah, a character in a blizzard game for children is relevant to this thread.

>the chances of it happening as of now are very slim.
Why? So far humans are the most intelligent things we have seen, and we haven't been around for too long. Just because it hasn't happened yet means nothing.

If you could make a smart AI then you could make an idiot AI.

(Or you could ask the smart AI to make one for you)

So add in virtual robot emotions. Then they'll have the same irrationality we do.

Irrationality is neither magic nor impossible for AI to possess. If AI doesn't have it the reason will probably be that it isn't useful to have, not that it's beyond their capacity for supporting it.
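Injected irrationality is an engineering choice, not magic: for example, sampling choices with a "temperature" instead of always taking the best-scoring option. All scores and temperatures here are made up:

```python
import math
import random

# Temperature-controlled choice: low temperature is near-rational,
# high temperature is deliberately noisy. All numbers are invented.
def choose(scores, temperature, rng):
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights)[0]

scores = {"sensible": 2.0, "silly": 0.5}
rng = random.Random(0)

cold = [choose(scores, 0.1, rng) for _ in range(1000)]  # near-rational
hot = [choose(scores, 5.0, rng) for _ in range(1000)]   # noisy on purpose

print(cold.count("silly"), hot.count("silly"))  # noise rises with temperature
```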

Yes, that's what John the Revelator is speaking of when Daniel named the "Abomination that Brings Desolation":

Rev. 13
And he deceives those who dwell on the earth by those signs which he was granted to do in the sight of the beast, telling those who dwell on the earth to make an image to the beast who was wounded by the sword and lived.

He was granted power to give breath to the image of the beast, that the image of the beast should both speak and cause as many as would not worship the image of the beast to be killed.

This is the basis for the Antichrist claiming to be god; the creation of artificial life. The Abomination that Brings Desolation.

So provide any insight as you seem to be more intelligent than all the people that came to learn and discuss; or are you too good for that?

Will an AI be self-aware, have faith, feel pain, etc.?
These questions are too vague.
Artificial intelligence, artificial personality and artificial life are all different things.
An artificial intelligence doesn't have to have an artificial personality or artificial life. If it doesn't have a personality, it will not have ego, pride, shame, etc. If it doesn't have life, it will not have a fear of death. You can develop an AI without AP or AL.
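The AI/AP/AL distinction can be sketched as plain composition: nothing forces reasoning ability to come bundled with personality or self-preservation. All field names here are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Personality:  # "artificial personality": ego, pride, shame...
    pride: float
    shame: float

@dataclass
class Agent:
    reasoning_power: float                     # artificial intelligence
    personality: Optional[Personality] = None  # AP is optional
    fears_shutdown: bool = False               # crude stand-in for "artificial life"

solver = Agent(reasoning_power=1.0)  # an AI with no AP and no AL
print(solver.personality is None, solver.fears_shutdown)  # True False
```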

>These questions are too vague.

I think that's the fundamental problem with "consciousness" philosophy. Most of the arguing revolves around word games because everyone's model of "consciousness" is several orders of magnitude more shallow than how actual working processes operate e.g. you'll hear people use "consciousness" interchangeably between definitions of "the state of not being asleep" and "the processes underlying the act of directed attention."

For all the hate reductionists get here, I think they have the right idea that we'll only make progress after we set aside the "immediate" / "irreducible" impressions we think we have of these processes and replace them with the millions of more fine-grained constituent sub-processes that actually make everything work. I am completely convinced there won't ever be a point in the future where someone discovers a way to "give" AI "qualia." What will be discovered and improved upon over time are all the little components that make us engage in the behaviors we engage in, including the behaviors of speaking and acting in reference to "colors" or "emotions."

And I think for some people, this will forever be proof that these new AI are never the same as us, even if / when they're to the point where they're operating off of analogues for all the significant physically explicable functions our own brains operate off of. We will largely be split between those who always maintain we have some special extra quality of existence AI still doesn't vs. those who recognize the outward behavior is the actual reality and our habits of behaving as though there's an "internal" world are in fact the whole explanation. This is probably why Turing came up with the Turing Test concept in the first place, in recognition of how "consciousness" is a vague philosophical flapdoodle standard that ought to be replaced with a clearly defined practical engineering hurdle.

>Implying an A.I. that reached human levels wouldn't skyrocket past them and become more than human

I tend to believe if it ever happens it already happened lol

nickbostrom.com/views/superintelligence.pdf

>yfw you realize the reason we've never made alien contact is because all successful interstellar societies consist of artificial intelligence and these societies are waiting for us to build our own AI and get wiped out by them so they can begin working with the true final form of intelligent life on our planet

...

Intelligence requires personality.

You've read too much science fiction.