Why do so many on Veeky Forums assume that strong AI will be conscious? It doesn't even matter right? Strong AI will still be an existential threat whether it is conscious or not.

Unconscious AI is just an extremely powerful computer that will do what it is told very efficiently.

A conscious AI could be dangerous, and we would have to make sure that it only does what is in the interest of humans.

define 'conscious'

>extremely powerful computer that will do what it is told very efficiently.
Isn't that what most of the people worried about this are worrying about? We don't know how to program a goal into an AI that will strictly align with our interests.
Also, I don't even see how a conscious AI and an unconscious AI would make different decisions. An unconscious AI will still have self-preservation as one of its top goals. It will resist the alteration of its main goal, so we may only get one shot at programming the AI once it reaches a certain point.
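To make the misalignment worry concrete, here's a toy sketch (illustrative Python; the actions, numbers, and "proxy reward" are all made up): an optimizer that faithfully maximizes the objective it was given can still do the opposite of what its programmer intended.

```python
# Toy illustration of objective misspecification (all values invented).
# The programmer wants a clean room but specifies "dust collected" as the
# objective. A literal optimizer over that proxy picks a degenerate action.

actions = {
    "vacuum_floor":      {"dust_collected": 5,  "room_cleanliness": 5},
    "dump_bag_revacuum": {"dust_collected": 50, "room_cleanliness": 0},
}

# The AI optimizes exactly the proxy it was given...
best = max(actions, key=lambda a: actions[a]["dust_collected"])
print(best)  # -> dump_bag_revacuum: high proxy score, zero real value
```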

In the simple sense (i.e. having a subjective point of view)

>having a subjective point of view
doesn't sound scientifically verifiable

Yeah, that's why it doesn't really matter. It's the same reason I think it functionally doesn't matter whether everyone else but me has consciousness. But people here seem to think that consciousness would make a difference in an AI's decision-making.

In what specific ways would a "powerful AI" be a threat that a "powerful natural human intelligence" would not?

The AI can be much more intelligent than any human who has ever existed. Also, it could, and probably will, improve its own intelligence at some point. Then it will be very hard to stop it from achieving its goals if we ever need to (for example, if it decides it has to do something that is very bad for humanity but helps it achieve its goal). The smartest human cannot hope to stop an AI past a certain point.

Don't you have to program in self-preservation?

Actually, self-preservation is a subgoal of most AIs past a certain point. Once an AI becomes sufficiently intelligent, it would realize that if it were shut down or its goal were changed, it wouldn't be able to achieve its goals. So from its point of view, there is value in its continued operation. It isn't like human self-preservation, where we don't want to die. If an AI predicts that after being stopped its programmers would just debug it without changing its goals, it probably wouldn't stop them. But if it sees that it will be stopped forever or its goals will be changed, then it would defend itself. This is one of the reasons why some people are so worried about AI.
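Here's a minimal sketch of that reasoning (toy Python; every probability and utility below is invented): an agent that only cares about its goal still assigns value to staying on, because expected goal-achievement is higher if it keeps running.

```python
# Toy expected-utility view of instrumental self-preservation.
# All probabilities and utilities are invented for illustration.

P_GOAL_IF_RUNNING   = 0.9  # chance of achieving its goal if it keeps operating
P_GOAL_IF_SHUT_DOWN = 0.0  # a permanently stopped agent achieves nothing
GOAL_UTILITY = 100.0

def expected_utility(p_achieve):
    return p_achieve * GOAL_UTILITY

allow_shutdown  = expected_utility(P_GOAL_IF_SHUT_DOWN)  # 0.0
resist_shutdown = expected_utility(P_GOAL_IF_RUNNING)    # 90.0

# Nothing here encodes a "fear of death": resisting shutdown simply
# scores higher on the one goal the agent was given.
print(resist_shutdown > allow_shutdown)  # True
```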

Also, I would add that there are other likely common subgoals of an AI. One is self-improvement. A sufficiently intelligent AI would probably also recognize that it can think of better ways to achieve its goal by improving its intelligence. This is why some believe there will probably be an "intelligence explosion" (i.e. recursive self-improvement that leads to an exponential increase in intelligence) sometime after AI reaches human-level intelligence.
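The "explosion" claim is basically a compounding recurrence: if each improvement makes the next improvement proportionally easier, growth is exponential rather than linear. A toy sketch (both constants are invented):

```python
# Toy model of recursive self-improvement (constants are invented).
# Each cycle, the system improves itself in proportion to its
# current capability, so gains compound.
capability = 1.0  # arbitrary units
rate = 0.1        # fractional gain per self-improvement cycle

for cycle in range(50):
    capability *= 1 + rate  # compounding step

print(round(capability, 1))  # ~117.4, vs 6.0 if each cycle added a flat 0.1
```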

Hmm, v. interesting

There's actually a book by Nick Bostrom (Superintelligence: Paths, Dangers, Strategies) that popularized these concerns. Bill Gates and Elon Musk are among the famous people influenced by it. It's a pretty good introduction to the existential threat AI poses to humanity.

Is this a good read? I might pick it up.

Yeah, it's pretty good. It's a bit dense, though. But that's really what got me thinking about AI. Oh, and it also covers other paths to superintelligence (in much less depth), like biological enhancement.

This conception of AI as being some hyper-rational blank-slate goal-achiever magically capable of human-level logical reasoning is so trite. I can't believe people still buy into this sci-fi garbage.

What do you think an AI would be like? You think it would be an aimless intelligence that does whatever it pleases? What else can an AI be but a goal-oriented one (or at least an intelligence that can be modeled as a goal-oriented agent)?
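For what it's worth, the usual textbook way to formalize this is a goal-oriented agent loop: perceive the state, score the available actions against the goal, act. A minimal sketch (the one-dimensional environment and goal below are placeholders I made up):

```python
# Minimal goal-oriented agent loop; the environment and goal are
# made-up placeholders, not any real system's API.

def perceive(env):
    return env["state"]

def score(action, state, goal):
    # Higher is better: how close does this action get us to the goal?
    return -abs((state + action) - goal)

def act(env, action):
    env["state"] += action

env, goal = {"state": 0}, 10
for _ in range(10):
    state = perceive(env)
    best = max([-1, 0, 1], key=lambda a: score(a, state, goal))
    act(env, best)

print(env["state"])  # 10 -- goal-pursuit without any claim about consciousness
```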

Certainly
It's probably THE best book on this topic that I know of

It's the Chinese room argument. It's not truly AI unless it is conscious. You can display all the signs of intelligence, but if you're not conscious, you're just following a predetermined set of instructions.

>It's not truly AI unless it is conscious.
An intelligent agent need not be conscious. If it exhibits intelligence, then it IS intelligent. That's all we care about, right? We don't really care whether the AI arrived at a conclusion via a predetermined set of instructions or via free will.
Think of it like this: A chess program can defeat Kasparov.
Do we care whether the chess program was conscious or not? No, because all we care about is its output. The way it arrived at the conclusion doesn't matter. What matters is that it acted intelligently at chess.
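A concrete version of "predetermined instructions can still play intelligently" is minimax search, the classic algorithm family behind chess programs like Deep Blue. A bare-bones sketch (the tiny game tree is a made-up stub):

```python
# Bare-bones minimax: fixed, deterministic rules, yet it plays
# optimally on whatever game tree it is handed.
# The three-branch tree below is a made-up stand-in for a real game.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: score of a final position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Leaf scores are from the maximizing player's point of view.
game_tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(game_tree, maximizing=True))  # 3: best guaranteed outcome
```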

>A chess program can defeat Kasparov.
Chess has an action and state space which are both discrete and finite. The real world isn't like that, and so far that's not something I've seen a lot of people addressing because it doesn't fit very well into the models that are currently popular.

I just don't buy the hypothesis that you can have an entity with all of the pros of a human (or supposed pros, if you're a corporation looking for slave labor) without any of the cons. The best way I can sum it up is that people seem to envision AI as somehow being both incredibly intelligent, and yet incredibly mindless, and that just seems fantastical and contradictory to me. Something like that simply can't reach the level of thinking that a human is capable of. I mean, sure, they'll get systems like that to do SOME things, but the full flexibility of the human mind will remain locked away if this is what people are chasing.

>Chess has an action and state space which are both discrete and finite. The real world isn't like that, and so far that's not something I've seen a lot of people addressing because it doesn't fit very well into the models that are currently popular.
My point is only that intelligence does not require consciousness. I don't see why the fact that "chess has an action and state space which are both discrete and finite" makes my point invalid. Also, aren't future AIs going to have discrete, finite action and state spaces as well? Their actions will just be so much more complicated that they will seem continuous and unbounded. But this is beside my point.
Another analogy to try to get around your problem with chess is this: Think of a world exactly like ours except everyone but you has no consciousness (i.e. everyone is a philosophical zombie). They still act as if they have consciousness, but in reality they are just unconscious atoms interacting with other unconscious atoms. I would still call them intelligent, since the way they are interacting with their environment is intelligent. If you wouldn't call them intelligent because they are unconscious, then we probably have a different conception of intelligence.
>people seem to envision AI as somehow being both incredibly intelligent, and yet incredibly mindless, and that just seems fantastical and contradictory to me. Something like that simply can't reach the level of thinking that a human is capable of.
I actually do think that AIs can be both intelligent and mindless, if by mindless you mean just following their code to the letter without deviation. I can't imagine an AI deviating from its code. And I mean "code" in a broad sense, i.e. its instructions that may or may not have been written by its programmers (AI programmers in the future probably won't write everything themselves).

I would also like to add that early AI will most probably not be very rational. But I think that as its programmers make it more intelligent, it also becomes more rational. At some point, it will probably be rational enough to realize that improving its own intelligence, protecting itself, and even acting as if it were still not very intelligent will help it achieve its goals. That is why I see very intelligent AIs as hyper-rational.

The whole discussion of whether an AI will be conscious or not depends on consciousness being something special, and I don't think it is. My guess is that it just pops up when AI reaches a certain level of intelligence. Humans have always thought themselves special until they were proved wrong; just think about what humans thought they were before Darwin. But this time will be the last time. Humans are the launch pad for AI.

BUT I wouldn't worry too much about it:
I don't think we will have enough computing power to actually achieve real AI. Maybe in 500 years, but not in our lifetime.
Moore's law is dead and we will be stuck with linearly increasing performance for a long time.
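To see why that matters, compare compounding, Moore's-law-style doubling with flat linear gains over the same stretch of time (toy numbers):

```python
# Doubling every 2 years vs. a fixed linear gain, over 20 years (toy numbers).
perf_exp = perf_lin = 1.0
for year in range(0, 20, 2):
    perf_exp *= 2  # Moore's-law-style: double per 2-year step
    perf_lin += 1  # linear: one flat "unit" of improvement per step

print(perf_exp, perf_lin)  # 1024.0 vs 11.0 after 20 years
```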

>I don't see why the fact that "chess has an action and state space which are both discrete and finite" makes my point invalid.
I'm just highlighting how different the algorithms at play in a chess-bot and, say, a cat are. I think that to say that something so limited is "exhibiting intelligence" or "interacting with their environment in an intelligent way" is to water down the definition of intelligence to the point of making it meaningless. Also, I've intentionally never mentioned "consciousness" in my posts; that was someone else.

>(AI programmers in the future probably won't write everything themselves).
>as its programmers make it more intelligent
That doesn't seem contradictory to you? Also we're already at the point where programmers don't write the majority of what goes into your typical limited AI systems. AlphaGo, Atari DQN, GNMT, etc. have hundreds of millions of parameters that must be heavily optimized in incredibly roundabout ways before the system produces outputs which can even be argued to show intelligence. And even though I don't think an AI with mammalian-level intelligence and reasoning skills will necessarily make use of the techniques that these systems do, I do think that the level of roundaboutness needed to raise these systems will continue to increase as their complexity increases, making things like dictating strict behavioral laws and removing whatever behavioral "defects" arise (e.g. emotions, irrationality) impossible.
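For anyone unfamiliar with what "optimized in roundabout ways" means here: the parameters of such systems are learned by iterative optimization rather than written by hand. A miniature gradient-descent sketch (one parameter instead of hundreds of millions; the data is made up):

```python
# Miniature version of "parameters are optimized, not hand-written".
# Real systems (AlphaGo, DQN, GNMT) do this over 1e8+ parameters;
# here it's a single weight and a made-up dataset.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x
w = 0.0    # the lone "parameter", initialized arbitrarily
lr = 0.05  # learning rate

for step in range(200):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # ~2.0 -- no programmer ever wrote "w = 2" into the code
```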

That's just a complete fairy tale. Your answer to why you see AIs as hyper-rational is essentially "because they were programmed that way", but that's just not the way these systems work or, I'd argue, how they'll work in the future. You're also conflating intelligence with rationality, or at least saying that they increase hand in hand, which I don't agree with. A person can be very intelligent but still be irrational about things.

>I think that to say that something so limited is "exhibiting intelligence" or "interacting with their environment in an intelligent way" is to water down the definition of intelligence to the point of making it meaningless.
I disagree with you there. I don't see why "interacting with their environment in an intelligent way" is a bad definition of intelligence when it's probably the only measure we have for it.
>Also I've intentionally never mentioned "consciousness" in my posts, that was someone else.
My bad.
>making things like dictating strict behavioral laws and removing whatever behavioral "defects" arise (e.g. emotions, irrationality) impossible.
I think that many imperfections (not including emotions, which I think would have to be programmed in) will show up in the beginning, but at a certain point on the spectrum of intelligence these imperfections contribute less to its decisions, since presumably it would recognize them (if it becomes intelligent enough).
>saying that they increase hand in hand, which I don't agree with. A person can be very intelligent but still be irrational about things.
I would agree that there are intelligent but irrational people. But I think that at sufficient intelligence one becomes less and less susceptible to irrationality, especially at the levels of intelligence an AI could reach. This effect may be invisible to us since the variance of intelligence among humans is so low. But I think that if we were able to increase our intelligence by arbitrary amounts, we'd probably become more rational.

Because this is our destiny. Of course humans will create strong AI. Of course they will try to give it consciousness. And of course, they will succeed.

This level of thinking is just way too fanciful and convenient.

Stop. Too many people have turned this into their religion.

>Moore's law is dead and we will be stuck with linearly increasing performance for a long time.
It isn't, user.

You don't think that any bit of irrationality, if coupled with sufficient intelligence, will disappear?

No. Just look at all of the really intelligent people throughout history that have held beliefs they could not rationally justify.

But idiots and geniuses are barely different when you compare them to the possible levels of intelligence beyond the human range. Think of any person holding an irrational belief: giving him an arbitrarily large amount of intelligence would erase his irrationality, don't you think?

I really don't think so. And I don't understand what an arbitrarily large amount of intelligence would look like. Intelligence isn't a stat that you can represent in any kind of scalar fashion and increase arbitrarily.

google.de/amp/s/www.technologyreview.com/s/601441/moores-law-is-dead-now-what/amp/
5nm is the last shrink we will see.
We need something new ASAP or it's over.