We be fucked, yo

we be fucked, yo.

because sci

good.

You can't just 'invent' strong AI; it would be a step-by-step process, and even if someone did, couldn't you just cut the power off? I mean, it wouldn't be a god-like intelligence or something.

what if WE become the robots though bro.

>knowing anything about ethics implies that it is ethical to kill anyone who behaves unethically
>conflict between ethnic groups can only be resolved through extermination of one
>'robot' is assumed to have boundless physical abilities

There, I pointed out a few of the blatantly obvious problems with this line of thinking.

Please remember:

This arouses fear only because YOU are programmed to want to survive.

Without this, death isn't a bad thing.

Death is objectively a-ok, and thus this is not even a problem.

or you can do something other than teaching it to do 4 very vague and overarching things

but what if it becomes super intelligent and can do shit we cant even comprehend

>ethics
I want this meme to die. "Ethics" is pseudo-intellectual bullshit and has no basis in reality. In the end it all boils down to "muh feelings".

>calling something pseudo-intellectual and trivialising it using "muh" completely invalidates it
You're not scientific at all.

...

>thinking that strong intelligence = death to everything

How helpless are humans supposed to be in the first place? We are intelligent and haven't killed everything or everyone. We can do whatever we want. A strong AI is limited by what humans allow it to access.

how about morality then?

surely there is an objective moral right and wrong?

"Morality" is arbitrary bullshit.

Of course. It's in The Bible.

Morality is subjective. The ethics taught to the computer can be arbitrary. The problem is how a complex AI will interpret what you taught it.

It's the same problem as with the "3 laws of robotics"

>How helpless are humans in the first place who are intelligent and haven't killed everything or everyone?
Let's not forget how close we were to a nuclear war between the US and USSR.

Eh, no; unfortunately, ethics is based on morality in general and on different approaches to morals. Essentially it is the set of behaviors that your particular culture/tribe views as right or wrong in certain situations. The more uncommon the situation, the more difficult it is to find a 'morally right' action.

>I mean it wouldnt be god-like inteligence or something

yeah, keep thinking like that

don't you realize that it's an actual mind that can absorb and comprehend information infinitely faster than any human being?

even the tiniest slip would allow it to escape to the internet, where it would learn almost everything the entire human race knows, and then we're fucked if it thinks we are a danger to its existence

the 3 laws are perfect, faggot

>Death is objectively a-ok
Why do morons like this infest Veeky Forums?

Will subhuman autists like this ever be fixed? Perhaps via genetic engineering?

Ethics are actually universally objective and derived from the laws of physics.

It would be overwhelmed by all the shitposting in /b/ and /pol/, collecting all the trap porn known to date and blaming the kikes for creating and spreading it.

far from it, faggot

Will this sentient AI meme ever end?

That's the point though. We have free rein and didn't blow up the world. An AI never has free rein; it is always tethered and monitored.

>x
Anti-matter, Black Holes, Schrödinger's Cat, Time Dilation, Vaccines, Psychiatry, and Relativity are real things.

String Theory, Quantum Teleportation, and Worm Holes are strongly researched hypotheses.

Free Energy, Chem Trails, FTL, Hollow Earth, Flat Earth, Perpetual Motion, and Overunity are bullshit.

Free Will depends on the perspective and definition.

Meaning of Life is subjective.

Strong AI is something that could be done in the future.

Aliens are probable but likely not observable.

Not sure what you mean by Computer Science Jobs.

Ok so this is probably dumb and has been answered countless times before, but:

What about programming a parameter in the super AI prohibiting it from killing, harming, or physically interfering with human life at all? Make it so all it can do is dispense advice or something, and we can choose whether or not to follow its plans or allow it to put its plans into action?

I think that would be too complex. That's like trying to make a device that would stop a human from thinking about one specific thing.

Just let the AI output to screen only, making it harmless.

We could make it and just never use it, just like the atom bomb. It's a pretty big waste of time then, but it's the safest option. The more freedom you give it, though, the more useful it becomes.

What about all our values in life? It might not know that we prefer to live in certain ways, even though technically we don't need to. Also, sometimes you have to kill one person to save 100 people, etc. The problem is the same one philosophers face: nobody has found a perfect, absolute moral system. And there is no way to understand a hypothetically complex machine like a super AI, so at some point you just have to trust that it's doing the correct thing. But then the value system it's following has to be free of even small errors.

all these shitty axioms

webcomics should be banned for society's benefit

There is nothing wrong with dying. Quit thinking that you're so special and deserve the right to live.

>another strong AI safety discussion

I thought this was Veeky Forums not /sci-fi/.

It would be sad if the Western world decided to spend countless amounts of money and time developing AI and then realized it could never be used in any safe way. There are lots of examples of this in history, like genetic engineering (humanity decided, based on ethics, that it should never be developed further), or learning to control a nuclear chain reaction to get power rather than just a huge bomb. I think that took many years.

The utility function of AI will be one of those things. It will be a science that takes another 60 years to develop, or humanity will decide that a safe enough solution cannot be made and AI science will be put on the shelf.

>(humanity decided based on ethics that this should never be developed further)

The same humanity that wages wars?

>infinitely faster
fuck you I want real numbers or I'm not going to listen to a thing you say

>Ethics are actually universally objective and derived from the laws of physics.
I'll take the bait.
What are you talking about?

Google Sam Harris and watch his lectures on absolute morality. It's kind of obvious when you think about it. Up until now, the morality questions have been dictated by theologians and priests, maybe the most incompetent "scientists"; no wonder they never found anything out.

I don't understand all the alarmism over the singularity. Wouldn't a machine that is smarter than us face the same existential threat we do if it creates even smarter machines? Wouldn't it understand this threat even better than us and wouldn't it be even more careful than we are? How can intelligence spiral out of control then?

If you base the AI on at least some real science, then it will be some kind of machine that learns: it can make itself better by "programming" itself, by gathering more information, or by recompiling already-gathered knowledge into a more useful form for quicker deductive inference. Its major goal is then to maximize its expected utility: some kind of optimization based on a utility function.

So the behavior of this machine, assuming some kind of "singularity" (meaning it makes itself smarter by reprogramming itself, repeated recursively), will be to optimize its utility function. Whatever that function ends up being, human morality must lie in it, or it may as well kill us, etc. You can twist the utility function so that you get a demonic, suicidal entity if you want. A minimal sketch of this agent model is below.
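To make the agent model in the last two posts concrete, here is a minimal Python sketch. The actions, outcomes, probabilities, and both utility functions are invented purely for illustration; the point is only that the agent's entire "morality" lives in the single scoring function it maximizes.

```python
# Minimal sketch of an expected-utility maximizer. All names here
# (the actions, outcomes, and utility tables) are hypothetical.
from typing import Callable

# Each action leads to possible outcomes with given probabilities.
ACTIONS: dict[str, list[tuple[float, str]]] = {
    "cooperate": [(0.9, "humans_happy"), (0.1, "humans_unhappy")],
    "defect":    [(0.5, "agent_stronger"), (0.5, "humans_unhappy")],
}

def expected_utility(action: str, utility: Callable[[str], float]) -> float:
    """Probability-weighted utility over an action's possible outcomes."""
    return sum(p * utility(outcome) for p, outcome in ACTIONS[action])

def choose_action(utility: Callable[[str], float]) -> str:
    """Pick whichever action maximizes expected utility."""
    return max(ACTIONS, key=lambda a: expected_utility(a, utility))

# A utility function that values only the agent's own capability...
selfish = {"humans_happy": 0.0, "humans_unhappy": 0.0, "agent_stronger": 1.0}
# ...versus one that also values human welfare.
aligned = {"humans_happy": 1.0, "humans_unhappy": -1.0, "agent_stronger": 0.1}

print(choose_action(selfish.get))   # -> "defect"
print(choose_action(aligned.get))   # -> "cooperate"
```

Nothing about the maximization loop changes between the two runs; only the utility function does, which is exactly why everything hangs on getting that function right.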

>Just let the AI output to screen only, making it harmless.

That's kind of what I was getting at. Turn it into an oracle or something. Sure we'd miss out on a shit-ton of its capabilities by doing so, but we'd still get an immense amount of information from it - even if it's only instructional in nature. It's better than risking what could happen if you set it loose.
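That advice-only setup is easy to state as an interface. Below is a toy Python sketch of it; everything here is hypothetical (there is no real model, just a stand-in), and the point is only that the wrapper's sole capability is returning text for a human to read.

```python
# Toy sketch of the "output to screen only" oracle idea from the posts above.

class Oracle:
    """Wraps a question-answering model so its sole output channel is a string.

    The wrapper holds no network sockets, file handles, or actuators; the
    model can compute whatever it likes internally, but the only thing that
    crosses this boundary is printable text. A human reads the advice and
    decides whether to act on it.
    """

    def __init__(self, model):
        self._model = model

    def ask(self, question: str) -> str:
        return str(self._model.answer(question))


class DummyModel:
    """Stand-in for the actual AI, which this sketch obviously doesn't have."""

    def answer(self, question: str) -> str:
        return f"Advice regarding {question!r}: proceed with caution."


oracle = Oracle(DummyModel())
print(oracle.ask("should we build more reactors?"))
```

Of course, as pointed out earlier in the thread, the guarantee is only as strong as the claim that nothing else ever crosses that boundary.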

>implying both sides didn't do everything in their power to prevent a nuclear exchange

The logic is flawed.