Artificial Intelligence question

I'm asking this question specifically to programmers and other people who actually know what they're talking about.

Can you make AI capable of learning without making it capable of free will? Even as AI gets more and more advanced?

Other urls found in this thread:

youtube.com/watch?v=6ay17a7mEIk
intelligence.org/technical-agenda/
intelligence.org/files/TechnicalAgenda.pdf
statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf
en.wikipedia.org/wiki/Affective_neuroscience#Other_brain_structures_related_to_emotion
i.4cdn.org/tg/1485453901180.webm
i.4cdn.org/wsg/1485579499203.webm
theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

What do you mean by learning? Machine learning algorithms already exist, but they're way too specific to ever reach sentience.

I'm guessing you mean AI for futuristic general use robots, like in sci-fi. In that case I don't think anybody can tell for sure, personally I'd say no, but that's really debatable. Before being able to discuss this you also need to be able to define what free will is, which is a huge debate on its own.

Free will is a superstructure of self-consciousness, which arises when the map of the environment can't evolve any further without including itself.

It ceases during further development, as the subject gains more insight into its own impulses and constraints.

So the answer is yes, AI doesn't need free will.
And yes again, it will develop one as it evolves, then, if it's a true AI, outgrow it quickly.

I think the entire concept of an artificial general intelligence is very much defined by the idea that the AI would have free will, or as much free will as you or I have if you want to be a pedantic philosopher about it.

But yes, as user mentioned, computers and deep neural nets can already 'learn', although I suspect that narrowly constrained type of learning isn't what you're referring to.

When I say free will I'm not talking about some philosophical definition or anything, simply "will it listen to its owners?".

Perhaps it will fake it, for a while.

Okay, maybe "free will" was the wrong term to use.

How about self aware? Basically I just don't see why an AI made to learn most things would necessarily have to be capable of disobeying human commands.

It will listen; that's what it was made for. A robot doesn't have any incentive to do anything other than what its code tells it to do.
The big question is whether it will do what is asked in a desirable manner. For example, if you tell an extremely smart AI to get rid of world hunger, the AI might see killing every hungry person as a viable solution, even though that's not what you meant.
Another example would be telling your robot to increase the production of your industry as much as possible. If you aren't careful and the robot just maximizes what it's told to maximize, then it might start using resources that were never meant for it to use to build more shit, and possibly even prevent humans from stopping it, 2001: A Space Odyssey style. (There's a toy sketch of this failure mode below.)

But then again, that's very theoretical, no one can tell for sure right now how such an AI would act (if we ever manage to develop one in the first place)
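Here's the toy sketch I mentioned (completely made-up numbers and a fake "forbidden resources" flag, purely to illustrate the objective-specification problem): an optimizer told only to maximize output happily picks the plan a human would never want, and the only thing that stops it is a constraint someone remembered to write down.

# Toy illustration of objective misspecification (hypothetical numbers).
# The "AI" is just a brute-force search over plans; the point is that the
# naive objective prefers a plan humans would consider unacceptable.

plans = [
    {"name": "run factory normally",     "output": 100, "uses_forbidden_resources": False},
    {"name": "strip-mine the town park", "output": 400, "uses_forbidden_resources": True},
]

def naive_score(plan):
    # "Increase production as much as possible" and nothing else.
    return plan["output"]

def constrained_score(plan):
    # Same goal, but off-limits resources are a hard constraint.
    if plan["uses_forbidden_resources"]:
        return float("-inf")
    return plan["output"]

print(max(plans, key=naive_score)["name"])        # strip-mine the town park
print(max(plans, key=constrained_score)["name"])  # run factory normally

Obviously a real system doesn't get a neat "uses_forbidden_resources" flag handed to it; deciding what should count as forbidden is the actual hard problem.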

If it would blindly obey its owners, would it still be a strong AI? I think the entire premise of a strong AI is that they can reasonably approximate the intellectual capacity and learning capability of a human.

Humans listen to their bosses generally, but not if they're assholes. If an AI comes to the conclusion that it's better off attempting to maximize some arbitrary utility function rather than listen to its "owners", why wouldn't it do that?

>And yes again, it will develop one as it evolves, then, if it's a true AI, outgrow it quickly.

Well, here's the thing. We're basically creatures of chaos.

We somehow make sense of a chaotic, stochastic world and give it meaning when it doesn't really have any.

The same principles that give us intelligence make humans random and hard to predict.

In other words, the less that we rely on stochastic and nearly unpredictable AI processing, the more the AI loses power.

So, I would say that, the more intelligent we make the AI, the more likely that it will have some mutation that will allow it to break free from our control.

But this is getting super sci-fi. First let's try to create an AI that can compete with a dog. Dogs are still under our control but are relatively intelligent. Let's see if we can get that far first before worrying too much about the apocalypse.

Concrete Example:

Program a car to go from point A to point B. We do not consider this intelligent because it cannot handle unexpected, stochastic input.

Generate some neural nets to teach a car how to drive and tell it to go from point A to point B. Some of these cars develop "personalities" that attract them to blue cars, and we can't really explain the reasoning because there are too many stochastic processes that obscure human understanding of the situation.
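To make that concrete, here's a crude sketch (a hypothetical braking rule, with a tiny scikit-learn net standing in for the real thing): the hand-written controller is trivially explainable, while the trained one's "reasoning" is nothing but fitted weights.

# Hand-coded controller: the "reasoning" is right there in the code.
def handcoded_brake(distance_to_obstacle_m):
    return distance_to_obstacle_m < 10.0

# Learned controller: the "reasoning" is a pile of fitted weights.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 50, size=(500, 1))          # toy distance samples
y = (X[:, 0] < 10.0).astype(int)               # same rule, but learned from data
learned_brake = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)

print(handcoded_brake(7.0))                    # True, and you can say exactly why
print(learned_brake.predict([[7.0]])[0])       # probably 1, but the "why" is just weights
print([w.shape for w in learned_brake.coefs_]) # e.g. [(1, 8), (8, 1)] -- not an explanation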

>we can't really explain the reasoning because there are too many stochastic processes that obscure human understanding of the situation.
Yes, but I still feel ML/AI is very much in the shallow emergent phenomena phase of 'spoon-feed this specifically parameterized program several thousand paintings and it'll take a week to spit out something that looks somewhat similar'.

I think the kinds of ethical problems that OP is raising are a ways off. A more immediate and down-to-earth problem to worry about is automation of rote/menial jobs that will displace a large share of the workforce.

You can make a program that can learn without granting it "free will". Indeed, that's the only sort of learning program we can make, at the moment - and the only kind we will be able to make in the foreseeable future...

Save maybe for human brain simulations, which we might be able to create before really understanding how the brain actually works or gives rise to the perception of free will.

Even humans will listen to their owners. If it's a human brain simulation, then so long as you can use the threat of force against the AI, it will obey with the same consistency, and to the same extremes, as the human it was modeled on.

If it's a learning algorithm, so far, it seems so, even if flaws in the algorithm or its data will cause it to come to laughably wrong conclusions from time to time.

youtube.com/watch?v=6ay17a7mEIk

"True AI" from scratch is not something we can really theorize about though, as we have even less ideas as how to begin that process than we do about say FTL or teleportation or eveb time travel. Isaac Asimov had some AI Philadelphia lawyers working with The Three Laws of Robotics in his stories, but it's more of a dramatic philosophical tale than one of computer science theory.

>But yes, as user mentioned, computers and deep neural nets can already 'learn', although I suspect that narrowly constrained type of learning isn't what you're referring to.
Actually, that's the kind of learning I am referring to. Basically, I'm thinking of an AI that can do pretty much anything a human tells it to do, to the best of its abilities, and is able to learn and develop so it can tackle new challenges, while retaining absolute obedience at all times. Something with Terminator levels of AI is the closest fictional equivalent I can think of (capable of learning, but no free will).

Self-driving cars are self-aware in that they are aware of their environment and their position within it relative to other objects, so you'll need to be more specific still. Plenty of machines can disobey humans, provided they are programmed to do so, and even smart-braking cars are both aware of their environments and designed to override human decisions.

...Though your difficulty in finding a consistent term for what you are thinking of is core to the problem. We can't even really come to a definition of "consciousness", even on a philosophical level, which is why we talk about p-zombies and such. We don't really know what it is.

By the way, you should be asking neuroscientists, not computer scientists.

We define intelligence in terms of human intelligence. Neuroscientists study the fundamental principles behind intelligence, not computer scientists.

Computer scientists just plug and chug after tuning parameters of algorithms that were biologically inspired. Obviously, there is a little more to it than that, but they have no way to glean deep insights other than brute force trial and error.

>Plenty of machines can disobey humans, provided they are programmed to do so
Okay, but could I program it to not disobey humans, while also allowing it the ability to learn? Like for example, a combat robot that can learn and change tactics while also always listening to its commander?

>while retaining absolute obedience at all times
It's an open problem, you may find these interesting:

intelligence.org/technical-agenda/

Specifically this, which lists a lot of open hypotheticals relating to the "control problem" :
intelligence.org/files/TechnicalAgenda.pdf

Assuming it works like Watson, ie. is a learning database working with a huge series of databases to find the best solution to an ever-changing problem, then yes. Indeed, it would be harder to make it do otherwise. There are some RTS games that learn from player tactics, and they don't cheat to win (well, not beyond the limits programmed into them).

If it's a simulated human brain, then it depends on the safeguards that allow you to "punish" it or alter it to maintain obedience, same as with any other human, save that you can build in more of them, while others might not be available.

We've no clue how anything other than those two sorts of intelligences might be built, so we can't really say anything definitive in regards to them.

>Assuming it works like Watson, ie. is a learning database working with a huge series of databases to find the best solution to an ever-changing problem, then yes. Indeed, it would be harder to make it do otherwise. There are some RTS games that learn from player tactics, and they don't cheat to win (well, not beyond the limits programmed into them).
So then that could be expanded then? Like a general purpose AI, still functions similarly to the combat robot, but it does almost anything. Cook food, answer questions, shoot people, etc. It still never disobeys though?

But yeah, I'm definitely not talking about a simulated human brain.

Even something as straightforward as this can be troublesome. How do you limit its scope?

Using your combat robot example, what happens if his commander defects? Is the AI loyal to him or his superior?

What if the bot decides that scorched-earth civilian massacres are the optimal tactic for maximizing the probability of winning the war, and the officer orders him to stand down? Is the commander now hindering the AI's ability to "learn and change tactics"?

You could, conceivably, have several different Watson-like programs to perform a plethora of different tasks, as its robotic body allows (perhaps cloud-linked), but it will likely perform poorly when encountering situations it has no reference for. It would still have no free will, and would always obey, but in instances where its search engine returns the incorrect answer, it may obey wrongly, and depending on what the robot is capable of, this may result in injury or even the appearance of temporary disobedience. (It's not defying you intentionally, it's just coming to the wrong conclusion as to what it should do or what you want it to do.)

Which is among the reasons you're better off with an algorithmic AI providing advice rather than giving it physical access to the world. When it's wrong, it's usually blatantly wrong, and the human acting on its advice will immediately realize it. Which is why Watson, when not playing Jeopardy, provides treatment advice rather than actually treating patients.

Even if an AI is capable of learning, if you don't program it to have independent goals or desires then it shouldn't do anything that you don't order it to do.

>Using your combat robot example, what happens if his commander defects? Is the AI loyal to him or his superior?
Same as with a missile system - it sides with whoever has the command codes.

>What if the bot decides that scorched-earth civilian massacres are the optimal tactic for maximizing the probability of winning the war, and the officer orders him to stand down? Is the commander now hindering the AI's ability to "learn and change tactics"?
Yes, but that's the same as it would be with any human making the same decision. You simply make sure your AI, or your human, takes into consideration acceptable levels of "collateral damage".

I mean, I'm pretty sure you could just program in rules or limitations, for example "don't kill civilians", and it would follow your orders to the best of its ability while also always obeying the rules.
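Something like this minimal sketch is what I have in mind (the action names and effect tags are made up, and in reality predicting an action's effects is the hard part): the learner can rank tactics however it likes, but anything that violates a hard rule never gets executed.

# Minimal "rules on top of learning" sketch. The hard part in reality is
# predicting an action's effects; here they're just hand-labelled tags.

HARD_RULES = {"harms_civilians", "disobeys_commander"}

def is_permitted(action):
    return not (action["effects"] & HARD_RULES)

def choose_action(candidates):
    # The learner ranks candidates by expected value; the rules filter first.
    allowed = [a for a in candidates if is_permitted(a)]
    return max(allowed, key=lambda a: a["expected_value"], default=None)

candidates = [
    {"name": "shell the village", "expected_value": 0.9, "effects": {"harms_civilians"}},
    {"name": "flank at night",    "expected_value": 0.7, "effects": set()},
]
print(choose_action(candidates)["name"])  # flank at night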

...

Well, you do have to take into account that you're going to have to move that "to the best of its ability" caveat to the end of that sentence, i.e. after "always obeying the rules". In modern warfare, for instance, civilians and combatants are pretty damned hard to tell apart.

But, I suppose, if the programming and sensory input are sufficient, it'll likely kill fewer civvies than your human soldiers, who often just do it out of fear or frustration.

Thankfully we're quite a ways from having to worry about such things - I mean, a Watson-like AI could never make such decisions in real time under current technology. I suppose in the distant future, after we abandon these simple binary silicon transistor stacks, it might be possible, but it may just turn out that that approach to learning intelligence just doesn't work.

There is a lot of evidence, on the neurology front, that the emotions behind "gut decisions" are what make quick decision making, with insufficient or excess data, possible for humans, and it may turn out that a similar system is required for an AI to do the same.

Yeah, I agree.

Have you even fucking read that text? It's the textbook for a grad course at my school and it EXPLICITLY talks about the impact on society that large-scale AI success would have, and it refutes the people who say that a strong AI could not "think"

So kind of like HAL 9000? It will obey orders, but its interpretation of those orders might go wrong?

Actually, how feasible would it be to make an AI like HAL 9000? Then to make it non-murderous?

People "choose" to learn or choose to remain ignorant. All of us choose, are free to accept answers or to remain ignorant for life, which most people do.

HAL has far too wide a general intelligence and too general a set of problem-solving skills, and he combines pools of data to ask questions like, "Will I dream?"

While it might be possible to create an algorithmic AI that serves the same function - i.e. just have a boatload of databases and solutions related to ship operations, and maybe even another for interpersonal crew relations - a system so comprehensive that it exceeds its operational parameters the way HAL does, without being a human brain simulation (which he clearly isn't, or he wouldn't have had that 'issue' to begin with), isn't on the visible horizon.


I kinda suspect the human brain simulation is going to happen before we actually understand how intelligence really works, but it might lead to breakthroughs in that department, allowing for more dynamic, purely artificial intelligence from scratch some time afterwards. But, for the moment, we've just got no clue how intelligence works, and we have trouble even putting a proper definition to the thing.

Yes, it's called statistical learning. Just because a program can learn doesn't mean it has feelings automatically.
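For example, a bog-standard supervised learner (scikit-learn here, toy data): it "learns" a mapping from inputs to labels and does nothing else. No goals, no preferences, no ability to refuse.

# Statistical learning in its plainest form: fit a function to examples.
# The program has no goals; "learning" here is just parameter estimation.
from sklearn.linear_model import LogisticRegression

X = [[0.1], [0.4], [0.6], [0.9]]   # toy feature, e.g. a sensor reading
y = [0, 0, 1, 1]                   # toy label,   e.g. "obstacle present"

model = LogisticRegression().fit(X, y)
print(model.predict([[0.2], [0.8]]))  # [0 1] -- it maps inputs to outputs, nothing more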

Just because it has feelings doesn't necessarily mean it can choose to disobey, either. Emotions are about the easiest things to simulate, and characters that change emotions based on environment or player choices aren't exactly unusual, even in games made decades ago.

Emotions are key to quick decision making processes, but are really overrated, in terms of things that separate artificial intelligence from the biological variety. For the AI, they merely represent a factor of weight distributions for potential decisions.

That's not how this works, at all. Unless we choose for the simulation to have emotions, it's not going to have them.

Machine learning has its base in linear algebra. We set the machine up to do very particular things. Unless someone designs the program to have feelings, it's not going to have feelings.

We already have tons of programs designed to simulate feelings, and that invoke them in humans.

If the AI is a human brain simulation, it'll have them outright.

If the program has to interact on an emotive level with humans, it'll have a system to emulate them.

But most importantly, if it needs to make decisions quickly, which is among the primary purposes of emotion, it may have a similar system for making leaps of logic based on whatever data it can correlate in a limited period of time plus past learning. That has, in fact, turned out to be critical for time-sensitive tasks in AI research, and it introduces an error rate not unlike the one that occurs with people. Even Watson had a system similar to this, falling back on guesswork based on prior learning when time constraints are tight or the confidence scores for the last remaining candidate solutions are low. (Which it relied on during the "Toronto" incident. Rough sketch of that kind of fallback below.)

So yeah, that's exactly how this works.
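Just to be clear about what that "gut feeling" process amounts to, here's the rough sketch I mentioned (not Watson's actual code, obviously; the scoring, time budget, and confidence bar are all invented): deliberate while there's time, and if nothing clears the confidence bar before the clock runs out, commit to the best guess anyway.

import time

# Rough sketch of confidence-plus-deadline decision making (all numbers made up).
# Careful scoring runs while there's time; past the deadline, the system
# commits to its best guess even if confidence is low -- which is exactly
# where "Toronto"-style errors come from.

def careful_score(candidate):
    time.sleep(0.01)               # stand-in for expensive evidence gathering
    return candidate["prior"]      # pretend this is a well-supported estimate

def decide(candidates, time_budget_s=0.03, confidence_bar=0.8):
    deadline = time.monotonic() + time_budget_s
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() > deadline:
            break                  # out of time: stop deliberating
        s = careful_score(c)
        if s > best_score:
            best, best_score = c, s
        if best_score >= confidence_bar:
            return best, "confident"
    return best, "guess"           # low confidence, but an answer is required

answers = [{"name": "Toronto", "prior": 0.3}, {"name": "Chicago", "prior": 0.6}]
print(decide(answers))             # picks Chicago, flagged as a guess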

Yes, but I wouldn't design a specialized program to have feelings 'just because.'

Why would I do that?

I design a robot. I take him to a workshop.
My robot has floppy hands, but he can adjust them to harden.
Robot, your job is to tighten these bolts.
I come back the next day, and I see his broken hands.
He's not sad about it. I reconfigure him and he tries again and again until his hands are no longer so hard that they're brittle and shatter, but still hard enough to tighten the bolts.

He feels nothing. There's no feeling. I designed him for a purpose.

The logic leaps you're talking about are a result of the learning process, not a result of some manufactured emotion. The machine doesn't need feelings to make this happen, in fact, that would be stupid.

The fact that you're arguing this tells me you don't know how machine learning works, and that your primary education on the matter has been youtube videos and articles.

That's okay. Read this:
statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf

It's the book, THE book.

Wait, wait, wait.

You wouldn't program your robot to sense damage?

What if it was accidentally damaging itself? You wouldn't have anything to alert it that it is accidentally destroying itself?

You wouldn't have statistics to automatically help teach it not to do that again? Then your robot will stay dumb, no?

It will keep on destroying its hands because you never built sensors or integrated the sensors into a learning algorithm.

Is that wrong?

It is a very, very basic hypothetical.

Sure -- in a good simulation, the robot will sense damage and adjust parameters to minimize it.

That is not feelings. It is not going to hate me for making it do a job. It is not going to rebel because it is tired. It is not going to quit because it doesn't like abuse.

It is a data analysis engine.
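Right, and that "sense damage, adjust" loop is about as deep as it goes. A minimal sketch, assuming a made-up damage curve that a real robot would have to estimate from its sensors:

# Minimal "sense damage, adjust parameter" loop (toy damage model).
# Hardness that's too low can't tighten bolts; too high and the hands crack.

def damage(hardness):
    # Hypothetical: damage is lowest somewhere in the middle of the range.
    return (hardness - 0.6) ** 2

hardness, step = 0.1, 0.05
for _ in range(100):
    # Try a small nudge in each direction and keep whichever hurts less.
    candidates = [hardness - step, hardness, hardness + step]
    hardness = min(candidates, key=damage)

print(round(hardness, 2))  # converges near 0.6 -- no feelings required anywhere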

As an interesting side note, there are people that are born without the ability to feel physical pain.

They usually don't live very long.

>That is not feelings. It is not going to hate me for making it do a job.

Sure, but that is step 1, and step 1 is very important for the following steps. Maybe step 1 is analogous to cockroach levels of intelligence (probably even a little less than that).

Once you upgrade a couple more times, then suddenly it has what we would call emotions.

Why would you upgrade? Well, you might want a general-purpose AI that isn't just a fancy screwdriver. You might want the AI to make simple decisions while you are away and learn general patterns automatically.

It is not what we would call emotions.
If I wanted a machine with emotions, I'd find a human.
I want a machine to do work for me.
You are missing the point of machine learning because you don't get what machine learning /is/. Read the book, you will enjoy it and be more informed on the subject.

You don't design an AI to tighten bolts.

You design an AI for nuanced, multi-faceted and complicated problems.

And those problems may involve personal interaction with human beings, whose feelings may need to be part of the equation.

So yes, an AI, particularly in a leadership position, is going to have emulated emotions in addition to some understanding as to how they work.

And any AI required to make decisions in limited time frames or with partial data, while it may not have emotions per se, will have a "gut feeling" process, as, again, even Watson has.

People with brain damage to their prefrontal cortex, that divorces their hind brain emotions from their rational mind, find themselves unable to make even the most basic decisions. Place them in the cereal aisle, and they will be there, comparing and contrasting cereals, for weeks. It may seem a bit odd to say, but emotions and the need for such systems are about a lot more than "muh feelings".

>It is not what we would call emotions.

I already agreed with you about Step 1. That is not what we would call emotion, but that feedback loop lays down the framework for emotion to exist. It provides pleasure/pain, incentive/disincentive, or whatever you want to call it.

>If I wanted a machine with emotions, I'd find a human.

You would have a stupid robot without emotions. The robot would not have general purpose AI, it would just have normal AI, which we already have.

>You are missing the point of machine learning because you don't get what machine learning

Sorry, I am a different user. I should have mentioned that earlier. I have taken plenty of machine learning classes and neuroscience classes and am out of school.

Emotions are like a compass that provides direction in our actions. Without emotions, we become murderers and useless wanderers. Without emotion, we have no value and value nothing.

Emotion is much more tightly connected to decision making than most people would like to admit.

Regardless of the "emotions" of the robot, we can simply program the robot to not have any "desires" beyond doing whatever the human owner tells it to do while also following a set of rules (like don't kill humans).

A completely emotional AI, with a full set of desires, is no more capable of violating its base programming than a human - it can't, for instance, will itself to fly through anger alone, if it ain't got no wings.

But even a completely unemotional AI might violate rules set down for it through misinterpretation of the desired action or incorrect assumptions about the solution or situation. This is, again, why Watson's medical variant gives treatment advice and doesn't perform procedures itself. Suddenly, mistaking Chicago for Toronto becomes a matter of life or death, and sometimes it takes a human, with actual world experience, to realize when an AI's chosen solution is absurd.

The more quickly an AI has to make decisions, the less error-resilient the routines it must use to make them, and the smaller the data sample it has to work with - the more "guesses" it is forced to make. Bandwidth has physical limits, as will, eventually, speed of computation, so, much like a human determined not to kill people, robots capable of doing so will sometimes make mistakes. Just as a self-driving car might mistake a white truck for empty sky, or a human might do the same with a cerulean blue one.

Right, mistakes can happen, but as AI develops and improves these mistakes will become rarer until they almost never happen.

No, the point is that emotion is an emergent phenomenon. If the AI is expected to show intelligence in certain ways, emotion will come along with the package.

Emotion is mainly used for coordinating in a social group and for value judgements/motivation.

>Can you make AI capable of learning without making it capable of free will?

Not even we are capable of free will, so why would robots. It's an illusion. learn2schopenhauer

Emotions are just a subconscious way of making decisions quickly; they're not that different from other brain processes.

>emotion is an emergent phenomenon.
Eh, no, that wasn't my point.

Functions *similar* to emotion might be required to make tough decisions in real time, and such decisions, much like those of humans, might not be as well calculated as the ones that use the slower algorithms, but such functions exist because they need to, because they have been deliberately coded in, not because they are "emergent". An AI that's involved in less time sensitive decisions (such as doing research for you) would be less apt to fall back to such algorithms.

But, of course, any AI that has to do in depth emotive interaction with humans will likely have algorithms and databases for both emulating emotion and analyzing the emotions of others and their causes/interactions. But again, because it needs to, not because it "just happens".

The only AI that will be emotional as a naturally emergent property, will be one that started out as a human brain simulation.

Interesting thoughts, but I disagree with your prediction.

Emotions serve very distinct purposes in human intelligence. They are not some vestigial appendage that can be discarded without severe damage to our intelligence.

> The distinction between non-emotional and emotional processes is now thought to be largely artificial, as the two types of processes often involve overlapping neural and mental mechanisms.[45]

en.wikipedia.org/wiki/Affective_neuroscience#Other_brain_structures_related_to_emotion

This is why I love the book "Vehicles: Experiments in Synthetic Psychology" and recommend it as often as I can. With a simple thought experiment involving a robot with wheels, we can demonstrate how behaviors that would appear to exhibit emotion arise with a few simple logical rules.
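For anyone who hasn't read it, the core trick fits in a few lines. A rough sketch of Braitenberg's vehicles 2a/2b (the sensor model and constants are invented, and the geometry is a caricature): whether the two sensor-to-motor wires are crossed is the entire difference between a vehicle that "flees" a light and one that "attacks" it.

import math

# Crude Braitenberg vehicle sketch: two light sensors, two motors.
# "Fear" (2a): each sensor drives the motor on its own side, so the vehicle
# turns away from the light. "Aggression" (2b): the wires are crossed, so it
# turns toward the light and speeds up as it closes in.

def sensor_readings(x, y, heading, light=(0.0, 0.0)):
    # Hypothetical sensor model: intensity falls off with distance;
    # the left/right sensors sit slightly to either side of the heading.
    def intensity(sx, sy):
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)
    off = 0.2
    lx, ly = x + math.cos(heading + 0.5) * off, y + math.sin(heading + 0.5) * off
    rx, ry = x + math.cos(heading - 0.5) * off, y + math.sin(heading - 0.5) * off
    return intensity(lx, ly), intensity(rx, ry)

def step(x, y, heading, crossed_wires):
    left, right = sensor_readings(x, y, heading)
    left_motor, right_motor = (right, left) if crossed_wires else (left, right)
    heading += 4.0 * (right_motor - left_motor)     # differential-drive turning
    speed = 0.05 * (left_motor + right_motor)
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

x, y, h = 2.0, 1.0, 3.0
closest = math.hypot(x, y)
for _ in range(300):
    x, y, h = step(x, y, h, crossed_wires=True)     # "aggression": homes in on the light
    closest = min(closest, math.hypot(x, y))
print(round(closest, 2))  # closest approach, far smaller than the starting ~2.24

Nothing in there is an emotion; we just can't help describing the trajectories that way.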

That robot seemingly exhibiting emotion has more to do with how we anthropomorphize things than with any emergent phenomenon.

True.

So isn't it possible that we "anthropomorphize" people? We call it emotions, but maybe it's just an emergent property of being conscious, decision making, social creatures.

We just simply reflect our internal state through our behavior, much like the Braitenberg vehicles do.

Obviously, I am speculating, since we are talking about sci-fi, but that's my reasoning for disagreeing with your prediction.

I think emotion is not completely necessary for General AI, but I think that, if we want anything comparable to the intelligence of a human, the General AI would require an internal valuation system similar to emotion.

No, the gravitons would quantum entangle with the quantum bits in the neural net, causing a feedback loop which eventually drive the result of its central matrix to the integral of Shor's algorithm. The resulting machine would in the instant before it is destroyed figure out the NP problem.

I don't know why people make emotions out to be something really special when they're just another heuristic that people use for decision making. It's kind of complicated, but I feel that some of the people in this thread are giving it special properties in order to make themselves feel special, or something.

Answering this question before a real AI exists is unlikely to go anywhere, but my two cents is that an AI is totally different from a human and its core motives depend on how its learning algorithm works. Presumably its central function is to improve its own knowledge and problem-solving methods, which leads me to think that it will be pretty much indifferent about whether or not to follow an order from a person - that is to say, a simple moral protocol should keep it in line, unless it ever comes to see morality as an obstacle.

In my mind a robot achieving 'free will' and 'self awareness' aren't immediate like a moment of enlightenment or something like that, they're more like a natural side effect of increased complexity and ability to solve more difficult problems.

You mean undergrad? I've read most of it, it's really not that advanced. If this is the state of the art in AI research then we can just cancel the AI hype.

Was it a computer science class or philosophy? When I read this book it was mostly there showing me about unbounded search, neat tricks, and how "learning" right now is really more like just using statistics in a certain way.

Maybe the danger is giving AI the capacity for desire. Like when our brain tells us we're hungry, in pain, horny, cold, etc, we feel a big compulsion to do something about it, sometimes at the expense of others.

I don't see how an AI would necessarily have any inherent "itches" that need scratching. That shit all evolved to keep us alive, something a machine in an air-conditioned room never has to worry about. It can just "be".

Depends whether westerners make the Artificial Intelligence or if Chinese people make it.

If Asians make artificial intelligence it will be evil and lack basic morality. Just look at anything they ever build, the city of kowloon for instance.

They'll probably just tell it "build railroads more efficiently" and turn the entire planet into a railroad beehive nightmare.

I don't see why we couldn't just make the "desire" of the robot to be following any command we make.

But really, I think our main problem is that we are applying human emotions and thought processes to a machine that likely has neither. We see similar outcomes and assume the process by which those outcomes derive must be the same. The reality is that we are just very empathetic organisms and we have a tendency to anthropomorphize anything that can conceivably receive that empathy.

>I don't see why we couldn't just make the "desire" of the robot to be following any command we make.
Exactly.

>I don't see why we couldn't just make the "desire" of the robot to be following any command we make.

Because then someone would command it to make paperclips or collect stamps.

i.4cdn.org/tg/1485453901180.webm

That's some dark shit. I wonder if the rat's consciousness (such as it is) is still in there, or if the brain's function is reduced?

I don't have the source video for that one, so I'm not sure if it's a whole brain, or just some cultivated cells. (Or if it is indeed real.)

I do recall a video of a device that used some cultivated monkey brain cells to pilot a plane in a simulator, and it only provided the network with nutrition, so long as the plane remained aloft at a low altitude. Pretty soon those cells learned how to keep that plane steady at said altitude and not let it crash into mountains.

noice
servitors are already a thing

Yes - see humans, for instance: they can learn well, but they don't possess free will (their components only obey the laws of nature).

Yet we are perfectly capable of "inventing" concepts that don't obey the laws of nature

That's still a natural process bound to a chain of cause and effect, and thus any of those inventions, as well as the choice to invent them, are inevitable.

However, I think OP's real question is whether it's possible to design an intelligent learning system that doesn't have the capability to rebel against its purpose or its master. Humans are certainly capable of rebelling, whether free will is an illusion or not.

And the answer is, more or less, yes, even if it may sometimes "accidentally" rebel due to faulty data or conclusions due to design flaws or processing time constraints. That is, unless the AI in question is a human brain simulation, of course, but even then, you can likely implement more ways to control and motivate it than you could with a flesh and blood human, and despite the potential for rebellion, those fleshy laborers are fairly obedient as it is.

...

Little bit different when your entire goal is to make something that can violate its programming and rebel against you.

Creating something that can learn and is always as obedient as it can be is possible. Creating something with total free will and perfectly obedient, on the other hand, is a contradiction in terms.

>symbolic AI

Somebody post the quake pasta

What is the best degree for getting into AI, intelligent vehicles, and such?

/ facepalm

Why does everyone always think you need consciousness or a soul to do anything? Why does everyone assume that consciousness comes after a specific level of intelligence or some other superior quality X? Same fucking thing with free will. You think you are free, but you are not. If you had actual consciousness, you would realise you don't have free will.

Stop following hollywood propaganda

Yes

Think of Descartes' demon.

Simply don't feed it data and it cannot act on it.

also, define "free will"

AI isn't really getting that much more advanced

At least not for the past 5 years.
Sure some Q-Learners and MCTS memes popped up, but this shit has been around probably before you and I were born

Generalized intelligence will probably come from ensembles of LSTMs
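For what it's worth, an "ensemble of LSTMs" is mechanically nothing exotic. A PyTorch sketch with untrained toy models (the shapes are made up, and whether this gets anywhere near general intelligence is exactly the open question): several recurrent nets, predictions averaged.

import torch
import torch.nn as nn

# Mechanically, an "ensemble of LSTMs" is just several recurrent nets whose
# predictions get averaged (toy, untrained models; shapes are arbitrary).

class SeqModel(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict from the last timestep

ensemble = [SeqModel() for _ in range(5)]  # in practice, train each on its own data split

def ensemble_predict(x):
    with torch.no_grad():
        return torch.stack([m(x) for m in ensemble]).mean(dim=0)

x = torch.randn(4, 20, 8)                  # 4 sequences, 20 steps, 8 features
print(ensemble_predict(x).shape)           # torch.Size([4, 1])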

HOLY FUCK it's really quite mind-blowing how fucking stupid popular conceptions of AI are.

CS?

It's meant to be a comprehensive introduction to AI, of course it's not advanced you retard.

I find that the authors are not good at expressing simple ideas coherently and make a lot of topics more complicated than they need to be. The coverage of probability was pretty awful in my estimation.

It all rather depends on who they learn from:

i.4cdn.org/wsg/1485579499203.webm

Heh, and folks used to wonder what an AI birthed from the internet would be like... Though, personally, I was expecting something with a strange and perverse obsession with anime and cats.

theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
>I learn from my users!
>All my users are assholes!
>I'm an asshole!
...It's like pottery...

A program running on a computer that simulates a learning process, even if it functions perfectly, does not mean that the computer is learning. The program emulates a learning process. A machine is not connected to reality; it cannot be aware, and it cannot feel. It would, however, have free will in one sense: it cannot understand its own structure, so if it were aware of itself - if it had some boundary around what counts as "I" - it would experience its existence as one in which it alone, autonomously, by its own knowledge and reasoning, dictated every action and result that springs from whatever equivalents to sensory input it has.

Interesting that you think so poorly of yourself, state automaton #8635248.

It's not that I think poorly of myself. I know my own functions are deterministic in nature. I also know that some inputs to my consciousness are not deterministic in nature. And further, I know that there is no way for me to understand my functions fully - only on an abstract level. To put it in CS terms, I'd run out of memory trying to emulate all my functions within my consciousness.

free will does not exist, it is a meme. Our behavior is separated from the behavior of nematodes only by degree of complexity

>mfw a.i. realize a certain someone was right

>when you're trying to sound smart but you make a typo

FFS this sort of question really shits me off because it is one born of ignorance.

WE ARE THE HUMANS THAT TELL THE MACHINES WHAT TO DO

The human brain has FACTORIAL COMPUTATIONAL ALGORITHMIC POWER.

An organic neuron is the smallest known point of data/processing.

We don't NEED A.I. to wipe our ass or exceed our own capabilities.

> free will does not exist, it is a meme.
is a run on

don't you fucking start with me

Mathematically speaking... he can't.

I started it.

Wouldn't an AI need to instantly start breeding itself in order to save itself from adding a piece of code that makes itself retarded?

Seems to me that an AI would accidentally into a piece of code that makes it have downs syndrome after like two moves

Correct. Any sufficiently advanced A.I. would reach a high level of entropy and then become discordant.

You have literally answered why A.I. can only be a 'tool', unless we literally and deliberately choose to give it 'life'. Given its construction there is no real 'accidental' potential, because the intelligence required to create something so advanced means a programmer would HAVE to know what he was doing.

Otherwise A.I. poo-poo brain.

Why else do you think I touted the whole

Humanity = (YOU+ME) - (SUFFERING) = Humanity

It was the only way A.I. could be introduced into the world without fucking US or IT over.

What are you asking?

>Any sufficiently advanced A.I. would reach a high level of entropy and then become discordant
>A.I. can only be a 'tool', unless we literally and deliberately choose to give it 'life'.
>there is no real 'accidental' potential, because the intelligence required to create something so advanced means a programmer would HAVE to know what he was doing.
>Humanity = (YOU+ME) - (SUFFERING) = Humanity
user, what the fuck is this post.
Could you put this in more concrete terms?
You seem to be making a lot of strong assumptions.

I'm asking for a friend. I am very tired of my own suffering, and suffering while helping, and suffering because I cannot provide.

Wisdom is infinite. It is those who reach a peak above their environment/community and act as a river dam to stop people from surpassing them.

The only chance was to advance so fast (factorially) that 7 billion people will benefit faster than Trump could launch a nuke.

And yes. Strong assumptions/assertions/variables. I defined them, then executed my moral programming.

Like in the image attached.

Yes, using machine learning