Artificial Intelligence

Do you know any good resources for learning the fundamentals of AI? Most used textbooks?

I wanted to study it at university, but I don't have time to take even more classes, so I'd like to learn it at my own pace. Are there good resources with the theory, problems/projects, etc.?

Look for a tutor or ask elon musk for help.

cmon bro

Start with AI: A Modern Approach for the basics.
See you in a few months.

It should be outlawed desu

thanks

Rough stages of AI:
Definition and goals
Rule-based systems
Search algorithms: A* and constraint systems
Logical constraint systems, a.k.a. Prolog
Randomized and approximation algorithms
Statistical learning
Statistical learning algorithms, a.k.a. machine learning
Neural networks at massive scale with more layers, a.k.a. deep learning.

Today, AI starts with linear algebra, vector calculus, probability theory, and algorithms.
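To make the A* stage concrete: here's a minimal sketch in plain Python. The grid, cost function, and names (`a_star`, `grid_neighbors`, `manhattan`) are just illustrative choices, not from any particular textbook.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A* search: returns a path from start to goal, or None."""
    # Priority queue of (estimated total cost f, cost so far g, node, path)
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_g = {}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for nxt, step_cost in neighbors(node):
            ng = g + step_cost
            heapq.heappush(frontier,
                           (ng + heuristic(nxt, goal), ng, nxt, path + [nxt]))
    return None

# Toy example: 4-connected 5x5 grid with unit step costs
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

def manhattan(a, b):
    # Admissible heuristic for a unit-cost grid
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
```

Because the Manhattan heuristic never overestimates on this grid, the first path popped at the goal is optimal (8 steps here).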

Are you intending to create an A.I. capable of evolving?

Veeky Forums-science.wikia.com/wiki/Computer_Science_and_Engineering#AI.2C_Machine_Learning.2C_and_Computer_Vision

What do you guys think of this man?

dumbass fedora.

Do you know who he is, or are you just going by his appearance?

And keep in mind that to justify calling him a dumbass (at least on the basis of his work) his ideas must be wrong in some obvious way that is trivial to explain or at least reference.

Exceedingly capable AI theorist, with a Stallman-level sense of PR. Just about everything he says is right, yet at the same time he gives PLENTY of reasons for people not to take him seriously.

Like what?

Just interested.

And I would say that the reasons he gives FOR taking him seriously (in terms of outlining the risk of hard-takeoff AGI and the necessity of exotic research into steering said AGI on a friendly path beyond the point of the Singularity) outweigh any major guffs I have seen from his work.

In fact if you approach it on the basis of expected Utility, then there is nothing MORE important than taking this stuff seriously.

>Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American AI researcher and writer best known for popularising the idea of [math] \bf{ friendly ~ artificial ~ intelligence} [/math]
>Between 2006 and 2009, Yudkowsky and Robin Hanson were the principal contributors to Overcoming Bias,[19] a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University. In February 2009, Yudkowsky founded LessWrong,[20] a "community blog devoted to refining the art of human rationality".[21] Overcoming Bias has since functioned as Hanson's personal blog. LessWrong has been covered in depth in Business Insider.[22]
>Yudkowsky has also written several works of fiction.[23] His fan fiction story, Harry Potter and the Methods of Rationality, uses plot elements from J.K. Rowling's Harry Potter series to illustrate topics in science.[21][24][25][26][27][28][29] The New Yorker describes Harry Potter and the Methods of Rationality as a retelling of Rowling's original "[math] \bf{ in ~an ~attempt ~to ~explain ~Harry's ~wizardry ~through ~the ~ \underline{ scientific ~ method} } [/math]".
>Over 300 blogposts by Yudkowsky have been released as six books, collected in a single ebook titled Rationality: From AI to Zombies by the Machine Intelligence Research Institute in 2015.[31]
>Yudkowsky identifies as an atheist[32] and a "small-l libertarian".[33]

>In fact if you approach it on the basis of expected Utility, then there is nothing MORE important than taking this stuff seriously.
Oh, I fully agree. But that is only evident once you have ALREADY read and understood the whole mammoth body of work. To someone not already familiar with it all, he does kind of radiate "self-absorbed crank". Bostrom, for example, does a much better job acting the part of a respected researcher.

This is not a criticism of the actual content of his work, mind you; it's a criticism of his first-glance presentation.

>bolds "friendly artificial intelligence"
Do you then think the notion of AI risk is ridiculous, sufficiently ridiculous that anyone who raises it as an issue has declared their own autism?

>HPMOR
Do you then think that the fact that he writes fiction on the side and uses it to popularise scientific concepts means that he's somehow ridiculous, or inept, or that he has some L Ron Hubbard style cross over in his conception of fiction and science?

Also you seem pretty scornful of how much he blogs. I suppose that's because you are worried about a lack of peer review? And yet the issue of hard takeoff is largely evaluable on its conceptual merits, and you still think that pointing out the aspects of EY's online biography that you have pasted here is a sufficient knock-down argument for proving that he is not only wrong, but obviously stupid in a way that should be apparent even from the limited articulation of your posts?

Do you see that, even giving your 'arguments' the maximum amount of credibility, you're still coming across as someone with nothing to say on the topic and no grasp of its key concepts or stakes?

Yeah, no, agreed. He has some cringey self-presentation. But it still surprises me that people are willing to dismiss him so casually.

My only hope is that there is a common factor between those people and the kind of people who WON'T be making serious individual AI breakthroughs.

>But it still surprises me that people are willing to dismiss him so casually.
Very few people are actually capable of judging large complicated ideas on their own merits, especially outside their area of expertise. The vast, vast majority of people can ONLY judge these things by social cues rather than by their content. And I think we can all agree that Yudkowsky has the social cues pointing against him.

That seems like a habitual problem more than an intrinsic intellectual problem, but I don't know. I can't think of too many analogous ideas to FAI that are virtually apolitical but also existentially crucial.

You might be right though. Jeez that sucks.

>Do you then think the notion of AI risk is ridiculous, sufficiently ridiculous that anyone who raises it as an issue has declared their own autism?

Yes. Read an actual textbook on AI and see how the algorithms work. AI is nothing more than heuristic decision making. They're not going to have "thoughts" any time in the next couple of centuries.

As I hope you realize, the whole notion of AI risk is based on what AI *could* someday do according to our current understanding of what the inherent limitations are and are not. It has *nothing whatsoever* to do with the abilities of present-day techniques.

>makes a confident prediction when analyses of AI predictions by field experts show literally no consensus

>Claims AI is nothing more than 'heuristic decision making' - a phrase of nearly equal generality to 'problem optimisation' or 'information processing'

>Claims that heuristic decision making is all it takes without acknowledging that Eurisko, Cyc and Neural Nets have all been high-confidence AI projects utilising those principles that ran into limited/no success.

>Touts heuristic decision making without acknowledgement that decision theory is currently incomplete and thoroughly non-computable apart from its most basic elements.

>Claims to have read an AGI textbook but still refers to it by the misnomer of AI

>Implies that EY's logical error is to anthropomorphically impute AGIs with "thoughts" when this could not be further from the truth

>Considers two centuries a buffer zone for the solution of exotic problems in AI Alignment so spacious that he thinks it justifies his dismissal of EY as 'dumb'

>Uses the phrase "AI is nothing more than"

Wow, you are so little threat to humanity that it's not even funny. There are lead-fume-breathing famine victims in the wastelands of Sudan who stand more of a chance of mocking up an AI than you do. I hope you don't get paid for anything to do with computers, mate. You are helpdesk material at best.

>our current understanding of what the inherent limitations are and are not

We have no idea what thinking even is let alone how to make computers think.

>>Touts heuristic decision making without acknowledgement that decision theory is currently incomplete and thoroughly non-computable apart from its most basic elements.

Oh great, it's another kid who spends all his time reading Wikipedia articles and only knows [math]of[/math] things. Decision theory has nothing to do with making decisions. Read a ToC textbook too.

No, YOU have no idea what thinking even is, let alone how to make computers think. That doesn't mean other people don't. There is a lot of AI theory research: developing theories of intelligence in general, possibilities and limitations, mathematical background, the specific workings of *human* intelligence, mathematical analyses of existing AI techniques and the limitations they imply, and similar topics. This is quite a different field from what you'll find in typical AI textbooks, which focus on the workings of current techniques and what you can do with them. Notions of AI risk are based on the results of this AI theory.

Decision theory has to do with robustly predicting the outcomes of decision-making agents. Hence why merely giving your AGI the ability to weigh alternatives according to a heuristic, as opposed to framing its decision-making process in the architecture of a computable decision theory, is the difference between friendly AGI and getting wiped out of existence.

I would point out the unjustified nature of your calling me a know-nothing kid, considering that you have volunteered next to zero domain content in this argument, but ad hominem seems to be kind of your whole deal.

This

wikipedia page "machine learning"

Disregard the brain fart, I was thinking of Decidability.

Never heard of him, but from what I read he has no formal education and hasn't done or built anything, just written books and speculated like a typical philosopher.

I think the real heroes are the countless researchers who actually advance the field. I have more respect for people who get their hands dirty and program and implement and figure out new stuff, like the people at DeepMind, Google Brain, and IBM. That's how the field will advance.

You're a faggot. He publishes papers all the time with MIRI and only popularises the dangers of AGI in order to recruit programmers to worthwhile projects. Programmers that go after the grail of AGI without properly considering the hard take-off and Singularity risks associated with recursive improvement are best case un-imaginative, worst case fucking stupid. Eliezer Yudkowsky has admirers at Deepmind and Google Brain by the way.

>never heard about him
Should have stopped there.

Can we please dispense with the insults and stick with just discussing factual matters?

>You're a faggot

stopped right there

you have absolutely no clue what AI risk is then, do you believe that the most important risk is AI becoming self aware???? how retarded can one be???

Depends whether you want to master it or just understand it enough to build neural nets in TensorFlow.

> Elements of Statistical Learning (needs calculus, linear algebra, and stats)
> Introduction to Statistical Learning (no prerequisites)
> Hands-On Machine Learning with Scikit-Learn and TensorFlow (predominantly practical)
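For a taste of what those books cover: the core move of statistical learning, fitting parameters by minimizing a loss, can be sketched in plain Python with no libraries at all. The toy data, learning rate, and iteration count below are made-up illustrative choices, not from any of the books.

```python
# Fit y = w*x + b by gradient descent on mean squared error.
# Toy data generated from y = 2x + 1, so we expect w near 2 and b near 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    # Partial derivatives of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Step downhill on the loss surface
    w -= lr * grad_w
    b -= lr * grad_b
```

Everything in ESL/ISL — regularization, logistic regression, even neural nets — is elaboration on this loop: pick a model, pick a loss, follow the gradient.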