Math to make CS serious

I'm a CS student. At the beginning of the course there wasn't much math; some teachers said that in practice they don't use math at all. Others said that in some specific areas, like AI, more math was used. Great, I wanted to study AI anyway. So I finally began to study AI, and it was one of the biggest disappointments of my life. Some fields, like metaheuristics, have no cohesive theory at all. Worse: I hardly see anyone putting effort into finding one. No elegant universal analysis or mechanism. Very little math used. Most publications are in the style of "I tried to change the parameters of this algorithm to this to see if it works better in some random instances of this problem".
But I believe that CS is important. In particular, I believe that AI is important.
You guys love to mock CS. Fair enough. But can you suggest what math would be necessary to study CS and AI "seriously"?
I'm asking this probably because of how much I'm disappointed with my field, but still, the question is serious.
I would especially welcome answers from mathematicians who turned to the field of AI.
Most of the more relevant theorems and theories and algorithms that I saw in CS and AI were actually made by mathematicians and physicists, so now I'm not surprised.

I'm assuming you're going to take the same sequence of calculus, multivariable calculus, differential equations, and linear algebra that every physical sciences and engineering student takes at decent schools.

Discrete math is likely the best place for you to start after this. Computational geometry would probably be useful too. Set theory, I believe, is also useful but I don't know how any of this relates to AI.

Keep in mind I did my degree in chemistry and don't know much about CS besides what my roommate told me, so take this with a grain of salt.

I thank you for your reply, but that's the standard curriculum, and it doesn't seem to do much to improve the depth of understanding of AI. Well, not alone.

To use an analogy: it's like we have a lot of methods to solve particular equations, but no theory of abstract algebra.

Since this is the only thread about CS on Veeky Forums:

Can someone explain to me how AI, a fully logical mathematical system, is even expected to happen when the logic behind the computer itself is flawed?

For an example, the Two's complement mechanic of the processors and the arbitrary way they're supposed to represent positive and negative values. They are not bound by mathematical logic but are rather abstract representations of bits (start with a "0" for positive and with a "1" for negative, when in reality no bit value can ever start with a zero since it is redundant, and such bit combinations do not exist in the real world).

How do you expect your hardware to work logically when at the literal lowest level you've used an arbitrary abstraction that is only interpretable by humans? It's basically like saying "blue bits will now be negative, and red positive!". Or to put it another way, "it just werks so why bother change it lmao"

>arbitrary abstraction

Are you trolling? Do you know anything at all about computer logic?

Please proceed to explain to me how "0100" and "100" are in any way logically different

It's fine that you are a freshman, but you need to hide it better. You are just embarrassing yourself with this post. If you want to delete it, you can click the button on the left and then delete it at the bottom of the screen.

Take a computer architecture or basic circuits class if you are actually interested in understanding this topic that you are very confused about.

/the time I was actually nice to the dumbass.

quality argument

And no, you weren't nice. You went straight to ad hominem and then avoided answering anything regarding the question I kindly posted

Explain this

Are you asking what is going on inside of the registers that makes me able to send '0100' and have it be distinct from sending '100' to a register? I guess I'm still confused about whether you want the logic or the physical implementation of it explained.

>Most publications are in the style of "I tried to change the parameters of this algorithm to this to see if it works better in some random instances of this problem".

Most AI researchers are just publishing the same paper over and over again with tiny tweaks. What's left is throwing things at the wall and seeing if anything "sticks" (is at all above the baseline).

>I believe that AI is important

It really isn't. AI = rule based problem solving and gambling on statistics. You're never going to program an AI waifu or AI god.

...

>Are you asking what is going on inside of the registers that makes me able to send '0100' and have it be distinct from sending '100' to a register?

No, did you even read the "Two's complement" part?

How do you build a system that is supposed to obey logic and use zero human-exclusive abstraction when it magically considers a symbol that is supposed to represent "lack of information" as an information carrier?

Because there is never any 'lack of information' going on. It is not the difference between a switch being on and off; it is the difference between bright (1) and dim but still on (0), usually with opcodes book-ending each value. Does that make sense?

I think you are just not understanding how encoding of bits actually works.

>They are not bound by mathematical logic but are rather abstract representations of bits

No, they are bound by baby's first number theory. Computers work in [math]\mathbb{Z}/2^n\mathbb{Z}[/math], so every number is in an equivalence class containing both positive and negative integers. We then declare the "standard labels" for these classes to be [math]-2^{n-1}[/math] to [math]2^{n-1}-1[/math].

One is in Z/16Z and the other is in Z/8Z
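
To make that concrete, here's a quick Python sketch (my own toy snippet, nothing authoritative): the same bit string decodes to different values because the width n is part of the encoding, and the "standard label" is just a choice of representative mod 2^n.

def twos_complement(bits: str) -> int:
    """Interpret a bit string as an n-bit two's complement integer."""
    n = len(bits)
    unsigned = int(bits, 2)  # canonical representative in Z/2^n Z, i.e. 0 .. 2^n - 1
    # pick the representative in [-2^(n-1), 2^(n-1) - 1] instead
    return unsigned - (1 << n) if unsigned >= (1 << (n - 1)) else unsigned

print(twos_complement("0100"))  # 4  (width 4, labels -8 .. 7)
print(twos_complement("100"))   # -4 (width 3, labels -4 .. 3)

So the "leading zero" isn't redundant once a register width is fixed; it selects which label you end up with.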

I understand that, but what I'm trying to say is that the logic behind doing that doesn't apply in the real world - it is a human-made abstraction. You cannot run an AI on abstractions.

I will repeat, again: 0100 is not supposed to be different from 100, as the zero in front of it doesn't actually exist, and forcing it to carry information is a form of abstraction. It's a good and very smart way to be able to brand values as negative, but it doesn't happen in the real world - an object surrounded by zero information is simply an object and nothing more.

Maybe I'm just interpreting something wrong, so sorry if it is so. I'm not even studying anything related to CS, just found that interesting enough to ask here

so are you saying you would be happier if 1 was 50, 0 was -50, and a special '-' operator on the register was designated as 25?

I get what you are saying, I think? But it just doesn't strike me as something so odd as to be worth pointing out? It seems to function just fine how it is, but yes, it could be done differently?

>It really isn't. AI = rule based problem solving and gambling on statistics. You're never going to program an AI waifu or AI god.


Clearly not if the field doesn't get serious, that's why I'm so mad.

What is 2? Is it the 2 in N or the 2 in Z or the 2 in Q or the 2 in R or the 2 in C or the 2 in H or the 2 in O or the 2 in {0,1,2}?

>How do you expect your hardware to work logically when at the literal lowest level you've used an arbitrary abstraction that is only interpretable by humans? It's basically like saying "blue bits will now be negative, and red positive!". Or to put it another way, "it just werks so why bother change it lmao"


The higher levels are made to take this abstraction into account, so even higher levels don't have to worry about it.
Like when you are planning something, you don't worry about how your synapses are connecting.

You have it backwards.

In a computer you first begin with physical circuitry which you use to create 1 bit registers. Then you chain these registers together in order to create multi-bit registers. Later you include a bunch of these registers together with a bunch of other circuitry that manages basic arithmetic as well as some basic instruction set.

At some point you have to decide on a size of register to use for your arithmetic and you have to decide exactly how you are going to encode numbers into those registers. It is at this point that one decides to take the real world of voltage and circuitry and superimposes an abstract world where "abstract integers" are encoded in some way. In other words, circuitry is the real world, numbers are the abstraction.

Unfortunately no matter which abstraction you use you will always have some limitations which will present themselves if you're working at a very low level.
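
A toy sketch of that last point (mine, purely illustrative, the width is chosen arbitrarily): an n-bit register only ever holds a residue mod 2^n, so what programmers call "overflow" is the abstraction leaking, not the circuit misbehaving.

def add_nbit(a: int, b: int, n: int = 4) -> int:
    # an n-bit adder can only return the sum modulo 2^n
    return (a + b) % (1 << n)

print(add_nbit(0b0111, 0b0001))  # 8 -- fits, no surprise
print(add_nbit(0b1111, 0b0001))  # 0 -- wraps around: the low-level limitation showing through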

Encodings are everywhere (eg. integers in set theory) but the only places people ever bitch about them are in the dumbest of places (eg. pop-sci people arguing against AI).

The solution is that there are many levels of abstraction between high-level software and the physical circuitry that allow those problems to melt away. Here is another example where such abstractions are used to solve the problem of symbolic computation on a computer.

en.wikipedia.org/wiki/Computer_algebra_system

Moreover, some of the limitations machines face (eg. reasoning about explicit floating point numbers) are similar to problems ordinary humans face when reasoning about rationals. In particular, a float is stored as a certain number of significant bits which are then left- or right-shifted by a certain amount. Similarly, when a human thinks of a rational they either think of an abstract representation (symbolic computation) or they conceptualize a certain number of digits shifted left or right of a decimal point.
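
If you want to see the "shifted significand" picture directly, here's a two-line Python check (my example, not anything official): frexp splits a float into exactly those two pieces.

import math

x = 0.15625                         # exactly representable in binary
mantissa, exponent = math.frexp(x)  # x == mantissa * 2**exponent, with 0.5 <= |mantissa| < 1
print(mantissa, exponent)           # 0.625 -3
print(mantissa * 2 ** exponent)     # 0.15625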

This reminds me of the posts where people say that math is imaginary and made up by humans

I'm not really into AI, but I've seen some people say (including an acquaintance of mine) that we still don't have enough information in the field of AI to make some sort of all-encompassing theory. Take, for example, ANNs. We still don't know why the fuck different activation functions perform better on some data and worse on others, or why some regularization techniques work. At best there's an "intuitive" explanation that sort of makes sense, but there's no coherency between all of them.
And so people think that only once we have a certain "critical mass" of knowledge will we be able to deduce the general theory of ANNs.

>Most of the more relevant theorems and theories and algorithms that I saw in CS and AI were actually made by mathematicians and physicists
That's because it is just math. You can't deny that Dijkstra, Von Neumann, Knuth, Turing, and Codd were/are computer scientists. But they are at the same time mathematicians, because there is no difference. For them the computer was simply a real-life model of the theory, not really important in itself (except for, perhaps, Knuth). If Shannon and Kolmogorov did their work nowadays they'd be called computer scientists, at least for the information theory/computation theory parts of their work.

because

ABCD

is not the same as

FGHI

it's a register system, not a real number, with zeros as placeholders. every "slot" has a meaning.

In general I think that

>Statistics
>Artificial Neural Networks
>Evolutionary Algorithms
>Game Theory
>Computer Graphics
>Computational Geometry
>Calculus
>ODE/PDE/SDE
>Dynamical Systems & Chaos Theory
>Linear Algebra
>Discrete Math
>Mechanics

are some topics you would need knowledge in to be a good AI researcher. Some of these can be skipped over if you don't care for robotics.

AI is really vague, so it's hard to pin down exactly what you need - it depends on the type of AI you're interested in. Depending on the type you might need more math, such as Fourier Analysis, Functional Analysis, Topology or Differential Geometry, Combinatorics, Graph Theory etc.

I want to stress Evolutionary Algorithms, for without them, AIs will only mimic humans at best.

I should mention I'm no mathematician; I hold a B.Sc in SE and am working on an M.Sc in Complex Systems, so I'm completely biased.

>8469067

How do you suggest that such things be represented? How can you ever truly represent a negative? A bit is just a voltage. So a negative bit would need to be a negative voltage. But there are no true negative voltages, the only difference is the thing you are measuring it relative to. Just like a counterweight on a crane is 'negative mass' relative to what it picks up, or how air resistance is a negative force relative to the direction of a falling object.

A drawing of -1 is never a true negative number because it doesn't use negative ink, only using some arbitrary token '-' which is interpreted by something on a higher level.

When we think of negative numbers using our brain, is information going backwards through synapses? Do the neurons consist of negative atoms?

If our brains cannot truly represent negatives, then how can it be said that we are any more likely to grasp such concepts than a machine?

this

it isn't illogical to represent the absence of something with something, or how else would you describe the absence?

Lel integer underflow is why Gandhi nukes everyone

Check out Turing machines; they will make clear a lot of the reasons why both things you describe are equivalent, as well as the inherent limitations of computing (this last thing is an open debate). For more on the latter, check out digital physics.

You are confusing math with the real world.
>the 0 doesn't actually exist
Yes it does, it carries the information "out of the n bits you can use to represent a number, the first m are zero".

Forgot to add: that's assuming it's being used to represent an unsigned integer.

So there are four schools in AI: thinking rationally, thinking like humans, acting like humans, and acting rationally. Most serious people in the field focus on the last one. Also worth noting the field is less than 100 years old, and it currently puts a heavy emphasis on empirical results. This is because no one knows how AI should be done; people are looking at developing methods that work, then going back and explaining them. Unfortunately, it's proven hard to develop a mathematical basis explaining WHY (not how) some methods work better than others. As the field advances and there are more results and experiments to go off of, people will begin to create a more mathematical basis for the field.

You can compare this to electricity: Faraday performed empirical experiments to show things about electricity, and Maxwell explained his experiments through math.

I want to rigorously study deep learning. What areas of math should I know very well? Are there any major theorems from the field I should know?

Linear Algebra
Statistics

Computation Theory
Neuroscience

Posted too soon,

Convex/Combinatoric/Nonlinear optimization

depending on the instruction set architecture a 0 there could completely change the opcode that the processor executes.

i suggest you read a book on operating systems or even kernel development, you'll come to realize that abstractions are fundamental to computing. unless you're trolling or something.
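
To make the opcode point concrete, here's a completely made-up toy ISA in Python (hypothetical, not any real architecture): the top four bits are the opcode field, so a bit you might dismiss as a "leading zero" is load-bearing.

# hypothetical 8-bit instruction format: top 4 bits = opcode, bottom 4 bits = operand
OPCODES = {0b0100: "ADD", 0b1100: "SUB"}

def decode(instruction: int) -> str:
    opcode = (instruction >> 4) & 0b1111
    operand = instruction & 0b1111
    return f"{OPCODES.get(opcode, 'UNKNOWN')} {operand}"

print(decode(0b0100_0011))  # ADD 3
print(decode(0b1100_0011))  # SUB 3 -- flip one high bit and you execute a different instruction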

Great thanks. Any recommended learning resources you'd start with for the beginner?

The most important math I learned in CS was literally 8th grade shit. Amdahl's law, and extending it to software systems and algorithms, is basically all you will ever need as a software developer. Obviously in-depth knowledge of pitfalls in whatever toolsets you work with, et cetera... But really the subject is only difficult if you lack abstract reasoning skills, and if you lack abstract reasoning skills every science is hard.
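
Since Amdahl's law came up: it's literally one formula, speedup = 1 / ((1 - p) + p/s) when a fraction p of the work is sped up by a factor s. A quick sketch with made-up numbers:

def amdahl_speedup(p: float, s: float) -> float:
    # overall speedup when a fraction p of the work is sped up by factor s
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl_speedup(p=0.9, s=10))   # ~5.26x, not 10x -- the serial 10% dominates
print(amdahl_speedup(p=0.5, s=100))  # ~1.98x -- barely helps when half the work stays serial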

Holy shit, just wait till you realize math is made up outside of computers too

What's your background?

Try to develop a working knowledge of neural networks. Do the mnist challenge. Build one from scratch. Then implement more complex versions. Read loads of papers, understand what people are doing and how parameter tuning affects the network.

Stanford's UFLDL tutorial is a good place to start there; work through that.

Once you have some understanding of them, try to apply some of your mathematical knowledge to the field, to prove small things then work your way up.
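
For anyone wondering what "build one from scratch" means at the smallest possible scale, here's a minimal sketch (my own toy code, not the UFLDL tutorial's, and the layer size/learning rate are arbitrary): one hidden layer, sigmoid activations, plain gradient descent on XOR.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)            # output layer
    d_out = (out - y) * out * (1 - out)   # backprop of squared error through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)    # ...and through the hidden sigmoid
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]

Scaling this same loop up (more layers, better activations and cost, minibatches, regularization) is roughly the path the MNIST tutorials walk you down; the parameter tuning mentioned above is exactly the part with no clean theory behind it.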

Aren't there more interesting optimization algorithms than evolutionary algorithms?

>Complex systems

Care to mention where at? I'm interested in maybe doing my masters on the same subject.

>only math he learned was 8th grade shit.

Literally I fucking hate it when people say this. I'm doing a CS undergrad, but the plan I have for my undergrad has me taking 16 CS topic classes, and 13 undergraduate mathematics classes all said and done.

The problem with the field and what I hate is that you can customize your specialization to literally be the dumbest fucking shit imaginable, where you fill all your credits with web dev/gaymin/ muh phone apps/ etc etc, and cut out all the actually difficult classes.

Or you can have it set like how I'm doing where you just maximize the amount of math involved and make the degree actually worth something!

I'm embarrassed to be associated with the kinds of trash who get CS degrees with no effort by just taking every easy class. This field really doesn't have to be like this.

You misunderstood me, user. I had to take a fairly rigorous set of classes, as our degree was prescribed by old faculty; there were no gaymin app classes.

I'm just saying in practice, I rarely use more than algebra.

Me too bump

Thanks for the advice. Sounds like I'm on the right track. I built a neural net from scratch and achieved a 96-97% accuracy on the MNIST challenge. Recently came across a TF tutorial that uses a different activation and cost function that'll increase the accuracy to 99%. I'll do those improvements next.

Is this the tutorial you recommended: ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial?

I have a pure math degree, then did industry work for a while (not related to math/CS). I plan to take a few CS courses and then apply to PhD programs in this area

Are you literally god damn retarded?

OP, I'm not sure of the quality of the school you're attending, but there are areas of computer science which are mathematically rigorous.

A list of mathematically rigorous areas:
>computational geometry
>computational complexity theory
>cryptography

In your case,
>computational learning theory

Take your pick.

>Gaymin

Gaming is really not that easy. Building game engines isn't that simple, whereas muh drag-and-drop fields like web dev/phone apps are trivial. Also, all those math classes are pretty useless in the industry if you plan on going in straight after. I plan on going to grad school, which is why I took so much math.

Yeah because random sci retarded undergrads will make you a better curriculum than the professors at your uni. Stop falling for memes, CS curriculum is fine for CS.

>Gaming is really not that easy

Don't mind it, /g/ just likes to attack /v/ to feel better about themselves.

The CS curriculum has been watered down to hell mate. Just look at what used to be required in 1968

Drop out of CS and just read the topics in the Veeky Forums wikia. It will be far more rigorous than anything you see in your classes.

Looks pretty similar to mine

CS isn't deep, or difficult. But that should be okay. There is no point in trying to artificially make it more complicated than it is just so you can take it more seriously.

What a retarded diagram. But that's OK because it has all the fancy colors. Confirmed for stuck-in-what-is-publishable-thinking.

fucking LMAO who made this shitty image

The ACM, one of the most respected international societies for computing. They're the ones who administer the Turing award.

Well you know what respected means?

>what people who don't know anything but are anxious 'bout what others think care about.

one has length 4 and the other has length three.

That completely depends on what you define the data to be. Because data is what you define it to be.

lel, that would get me a dual degree lele

This is pretty much exactly the curriculum at mine

Then you don't know what the word means.

>But I gotta rebel against the man.

How about you graduate from undergrad first, kiddo.

Already have both a masters and a research degree thank you. I know what most articles look like nowadays. 99% fluffy bling bling and 1% substance.

>I'm a CS student.
stopped reading there brainlet.

I have a BS in Math, BS in CS & MS in CS. Worked in CS for 20 years. In all those years I only had to utilize my college math knowledge once (simple Gaussian Interpolation copied right out of the book - before wikipedia u know). When you spend most of your time in application or system programming, you just don't run across any real math too often. Yeah there is some graph theory every now and then (which in my day was taught in CS) but honestly it mostly comes up when I interview new hires and usually I have forgotten all the axioms and tricks and have to look them up anew. Could be different for you if you end up working with flight simulators or rendering engines ...

But don't sweat it. I took math in college because I liked it. It allows me to have some interesting conversations with non CS peeps around the office. And every now and then I get to dust off that knowledge. Right now I need to dust off stats and probabilities for web site A/B hypothesis testing ...

Agreed. My bro calls me a toaster engineer. I don't mind. Toasters need to be programmed too. And what many sci's may not get is that you can be very creative in programming. You can build whole worlds in code. You also get to have arguments with your colleagues on how to model concepts in code and express them in an aesthetically pleasing way (i.e. there is some art involved).

Pretty much my exact curriculum at UA.

>For an example, the Two's complement mechanic of the processors and the arbitrary way they're supposed to represent positive and negative values. They are not bound by mathematical logic...
And this is why every cs student should be forced to take digital design

Chalmers in Sweden

Probably, but not any that I know of. It really depends on the problem at hand. You wouldn't use an evolutionary algorithm to find the roots of [math]x^2-1[/math], but they are handy for more complex problems such as bipedal movement or finding good strategies in infinite games.
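
Just to show the mechanics (deliberately using the trivial [math]x^2-1[/math] case you wouldn't bother with in practice), here's a minimal (1+1) evolution strategy sketch in Python, entirely my own toy code with arbitrary parameters:

import random

def evolve(fitness, x0, sigma=0.3, steps=2000):
    # (1+1) evolution strategy: mutate, keep the mutant only if it is at least as fit
    x, fx = x0, fitness(x0)
    for _ in range(steps):
        candidate = x + random.gauss(0, sigma)
        fc = fitness(candidate)
        if fc >= fx:
            x, fx = candidate, fc
    return x

# toy problem: maximize -(x^2 - 1)^2, i.e. land on a root of x^2 - 1
print(round(evolve(lambda x: -(x * x - 1) ** 2, x0=5.0), 3))  # close to 1.0

For the problems mentioned above (bipedal movement, game strategies) you'd use a population, crossover, and a far more expensive fitness evaluation, but the mutate-and-keep-if-better loop is the core idea.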

This looks like about half of my undergrad computer science program, which I finished in 2009. (No I'm not going to tell you where.)

I'm the guy you are replying to. I want to work on crypto research projects, which is why I am stacking in so much math; I'm fully aware that it isn't very useful for everyday programming. I posted that because I misunderstood his post.

Oh my god

New pasta

lolled at the CS major bragging about knowing proof by induction. in line with reality at my university

I don't understand how CS programs work at other places at all. I'm taking freshman CS classes right now and we are getting to induction in my first semester here... it is a freshman topic with 0 prerequisites... and then we have like 10 math credit requirements for the degree? How is this considered all the math that some people learn? Like what is it that CS majors at other schools are learning, if not mathematics?

Are there other CS degrees where these things are taught as something that is supposed to be difficult? How could you consider algorithm proofs difficult?

*meant 10 math classes, not 10 credits total.

Post your curriculum or you're full of shit.

the only math requirements at my school were calc1, calc2, linear algebra, a course on discrete mathematics (with introduction to proofs), a course on probability, and a course on algorithms.

Why is it that Veeky Forums is completely unwilling to believe that some people have nontrivial CS programs without unreasonable levels of documentation?

The other thing is that most majors at decent universities are fairly customizable and flexible. Most math credits at my uni fit into my CS elective credits.

On the other hand if I want to do computational linguistics I can make linguistics credits fit in under my CS degree! I can give more examples if I wanted, but you guys get the point. If you go to a shitty uni that doesn't let you customize your degree into whatever you want to know, your uni is failing you.

Just post your school's CS page.

I'm not that user, and the uni I went to is none of your business.

Not that it would help, of course; the last two times I saw someone post realistic curricula, people screamed that it was fake.

I've looked at dozens of CS curricula from Ivies down to no name state schools and they overwhelmingly have very little math in them.

The only schools with decent math requirements are the rare overseas universities.

>The only schools with decent math requirements are the rare overseas universities.
They are not that rare, user. We have universities in Europe too. About as many as you guys have in the US, in fact. You are confusing your own tunnel vision with a fact about the actual world.

>Math Requirements in CS
>While nearly all undergraduate programs in CS include math courses in their curricula, the full set of such requirements varies broadly by institution due to a number of factors. For example, whether or not a CS program is housed in a School of Engineering can directly influence the requirements for courses on calculus and/or differential equations, even if such courses include far more material in these areas than is generally needed for most CS majors. As a result, CS2013 only specifies mathematical requirements that we believe are directly relevant for the large majority of all CS undergraduates (for example, elements of set theory, logic, and discrete probability, among others). These mathematics requirements are specified in the Body of Knowledge primarily in the Discrete Structures (DS) Knowledge Area.
>We recognize that general facility with mathematics is an important requirement for all CS students. Still, CS2013 distinguishes between the foundational math that are likely to impact many parts of computer science—and are included in the CS2013 Body of Knowledge—from those that, while still important, may be most directly relevant to specific areas within computing. For example, an understanding of linear algebra plays a critical role in some areas of computing such as graphics and the analysis of graph algorithms. However, linear algebra would not necessarily be a requirement for all areas of computing (indeed, many high quality CS programs do not have an explicit linear algebra requirement). Similarly, while we do note a growing trend in the use of probability and statistics in computing (reflected by the increased number of core hours on these topics in the Body of Knowledge) and believe that this trend is likely to continue in the future, we still believe it is not necessary for all CS programs to require a full course in probability theory for all majors.
>acm.org/education/curricula-recommendations

In my experience, at my university (midwest state school) they don't require anything higher than Linear Algebra, but the sample curriculum and their recommendations on the CS page strongly suggest that you do several linear modeling/time series classes and at minimum take diff equations and introductory analysis.

Not sure why you are in such a huff over this? Every school has different requirements.

And most students are lazy fucks that do the bare minimum.

Most of the students in my CS program took extra math classes when they did not have to.