Math to make CS serious

I'm a CS student. At the beginning of the course there wasn't much math; some teachers said that in practice they don't use math at all. Some said that more math was used in specific areas like AI. Great, I wanted to study AI anyway. So I finally began to study AI, and it was one of the biggest disappointments of my life. Some fields like metaheuristics have no cohesive theory at all. Worse: I see almost no one putting in effort to find one. No elegant universal analysis or mechanism. Very little math used. Most publications are in the style of "I tried changing the parameters of this algorithm to see if it works better on some random instances of this problem".
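To make the complaint concrete, here's a toy sketch of the kind of experiment I mean (the problem, the parameter values, and the sample sizes are all made up for illustration, not taken from any actual paper): pick a heuristic, twiddle two knobs, average over a handful of random instances, report whichever setting looks best.

import math
import random

def random_instance(n, rng):
    # Random QUBO-like instance: minimize x^T Q x over x in {0,1}^n.
    return [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def energy(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, steps, t0, cooling, rng):
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    e, t = energy(Q, x), t0
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] ^= 1                                  # flip one bit
        e_new = energy(Q, x)
        if e_new > e and rng.random() >= math.exp(-(e_new - e) / t):
            x[i] ^= 1                              # reject the move, flip back
        else:
            e = e_new                              # accept the move
        t *= cooling                               # geometric cooling schedule
    return e

rng = random.Random(0)
for t0, cooling in [(1.0, 0.99), (5.0, 0.995), (0.5, 0.999)]:   # the "tweaks"
    scores = [simulated_annealing(random_instance(20, rng), 2000, t0, cooling, rng)
              for _ in range(10)]                  # a handful of random instances
    print(f"t0={t0} cooling={cooling} mean energy={sum(scores) / len(scores):.3f}")

Nothing in that loop tells you why one cooling schedule beats another, which is exactly the missing theory.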
But I believe that CS is important. In particular, I believe that AI is important.
You guys love mocking CS. Fair enough. But can you suggest what math would be necessary to study CS and AI "seriously"?
I'm probably asking this because of how disappointed I am with my field, but still, the question is serious.
I would especially welcome answers from mathematicians who turned to the field of AI.
Most of the more relevant theorems, theories, and algorithms I've seen in CS and AI were actually created by mathematicians and physicists; by now I'm not surprised.

I'm assuming you're going to take the same sequence of calculus, multivariable calculus, differential equations, and linear algebra that every physical sciences and engineering student takes at decent schools.

Discrete math is likely the best place for you to start after this. Computational geometry would probably be useful too. Set theory, I believe, is also useful but I don't know how any of this relates to AI.

Keep in mind I did my degree in chemistry and don't know much about CS besides what my roommate told me, so take this with a grain of salt.

I thank you for your reply, but that's the standard curriculum, and it doesn't seem to do much to improve the depth of understanding of AI. Well, not on its own.

By analogy, it's like having a lot of methods to solve particular equations, but no theory of abstract algebra.

Since this is the only thread about CS on Veeky Forums:

Can someone explain to me how AI, a fully logical mathematical system, is even expected to happen when the logic behind the computer itself is flawed?

For example, take the two's complement mechanism in processors and the arbitrary way it is supposed to represent positive and negative values. It is not bound by mathematical logic but is rather an abstract interpretation of bits (start with a "0" for positive and with a "1" for negative, when in reality no bit value can ever start with a zero, since it is redundant, and such bit combinations do not exist in the real world).

How do you expect your hardware to work logically when at the literal lowest level you've used an arbitrary abstraction that is only interpretable by humans? It's basically like saying "blue bits will now be negative, and red positive!". Or put another way, "it just werks so why bother change it lmao"
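To be concrete about the convention I'm talking about, here's a minimal sketch (Python, an 8-bit word assumed): the very same bit pattern is a different number depending on whether you read it as unsigned or as two's complement, which is exactly the kind of human-chosen interpretation I mean.

bits = 0b10000001                                           # an 8-bit pattern with a leading 1

unsigned_value = bits                                       # read as unsigned: 129
signed_value = bits - 256 if bits & 0b10000000 else bits    # read as two's complement: -127

print(format(bits, "08b"), unsigned_value, signed_value)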

>arbitrary abstraction

Are you trolling? Do you know anything at all about computer logic?

Please proceed to explain to me how "0100" and "100" are in any way logically different

It's fine that you are a freshman, but you need to hide it better. You are just embarrassing yourself with this post. If you want to delete it you can click the button on the left, then delete it at the bottom of the screen.

Take a computer architecture or basic circuits class if you are actually interested in understanding this topic that you are very confused about.

/the time I was actually nice to the dumbass.

quality argument

And no, you weren't nice. You went straight for the ad hominem and avoided answering anything regarding the question I kindly posted.

Explain this

Are you asking what is going on inside the registers that makes sending '0100' distinct from sending '100' to a register? I guess I'm still confused about whether you want the logic or the physical implementation explained to you.
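If it's the logic you want, here's a minimal sketch (Python; the 4-bit width is just an assumption for illustration, not how any particular register file is built): once the register width is fixed, '100' is zero-extended and stored as exactly the same pattern as '0100', so the leading zero isn't redundant information, it's a consequence of the width.

WIDTH = 4                                          # assumed register width

def store(bit_string, width=WIDTH):
    # Truncate or zero-extend the value to the fixed register width.
    value = int(bit_string, 2) & ((1 << width) - 1)
    return format(value, f"0{width}b")

print(store("100"), store("0100"))                 # both stored as 0100
# Width matters for the signed reading too: '100' as a 3-bit two's complement
# value is -4, while '0100' as a 4-bit value is +4.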

>Most publications are in the style of "I tried to change the parameters of this algorithm to this to see if it works better in some random instances of this problem".

Most AI researchers are just publishing the same paper over and over again with tiny tweaks. What's left is throwing stuff at the wall and seeing if it "sticks" (i.e., is it anything at all above the baseline).

>I believe that AI is important

It really isn't. AI = rule-based problem solving and gambling on statistics. You're never going to program an AI waifu or AI god.