Anyone like the math side of computer science?

I really enjoy computer science. Anyone else? I like the math side of it.

me too thanks

I just finished watching Category Theory for Programmers on YouTube. Maybe now I'll understand 20% of what Kmett is doing with his life.

I like constructive mathematics and type theory

I'm really into logic and took a class relating it to isomorphism algorithms for graphs this past semester. I don't know how category theory ties into any of this but the faculty at my university seem unusually interested in it so that's cool and I may get to learn more eventually.

Bump

> Anyone like the math side of computer science?

It's the only side that I like.

It's the only thing that counts as actual CS and the entire reason I do it.

You guys think we can intelligently discuss P vs NP for once? What do you think the relationship is? Did you hear about that Chicago professor who did an NP-complete problem in quasipolynomial time?

Yeah, it was in the 20's.

>Did you hear about that Chicago professor who did an NP-complete problem in quasipolynomial time

No, because by a padding argument that would have proved EXP = NEXP, and I see nothing on the subject on Google.

You're somewhat mistaken. Dr. Babai gave a polynomial-time algorithm for the graph isomorphism problem, which is not known to be NP-complete. There is a related problem, however, called the subgraph isomorphism problem, which is NP-complete.

For the record, if somebody gives a polynomial-time algorithm for an NP-complete problem, then P = NP. For some reason, your post reads as if you're not aware of this.

quasipolynomial-time algorithm*** for graph isomorphism
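
Not Babai's algorithm obviously, but for intuition, here's what the naive approach to graph isomorphism looks like; a toy Python sketch I'm making up, trying all n! vertex bijections, which is exactly the factorial blowup that makes a quasipolynomial bound remarkable:

from itertools import permutations

def edge_set(graph):
    # undirected edges as frozensets; graph is {vertex: set(neighbours)}
    return {frozenset((u, v)) for u in graph for v in graph[u]}

def are_isomorphic(g, h):
    # brute force: try every relabelling of h's vertices, O(n!) worst case
    if len(g) != len(h) or len(edge_set(g)) != len(edge_set(h)):
        return False
    vs_g, vs_h = list(g), list(h)
    h_edges = edge_set(h)
    for perm in permutations(vs_h):
        mapping = dict(zip(vs_g, perm))
        mapped = {frozenset(mapping[v] for v in e) for e in edge_set(g)}
        if mapped == h_edges:
            return True
    return False

# a triangle is isomorphic to any relabelled triangle
tri1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
tri2 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
print(are_isomorphic(tri1, tri2))  # True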

Have you heard the good news about our Lord and Saviour, NJ Wildberger?

>You guys think we can intelligently discuss P vs NP for once?
Probably not. But we can try.

>What do you think the relationship is?
It's well-understood that P != NP. The tricky open question is how to prove it.

What do you guys know about techniques for placing lower bounds on the time complexity of problems?

Like, specifically, what's the lower bound for finding the maximal-sum subarray of a two-dimensional array of integers? It can be done in n^3, and it's obvious it can't be done faster than n^2, but beyond that I can't even begin to approach the problem.
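
For reference, a minimal Python sketch of the n^3 algorithm in question (the usual trick: fix a pair of rows, collapse the band between them into column sums, then run Kadane's 1-D scan; for an n x n matrix that's n^2 row pairs times an O(n) scan):

def kadane(xs):
    # classic 1-D maximum-subarray sum, O(n)
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_sum_submatrix(m):
    # maximum sum over all axis-aligned submatrices, O(rows^2 * cols)
    rows, cols = len(m), len(m[0])
    best = m[0][0]
    for top in range(rows):
        col_sums = [0] * cols          # column sums of the band [top, bottom]
        for bottom in range(top, rows):
            for c in range(cols):
                col_sums[c] += m[bottom][c]
            best = max(best, kadane(col_sums))
    return best

grid = [[ 1, -2,  3],
        [-4,  5, -6],
        [ 7, -8,  9]]
print(max_sum_submatrix(grid))  # 9 (the single cell grid[2][2])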

as a follow-up, what are general rules of thumb for determining the big O of an algorithm?

one heuristic i know is that if you halve something it's log n... what are other rules?

I know how P=NP works. Thank you for correcting my misunderstanding about the problem solved.

Have you tried not dropping out in elementary school?

Really? How so?

info.ecosia.org/what
SAVE THE WORLD!

I like statistics and building machine learning models to automate stock trading.
>tfw you hit >50% prediction accuracy and no matter what happens you gain money

why did you follow up my interesting question with a bad one

If you don't know how to do big-O analysis of algorithms after seeing it done a couple of times, I don't know what to tell you. It's just counting.

it can be done faster than n^2 if the numbers follow some kind of pattern or can be reduced first for some other reason

It actually isn't "well-understood" that P != NP. There are prominent computer scientists who believe that P=NP, including Donald Knuth.

Until something has been proved beyond doubt, it is not "well-understood." It may be believed, but not "well-understood."

>tfw every loser's strategy on the stock market makes money until it doesn't

I want to solve the general case. In the general case you have to look at every number at least once, so n^2 is a dead simple lower bound.

I don't know anything about that, really. But perhaps you can express the number of required operations in terms of a well-known algorithm. For example, matrix multiplication has a lower bound of n^2 log n. Just a guess

I haven't had a class in it yet. Fuck you two

Not them but that's just how things go here. Anything people already know how to do is elementary and anything they don't is just a meme.

>tfw you do factor regression on news articles concerning the companies you're invested in so the model self-corrects by going beyond simply examining the markets
I'm gonna win forever, n00b

Yeah well fuck this place

No. People just think P != NP because no polynomial-time algorithm has been produced for an NP-hard problem.

>TCS thread
>turns into complexity theory general
I want theory A trash to leave

Neither did anyone because big-O notation is a footnote at best since nobody is retarded enough to require more than that.

Ya, as an engineer/physicist it's great to see what is necessary in the world to allow math to express itself or to unlock the math through algorithms. I love computational geometry and the rasterisation problem.

Quantum Computing Since Democritus is an incredible book; any other 'casual' compsci reading?
Pic related comes close but is a little basic

>yfw you didn't factor in trade costs and you take it in the ass.
Btw this thread is full of tryhard cs brainlets who pretend to like mathematics
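
To put a number on the trade-cost point, here's a back-of-envelope check in Python; assuming (big assumption) symmetric win/loss sizes and a fixed cost per trade, with all figures made up for illustration:

def expected_profit_per_trade(accuracy, payoff, cost):
    # E[profit] = p*payoff - (1-p)*payoff - cost, symmetric payoffs assumed
    return accuracy * payoff - (1 - accuracy) * payoff - cost

print(expected_profit_per_trade(0.52, 1.00, 0.00))  # ~0.04  -> a 52% model prints money
print(expected_profit_per_trade(0.52, 1.00, 0.05))  # ~-0.01 -> costs eat the whole edge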

>tfw I did by taking advantage of an optimizer library (gurobi) and you're jelly because you never realized how to make automated ML programs that actually profit instead of simple code monkey bs

Me again. Just wanted to mention exploring the limits of compatability exposes both how the universe works AND how it interfaces with mathematical structures. We are definitely living in a unique subset of mathematics. Sort of like a prison of numbers.

Computability* fuck autocorrectors reeeeeeeeeeeeeeeeee

Oh, well in that case it's literally just fancy techniques for counting. You always do big-O analysis on a given algorithm: you define reading, writing, and arithmetic as one unit of operation each, and then you count how many of each you have to do. Big O is just a way of rigorously saying "I absolutely do not give a shit if it runs in 3n+7 steps vs 47n + log n steps, they're the same," and it makes reasoning about the math much simpler. Seriously, just look up some examples; it's very intuitive.
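
A toy Python illustration of the counting idea (and of the halving heuristic from earlier in the thread):

def count_pairs_ops(n):
    # nested loop over all pairs: n*n units of work -> O(n^2)
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

def count_halving_ops(n):
    # halve n until it hits 1: about log2(n) steps -> O(log n)
    ops = 0
    while n > 1:
        n //= 2
        ops += 1
    return ops

print(count_pairs_ops(1000))    # 1000000, i.e. n^2
print(count_halving_ops(1000))  # 9, roughly log2(1000) ~= 9.97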

Ya, once you think of your code in these terms you start to get intuition for a good loop iteration vs a bad one. Why do 100*n + 100*m calculations when you can get the same result with m + 100*n? Just compute the m step once and you've saved 99*m machine cycles. The more you think in these terms, the more you'll see the tradeoffs in time, space (memory), and "closeness", i.e. it is most efficient to run 2 algorithms when they have a strong degree of closeness in either their goal or their data. The less closeness you have, the more searching you must do. These are basically axioms and limitations for what can happen in this universe.
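
A minimal Python sketch of that hoisting idea (expensive_step is a stand-in name I made up for any loop-invariant computation):

def expensive_step(m):
    # pretend this costs ~m units of work
    return sum(i * i for i in range(m))

def naive(n, m):
    total = 0
    for _ in range(n):
        total += expensive_step(m)   # recomputed n times: O(n*m)
    return total

def hoisted(n, m):
    step = expensive_step(m)         # computed once: O(n + m)
    total = 0
    for _ in range(n):
        total += step
    return total

assert naive(100, 1000) == hoisted(100, 1000)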

pdfs.semanticscholar.org/b341/82dc45a624fcfb41e86958d205059488da61.pdf

Please, tell me literature to read.
I need to do things like this.
Help?