Why do CSfags care so much about polynomial time?

if [math]P(n) = n^100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000[/math], what makes it better than [math]exp(n) [/math] for all practical purposes?

That's a good question desu

I've always wondered that

bump

exp is infinitely bigger, that's what
that doesn't mean it's practical, which is why the idea that P=NP would be a big deal for practical things (e.g. crypto) is a meme

Because P(n) = n^c grows more slowly than E(n) = c^n for any constant c > 1. It may not be smaller for n below 10/100/1000/etc., but it will be for every sufficiently large input. In general small inputs take little time to evaluate anyway, so it's better to care about the big ones.
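To see where the crossover actually happens, here's a throwaway Python sketch (my own, not from anyone in the thread; the constants 100 and 2 are just picked for illustration, OP's exponent would push the crossover far past anything physical):

[code]
# Hypothetical illustration: find the n where 2**n finally overtakes n**100.
# Python's exact big integers make this trivial to check directly.
def crossover(c: int = 100, base: int = 2) -> int:
    n = 2
    while n ** c >= base ** n:
        n += 1
    return n

if __name__ == "__main__":
    print(crossover())  # a bit under 1000 for n**100 vs 2**n
[/code]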

Theoretical interest.

because anything that beats 2^n means we don't have to look at every possible combination and have instead come up with a deterministic algorithm that solves the problem some other way. Once that is done it usually becomes easier to find a much faster solution, because the 'trick' for avoiding the 2^n enumeration is known. The idea is that there is no logical reason to look at all input combinations of any fixed length, because the input can always be much larger.

TL;DR - Other people will almost always be able to improve your poly algorithm if it is really big.
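A classic instance of that kind of 'trick' (my own illustration, not something anyone here posted): deciding whether a graph has an Eulerian circuit looks like it needs a search over edge orderings, but Euler's criterion (connected, and every vertex has even degree) settles it in linear time.

[code]
# Hypothetical sketch: Eulerian-circuit test with no exhaustive search.
# A connected undirected graph has an Eulerian circuit iff every vertex
# that touches an edge has even degree.
from collections import defaultdict

def has_eulerian_circuit(edges):
    adj = defaultdict(set)
    degree = defaultdict(int)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
        degree[u] += 1
        degree[v] += 1
    if not degree:
        return True  # no edges at all: trivially yes
    if any(d % 2 for d in degree.values()):
        return False
    # connectivity check over the vertices that have edges (simple DFS)
    start = next(iter(degree))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(degree)

print(has_eulerian_circuit([(0, 1), (1, 2), (2, 0)]))  # True: a triangle
[/code]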

Think about Facebook and the massive amounts of data they handle. Time complexity becomes a problem.

what about for c = 0

>exp is infinitely bigger

no

P as defined in OP is bigger than exp for all practical purposes (it stays bigger until n is around the size of that exponent itself, which is already far beyond any physical input)

no, see above

Imagine you have an algorithm that is being fed an array of 2 million variables.

Would you like your algorithm to do 2 to the 2 million operations, or for it to only do 2 million squared?
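To put rough numbers on that (my own back-of-the-envelope, assuming about [math]10^9[/math] simple operations per second): [math](2 \times 10^6)^2 = 4 \times 10^{12}[/math] operations is roughly an hour of compute, while [math]2^{2\,000\,000} \approx 10^{602\,060}[/math] is a number with over six hundred thousand digits, so no amount of hardware will ever finish it.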

Are there practical problems solvable in polynomial time that require huge exponents? I feel like practical problems are rarely worse than O(n^4).

I mean there is some more obscure shit (like, say, finding the convex hull of a set of points in [math]\mathbb{R}^k[/math] where [math]k > 8[/math]), or other rarely encountered combinatorial optimization problems, but no one uses this shit in real engineering.
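For context (my own aside, not from the thread): by the upper bound theorem a convex hull of [math]n[/math] points in [math]\mathbb{R}^k[/math] can have [math]\Theta(n^{\lfloor k/2 \rfloor})[/math] faces, and Chazelle's algorithm computes it in [math]O(n \log n + n^{\lfloor k/2 \rfloor})[/math] time, so the degree of the polynomial really does climb with the dimension.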

Look up what 'general' actually means, tard.

ok but why is it that most P problems are O(n^4) at worst, then?

why is there such a gap between P problems and EXP problems?

i would have expected bigger degrees of polynomials to show up more often

I wouldn't say so, it's probably more that students aren't taught n^5-or-worse algorithms, because their applications are limited. If the main operation, the one performed n^6 times, takes 0.01 seconds, then evaluating it for, say, 50 elements would take years. Many calculations are performed on 10^5+ elements.
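Spelling that out (my arithmetic, same assumption of 0.01 s per operation): [math]50^6 \approx 1.56 \times 10^{10}[/math] operations, which is about [math]1.56 \times 10^{8}[/math] seconds, roughly five years, and that's for only 50 elements.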

100! is a number with roughly 160 digits; 100^3 = 10^6 has only 7. If you had to find the shortest possible route visiting 100 cities between Paris and Berlin and be 100% sure about it by checking every ordering, we would speak in terms of how many of Earth's lifetimes the evaluation would take.

>n^100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
I'm curious to see this algo

The Max-Bisection problem is approximable to within a factor of 0.8776 in around [math]O(n^{10^{100}})[/math] time.

>practical purposes

You might as well ask pure mathematicians why they care about their subject, even when it is not practical in real life.

This is why linked lists are so shit.

>exp is infinitely bigger
Congrats, pick up your CS degree at the Registrar's office.

>it can be improved I swear
kek

an enlightening example is matrix multiplication. the naive algorithm is n^3. the best known algorithms get the exponent down to about 2.37 (and it's conjectured you can get n^(2+eps) for any eps > 0), but these have tremendous constants that make even the ~n^2.7 ones useless for all practical purposes
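For reference, the n^3 baseline that actually gets used looks like this (a throwaway sketch of mine; real libraries call tuned BLAS kernels, not anything like the galactic algorithms):

[code]
# Hypothetical illustration: the plain O(n^3) matrix product.
# Asymptotically "worse" than Strassen-style algorithms, but its tiny
# constants and cache-friendly inner loop are why (in tuned BLAS form)
# it is what ships in practice.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):          # i-k-j loop order walks rows sequentially
            a_ik = A[i][k]
            for j in range(p):
                C[i][j] += a_ik * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
[/code]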

well the difference is that 3 is not a massive number; the point essentially stands that it would take a non-trivial analysis for the computer to find the answer to an NP-complete problem without simply checking all the permutations

What you've described is impossible.

Reminder that sleepsort has O(N) time complexity

The [math]\Omega(n \log n)[/math] lower bound only applies to comparison-based sorting algorithms. There are linear-time sorting algorithms that use bucketing systems and such.
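A minimal sketch of one such non-comparison sort (counting sort, assuming small non-negative integer keys; the code is my own):

[code]
# Hypothetical sketch: counting sort runs in O(n + k) time, where k is
# the size of the key range. It never compares two elements, so the
# Omega(n log n) lower bound simply doesn't apply to it.
def counting_sort(values):
    if not values:
        return []
    k = max(values)
    counts = [0] * (k + 1)
    for v in values:              # assumes non-negative integers
        counts[v] += 1
    out = []
    for key, c in enumerate(counts):
        out.extend([key] * c)
    return out

print(counting_sort([5, 3, 8, 3, 0, 1]))  # [0, 1, 3, 3, 5, 8]
[/code]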

whoops, N being proportional to the highest number
I meant O(1)
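For anyone who hasn't seen it, sleepsort looks roughly like this (my own Python rendition; wall-clock time is proportional to the largest value, and it falls apart under any real scheduler load):

[code]
import threading
import time

# Hypothetical sketch of sleepsort: each value sleeps in proportion to
# itself and then appends itself, so smaller values wake up first.
# The "work" is O(n) thread spawns, but the wall-clock time scales with
# max(values), and the scheduler can still reorder values that are
# close together, so it isn't a real sorting algorithm.
def sleep_sort(values, scale=0.05):
    result, lock = [], threading.Lock()

    def worker(v):
        time.sleep(v * scale)
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([3, 1, 4, 1, 5]))  # usually [1, 1, 3, 4, 5]
[/code]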

What polynomial-time algorithm is going to have that large a degree? The biggest polynomial you'll see is like n^4 for some inefficient algorithm.

It's the rate of growth that is important.