Does P=NP?

Why do you think so?

Other urls found in this thread:

news.mit.edu/2009/explainer-pnp

I have no opinion about it.

If I had a proof for or against it, I'd already be on my way to my $1 million right now.

P is definitely equal to NP

R E E E E E E E

OUT!

N=1?

really makes you think...

No, because then the world would be exciting. But it's clearly mundane.

P=0

P is obviously not np

How so?

...

>Obviously
Please present your obvious proof and collect your million

Same as how the Riemann hypothesis is obviously true.

P = 0 and N = 0

But whereas you can experimentally test the Riemann hypothesis by checking the first however-many zeroes of the zeta function, there's really no way to test how close an algorithm is to being fully optimal.
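
For the curious, a minimal sketch of that kind of numerical check, assuming Python with the mpmath library (the zero indices are just illustrative):

from mpmath import mp, zeta, zetazero

mp.dps = 25  # working precision, in decimal places

# zetazero(n) locates the n-th nontrivial zero on the critical line
# Re(s) = 1/2; evaluating zeta there should return (numerically) zero.
for n in range(1, 6):
    s = zetazero(n)
    print(n, s, abs(zeta(s)))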

Someone explain this to a neuroscientist?

Neuroscientist? More like brainlet

I am.

news.mit.edu/2009/explainer-pnp

Someone correct me if I'm wrong and pop-sci'd too hard, but a computer algorithm's run time depends on how the code is written. For example: a recursive code to calculate factorials solves the problem in, I think, O(2^n) time. Since 2^n obviously becomes very large if I wanted to find the value of a large factorial, the code is inefficient for large numbers. Something runs in polynomial time if the code runs in O(n^x), where x is some natural number. Things that run in polynomial time are typically done pretty fast (at least relative to non-polynomial-time algorithms).

P=NP asks whether, if there is code that can verify a solution in polynomial time, the original problem could also have been solved in polynomial time. A quick example: it is difficult for a computer to find a combination of numbers from a given set that adds up to a specific target, since a code that does this would probably have to check every possible combination. However, if you were handed a combination that correctly adds up to the target, the computer could tell you almost instantly that it's correct, just by adding up the numbers.
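
To make the gap concrete, here's a minimal sketch in Python (the function names and the tiny instance are made up for illustration):

from itertools import combinations

def solve_subset_sum(numbers, target):
    # Solving: try every subset, which is O(2^n) candidates.
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

def verify_subset_sum(target, candidate):
    # Verifying: just add up the proposed numbers, one linear pass.
    # (A full verifier would also check the numbers came from the set.)
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
print(solve_subset_sum(nums, 9))       # slow in general: (4, 5)
print(verify_subset_sum(9, (4, 5)))    # fast: True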

If P=NP, then there is a way to solve the original problem roughly as fast as it can be verified. This would also apply to every problem in NP, meaning that (if the proof is constructive) you could find a fast way to solve problems in any of these circumstances. This could dramatically improve computing power, but it would also be devastating for cyber security (code-cracking algorithms could then be run very fast).

If P=/=NP, then some of these problems will in fact always take much longer to solve than to verify. Most computer scientists believe this.

Everything you said was complete bullshit.

Especially,
>a recursive code to calculate factorials solves the problem in, I think, O(2^n) time.

factorial(0) = 1;
factorial(n) = factorial(n - 1) * n, where n > 0.
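
A quick sanity check, translating that definition into Python and counting the calls:

def factorial(n, calls):
    calls[0] += 1  # count this invocation
    return 1 if n == 0 else n * factorial(n - 1, calls)

calls = [0]
print(factorial(10, calls), calls[0])  # 3628800 11, i.e. n + 1 calls: O(n), not O(2^n)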

Perhaps you meant "integer factorization" or something?

You're kinda close.

The real explanation is:
>let the input size be n.
>as a function of n, how many steps does it take to compute a desired property of n?
>O(some function) is, roughly, the set of all functions that grow no faster than that function's most significant term (constant factors ignored).
>ex: O(log(n)) < O(n) < O(n^2) < O(2^n)
>if this function of n is in O(n^k), where k is a constant, then the computation takes polynomial time.
>if this function of n grows like k^n, where k > 1 is a constant (so it outpaces every polynomial), then the computation is superpolynomial.
>now, assume we have something we want to compute. we need to answer 2 questions:
>(1) what's the smallest number of steps (the most efficient algorithm, the smallest O() class) in which we can compute the result?
>(2) what's the smallest number of steps in which we can check the correctness of a result?
>a good example is integer factorization. to factorize a product of two large primes, no polynomial-time algorithm is known; the best known methods take superpolynomial time in the number of digits n. to check a result, you just multiply the two primes together and see if the product matches the number you started with, which is a single multiplication, cheap and polynomial in n.
>if p = np, then (1) and (2) have essentially the same answers, and any computation can be done as quickly as it can be checked for correctness.
>if p =/= np, then (1) and (2) are fundamentally different somehow, and there are certain computations that simply can't be solved as quickly as they can be checked.
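
To put rough code behind that example (a Python sketch; trial division stands in for "the slow way", real algorithms are smarter but still superpolynomial):

def factor_semiprime(m):
    # Solving: trial division takes ~sqrt(m) steps, i.e. exponential
    # in the bit-length n of m (about 2^(n/2) steps).
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d, m // d
        d += 1
    return None  # m is prime (or 1)

def check_factorization(m, p, q):
    # Checking: one multiplication, polynomial in the bit-length.
    return p * q == m

print(factor_semiprime(10403))               # (101, 103), found the slow way
print(check_factorization(10403, 101, 103))  # True, checked the fast way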

The reason most computer scientists, myself included, think p must not equal np is that equality would have implications like "you can factor an arbitrarily large composite number more or less instantly," which sounds pants-on-head retarded at an intuitive level.

The reason it's so hard to prove is that you have to show that for a given computation, the *best possible* algorithm takes superpolynomial time, and proving anything about the best possible algorithm is incredibly hard, even for basic things like sorting.

That's why the Millennium Prizes will award 1 million USD to anyone who can prove P =/= NP.

It must be the hardest way to earn a million ever devised.

it's not
when you decompose an algorithm into a series of matrix products, a problem in NP will contain a noninvertible matrix

>Everything you said was complete bullshit.

Actually, the great majority of what he said is accurate.

>The reason most computer scientists, myself included, think p must not equal np is that equality would have implications like "you can factor an arbitrarily large composite number more or less instantly," which sounds pants-on-head retarded at an intuitive level.

Now this is pretty funny: referring to yourself as a computer scientist and then completely misapprehending the consequences of P=NP. Your extrapolation to factoring "more or less instantly" is not even correct in the loosest definition of "more or less".

The consequence of P=NP would be that there is an algorithm to factor integers in time proportional to n^k for some k. But such a k could be tremendously large, and the algorithm underlying it could be hopelessly complex for humans to discover.

tl;dr You have literally no idea what you're talking about.

Wait, so if I understand you correctly:

Given that checking whether a list is in order is O(n), then if P=NP the most efficient sorting algorithm sorts a list in O(n)?

Though that would mean the sorting would have to not be comparison-based? Because it can be shown that the lower bound for comparison-based sorting is Ω(n log n), no?

I'm pretty sure you're allowed to change the exponent in O(n^k): P=NP only requires that solving take polynomial time, not literally the same time as verifying. And O(n log n) is below O(n^2), so it's comfortably within polynomial time.
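
To make that concrete (Python sketch):

def is_sorted(xs):
    # Verification: one linear pass, O(n) comparisons.
    return all(a <= b for a, b in zip(xs, xs[1:]))

xs = [5, 2, 8, 1]
print(is_sorted(xs), is_sorted(sorted(xs)))  # False True
# sorted() is O(n log n): a bigger polynomial than the O(n) check,
# but polynomial all the same, so there's no conflict with P=NP.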

Not the author of that post here.
But anon, take into account the fact that the result grows exponentially. Obviously it is not O(2^n), but it isn't O(n) either.
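
You can see how fast it grows with a quick Python check (just an illustration of output size; the numbers printed are the bit-lengths of n!):

from math import factorial

for n in (10, 100, 1000):
    print(n, factorial(n).bit_length())
# 10 22
# 100 525
# 1000 8530
# the output has ~n log n bits, so if you charge for bignum arithmetic,
# the naive loop is not n multiplications of constant cost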

>the result grows exponentially.

It does not. CS uses the convention that numbers take values from a fixed finite range (machine words) unless otherwise specified.

For example, if you have two numbers [math]A[/math] and [math]B[/math], calculating [math]A \cdot B[/math] takes a constant time. Furthermore, a polynomial number of constant-time operations cannot result in an exponential-size output w.r.t. the input.

You can decide that numeric multiplication takes more than [math]\mathcal{O}(1)[/math], but then you will consequently affect the time complexities of countless algorithms that happen to use multiplication.

dude just take logs of computation time lmao

P != NP

Heuristics make problems exponentially harder to compute. I do not have a citation for this claim.

can someone ELI5 this?