What do you think about P=NP

Does P=NP

Other URLs found in this thread:

en.wikipedia.org/wiki/Lambda_calculus
faculty.fuqua.duke.edu/~rnau/definettiwasright.pdf
en.wikipedia.org/wiki/Probabilistically_checkable_proof
scottaaronson.com/blog/?p=1720
scottaaronson.com/blog/?p=459
youtube.com/watch?v=YX40hbAHx3s

No, we're not doing this thread today. Fuck off.

What if P actually equaled 0 this whole time? I bet the computer scientists would feel pretty silly.

That would be pretty funny.

It's better than the constant /pol/ threads

It's not like we need to have one or the other.

I'm also not sure it is but it's a pretty close thing.

Well, as a Computer Scientist, I will have you know that this problem is highly integral to the fundamental nature of the universe. For instance, when you have a computer, and you want it to solve a problem that's easy to solve when you have the answer, then you've got an NP hard problem. Surprisingly, we Computer Scientists are good at naming things, because NP hard problems are, in fact, very hard, that is, unless you already know the answer. So for instance, say you want your calculator to solve Sudoku. Well, if you know the answer, then it's easy to solve, but if you don't know it, then it's hard, in fact unless you know the answer, then it's not easy. Therefore, Sudoku is NP hard, very hard in fact, as you might expect. Then take for example, calculating the nth digit of pi. That's also very hard, because pi never ends, but if you already know the nth digit, then it's very easy, but remember how I said it was hard if you don't already know the answer? That's what makes this an NP hard problem. So then you might think, well what if I just solve the hardest NP hard problem, then I'll have solved all of them? But sadly that doesn't work, because if you have some problem, you can always make a harder problem. For example rather than just asking, how do I compute this, you might instead ask how do I compute this twice? And voilà, you've got a twice as hard problem, so therefore, there is no NP hardest problem. Your other choice then might be, well what if I solve all of them? And you know what? That's very hard. In fact, solving an algorithm is an NP hard problem, so in order to solve it, you would have to solve it, which is just impossible. I know this might all be very hard to understand, in fact I just passed my computational theory for Computer Scientists class last semester, and even I have trouble sometimes grasping all of the fancy math stuff, like proof by interrogative and all that stuff. So don't feel bad if this is all hard, because it's really NP hard!

I wasn't posting this to be a troll or for fun; I wanted a serious discussion about this. I find it very interesting. I think about this every day, I just didn't know it was called P=NP until today. I always think to myself, I'm sure there is an easier way to solve this problem. And I'm talking about any problem, not just in math but in life too. There has to be an easier way to do things than what we think is the easiest way to do them.

>I think about this every day, I just didn't know it was called P=NP until today

Why must you turn this board into a house of lies?

I understand it; it's easy for me to understand. I'm going to school now to become a computer scientist, and I have about 3 years left. It's weird, I suck at math but I know I have to learn it. When it comes to computers it's very easy for me; I've been around computers since I was 6 and now I'm 27. I guess I have the right mind for a computer scientist, you could say.

I really do think about it every day.

>I think about this every day, I just didn't know it was called P=NP until today.
We don't need another thread about it from someone who's just come upon the subject. Sorry but this thread isn't happening.

well certainly that would solve P = NP, but what does N equal?

It's my understanding of the P=NP problem that the only question is whether a computer is capable of answering every question that has a solution.

To this I say yes.

Please keep in mind that I'm very ignorant and have only read a few paragraphs from the Wikipedia article on P=NP.

P = NP

P/P = N

P/P = 1 therefore N=1

there, I can't believe you are so stupid

but if P = 0, that wouldn't work
it must mean that there is a quantum superposition in P = NP where P equals 1 and N equals 0 at the same time, and the true pursuit of the problem is to collapse the wavefunction and determine which one it is
0? or 1
and funnily enough...
binary is entirely made out of zeros and ones.
that's why this is such an important question for computer science

n can be anything^0

It's a question like "when is something alive".

We're asking an absurd question; you can't find a difference in 'states' we assigned.

I read something along the lines of "why do we expect to find this in ZFC?"
I know what ZFC is (pop-sci knowledge) but I don't know what is meant by this statement.

Is anyone here skilled enough to create a model where P=NP and see if it applies in reality?

I thought a P solution for Sudoku was found by some dude recently. I am not sure if it's Sudoku, it might be something else though. My phone can solve a sudoku puzzle pretty quickly, or is the problem with the scale?

But what you said about "not knowing the answer" is interesting. What do you mean by this?

Say I have some very long polynomial that maps N->N. I know that the answer must be some integer but I don't know the exact answer. So I let a computer solve it by brute force.

Is this considered hard? Or not hard? Because I don't know the answer exactly, but I know pretty much all of the answer's properties.

>there still are people who think this is witty or funny

Ive solved this!
P != NP
because:
P = 16
NP = 14*16 = 224
16 != 224

where is my prize?

>Well, if you know the answer, then it's easy to solve, but if you don't know it, then it's hard
This is the most retarded formulation of "verifiable in PTIME" I have ever read, please end yourself

N=1
Is everyone in this thread a fucking troglodyte?

Serious answer here:

Intuitively I believe that P = NP. I am inclined to think this way based on my observations of human potential and my belief in continuous improvement, which under the processing power of a supercomputer would become greater by many orders of magnitude. Donald Knuth also thinks the same. Without further ado, I will present the solution path I have for approaching a proof of the problem:

Legitimate and constructive criticism and suggestions are much appreciated in advance.

1/?

>which under the processing power of a supercomputer would become greater by many orders of magnitude.
What would the question of whether P=NP have to do with processing power?

A while ago, I attended a mathematical conference where the lecturer was talking about random walks inside a black box and how to minimize them using Markov chains and other models. Since it's a stochastic model, I decided to expand it into an exponentially converging system to serve as a basis for an NP problem solution.

If you check Bayesian probability, it's focused on probability that has no set truth or falsity value. Its offshoot, the de Finetti theorem, is my favorite, and it extends into quantum information. You should definitely read Bruno de Finetti, btw.

If you check the lambda calculus's proof of "Undecidability of equivalence", it says: en.wikipedia.org/wiki/Lambda_calculus

"There is no algorithm that takes as input two lambda expressions and outputs TRUE or FALSE depending on whether or not the two expressions are equivalent. This was historically the first problem for which undecidability could be proven. As is common for a proof of undecidability, the proof shows that no computable function can decide the equivalence. Church's thesis is then invoked to show that no algorithm can do so.

Church's proof first reduces the problem to determining whether a given lambda expression has a normal form. A normal form is an equivalent expression that cannot be reduced any further under the rules imposed by the form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e that closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results."

2/?

Why can't computers solve NP in finite time? There is no infinite processing power.

This paper pretty much sums up my views on probability
faculty.fuqua.duke.edu/~rnau/definettiwasright.pdf

Independently, I had placed Gödel's incompleteness theorem next to NP as an analogy: both can be verified but not solved (of sorts).

So I believe that if you use probability simply as an indicator, it's very useful, because it simplifies a lot of things and lets you filter out unrelated solutions, which is huge in computational complexity. But in order to objectively assess the solutions, you need a mathematical and logical foundation. CS people like to strictly run machine learning as an optimization, which always blows up on extremely complex problems. It's useful in cases like Dijkstra's shortest-distance calculations, etc.

The solution path should look like this: take axioms and a classical logical approach (a priori / bottom-up; premises may not be correct), form the simplest solution foundation, see where it comes up short, use frequential probability to quantify the % of complete computation in a finite time, then leave it; next, use heuristics on a complex problem via heuristic algorithms (a posteriori / top-down; premises are correct), start breaking the problem down by optimization, use Bayesian probability to quantify the % of a complete result (100% accuracy instead of good-enough approximations), and leave it.

IF the complete computation & complete result = 100% -> P=NP.

Basically, this is the approach to verifying NP problems: en.wikipedia.org/wiki/Probabilistically_checkable_proof
I would try to solve a non-deterministic NP problem by transforming Bayesian inference probabilities into frequentist inference. I don't know where to start with the mathematical proof, though.

3/3

>Intuitively I believe that P = NP
Same here; it stands upon some Platonic ideas but I think it will do the work.

Your understanding is wrong. P=NP is about the difficulty of solvable problems, not which problems are solvable at all.

The question is whether all problems that we believe require more than polynomial time to solve are actually able to be solved in polynomial time.

Computers CAN solve NP in finite time. The problem is that, for large inputs, that finite time might be many times the age of the universe.

It's foolish to intuitively believe P=NP. Every other area of science has shown that there are physical limits that restrict just how efficient things can be.

There IS a limit to how efficient computation may be, and it seems as though NP might be that limit.

you are right, allow me to rephrase my statement - what I meant by finite time is an acceptable length of time on a human scale.

Doing computations faster doesn't change the algorithmic complexity.

>physical limits that restrict just how efficient things can be.

I'll expand this a bit for the sake of the argument if you wouldn't mind:

I like to make analogies to explain things because they simplify the understanding of a problem. In this case, I'll treat the physical limit you've mentioned as an analogy to the efficiency of an operation in an industry.

There are two ways to improve (or maintain, if you take diminishing returns into account) profits, as you know. One is cutting costs and the other is R&D. However, due to the base requirements of production, you can only cut costs so much. When a product requires a production line that exceeds your facilities' capacity (NP), the production can take years.

Then what is the other approach we can take? As I've mentioned, find another way to approach it. Don't get stuck on physical improvement; carry it over to logical improvement. The rule of inference used in computation is that it takes a premise (input), runs it over syntax (process) and returns a conclusion (output).

Inductive reasoning and natural deduction hold premises true and work from there, which also produces "probable truth" conclusions. Based on my own understanding of the symmetry I've constructed in pic related, there are three paths that lead to understanding the computational complexity problem, and I can say that the P vs NP problem will be solved through a method that immediately selects values of probable truth, which are then churned for ultimate truths, and within these sets of truth values comes an AI, running on abductive reasoning, to decide the solution path.

I have developed a rigorous mapping method that jumps between layers of a priori (frequential) and a posteriori (Bayesian) approaches, slicing the gap between them at each interval. I'm trying to make this slicing converge in exponential time, and I will claim a proof based on Probabilistically Checkable Proofs.

This is the R&D.

You are indeed right again, but you're talking about computability, while P vs NP is a problem of computational complexity.

Bumping for more replies to these.

Platonic computation knows no bounds

Delicious new pasta

...which is interesting, because it goes back to the incomputability of the mind embedded in spacetime geometry, which is a hypothesis for the Hard Problem of Consciousness. Holding the human brain and computers to be complete analogues of each other was the starting point of my P vs NP journey, although I was not aware of the problem back then. Which kind of explains why I was blown away when I first saw it.

Perhaps the holographic universe could be another link to it. String theorists shit their pants when it comes down to this.

> Why can't computers solve NP in finite time?
It isn't a question of finite versus infinite, it's a question of polynomial versus something worse (e.g. exponential).

True, I've admitted making a mistake using the word finite there, as you can see here. Yet having computers with infinite processing power would crunch through any exponential-time problem with ease. We're dealing with this problem because we are physically limited.

Which brings me back to my point here. Now we're traversing over to /x/ territory, but I'd like to point out an interesting quote by Carl G. Jung:

From the Letters of C.G. Jung, regarding the observation of the synchronicity phenomenon:

>If we consider the psyche as a whole, we come to the conclusion that the unconscious psyche likewise exists in a space-time continuum, where time is no longer time and space is no longer space. Physics has reached the same frontier. Since the one line of research proceeds from within outwards and the other from without inwards, and there is no hope of our reaching the point where two meet, there is nothing for it but to try to find points of comparison between the deepest insights of both sides.

Just look at how remarkably similar his description is to the P vs NP problem as I've stated above! Continued:

>Today it is a question of finding conclusive proofs. If one wants to work in this field, one encounters the further difficulty that the statistical method of science, which alone furnishes adequate proofs, stands in a relation complementary to synchronicity. This means that when we observe statistically we eliminate the synchronicity phenomenon, and conversely, when we establish synchronicity we must abandon the statistical method. Nevertheless, the possibility remains of collecting a series of individual instances of correspondence, each instance throwing its own light on the phenomenon of synchronicity.

This description of Jung's parallels the physics developments of his era, namely quantum physics. What he described here is basically the collapse of the wave function. Pic related.

I'll admit I'm not the most computer literate, and I still don't really get the P=NP problem. Would someone be able to explain it in layman's terms?

Which brings us back to the tools at hand, as the P vs NP problem is no longer a local optimization but a global one:

>Direct Monte-Carlo sampling
for the frequential statistics operations

>Bayesian optimization
for the black-box functions

And where these meet:
>Markov chain Monte Carlo (MCMC) sampling

The quantum computation that would run these operations, according to the synchronicity vs statistics analogy as outlined by Jung above, would face the problem of unitarity. Thus, the problem would need to be solved in Hilbert space through antiunitary operators, and its application to computer science is beyond me.

This is where I leave you, have a good day.

N(on)P(olynomial) problems are certain problems whose solution is verifiable (checked when the answer is given) but not solvable in P(olynomial) time. Solving these problems often requires exponential time, making it impossible to reach an answer for many, many years.

The question is: if you can verify these solutions in polynomial time, can you solve them in polynomial time as well? The above explanation is from a CS standpoint, and most CS academics reject the possibility of P = NP. Yet neither a proof nor a disproof has been given.
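
To make the verify/solve gap concrete, here's a minimal Python sketch (subset-sum as the example problem; all names here are my own illustration, not anything from the thread). Checking a proposed answer is one cheap pass; finding one from scratch falls back to walking the subsets.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    # Polynomial-time check: membership test plus one sum.
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve(numbers, target):
    # Brute-force search: may try all 2^n subsets in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, 9, [4, 5]))  # True -- checking a given answer is fast
print(solve(nums, 9))           # same answer, found only by searching
```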

Class P is the set of problems which can be solved in polynomial time, i.e. the time taken to solve a problem of size N is proportional to N^k for some k.

Class NP (non-deterministic polynomial) is the set of problems which could be solved in polynomial time if you had an "oracle" to make decisions for you.

Problems in class NP correspond to "search" problems, where at each step you have a choice of what to try next. An example is determining whether a logical proposition is true or false.

With a deterministic computer, the only algorithm guaranteed to find an answer is to try all of the combinations, in some order. This requires exponential time: e.g. if there are two choices at each step, then there are 1024 possible combinations for the first 10 steps, 1024x1024 = 1048576 combinations for the first 20, and so on.

But if you "knew" which choice to make at each step, the time would only be proportional to the number of steps.

The question P=NP is whether these two classes are the same, i.e. is there a trick which would let us solve the NP problems in polynomial time. Intuition says "probably not" (i.e. the NP problems seem inherently harder), but there's no proof of this.
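
A toy Python illustration of that search, assuming SAT-style branching (the formula below is arbitrary, just something to search over): a deterministic machine walks the 2^n truth assignments in order, while a machine that "knew" the right choice at each step would need only n of them.

```python
from itertools import product

def satisfies(assignment):
    # Arbitrary example formula over x1..x4:
    # (x1 or not x2) and (x2 or x3) and (not x1 or x4)
    x1, x2, x3, x4 = assignment
    return (x1 or not x2) and (x2 or x3) and (not x1 or x4)

tried = 0
for assignment in product([False, True], repeat=4):  # 2^4 = 16 combinations
    tried += 1
    if satisfies(assignment):
        print(f"satisfying assignment {assignment} after {tried} tries")
        break
# With n variables this loop can run 2^n times; an oracle making the
# right True/False choice at each step would take only n steps.
```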

NP actually stands for Non-deterministic Polynomial time but your explanation is on point apart from that.

>N(on)P(olynomial)
it's nondeterministic polynomial

So to put it very simply, it's essentially about whether or not there's a hard limit on how quickly we can solve problems?

Yes.

The interesting thing about NP is that there is a class of "NP-hard" problems which are provably at least as hard as any NP problem, in that any NP problem can be modelled as an analogue of the NP-hard problem, so any solution to the NP-hard problem would also be a solution for every NP problem.

Some of these NP-hard problems are themselves NP problems (i.e. if you could make the right choice at every step, you could solve them in polynomial time). These are called NP-complete problems.

So if you could find a polynomial-time solution to any NP-complete problem, you could solve every NP problem in polynomial time, and thus P=NP (i.e. class NP would be a subset of P).
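
Here's a rough sketch of what "modelled as an analogue" looks like in practice, with a problem I picked myself: graph 3-coloring (NP-complete) encoded as boolean clauses and handed to a SAT routine. The solver below is a brute-force stand-in, since no polynomial one is known; if it were polynomial, the whole coloring routine would be too, which is exactly the P=NP claim.

```python
from itertools import product

def coloring_to_clauses(vertices, edges, k=3):
    """Encode k-coloring as clauses over variables (v, c) = 'v gets color c'."""
    clauses = []
    for v in vertices:
        clauses.append([((v, c), True) for c in range(k)])            # some color
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([((v, c1), False), ((v, c2), False)])  # at most one
    for u, v in edges:
        for c in range(k):
            clauses.append([((u, c), False), ((v, c), False)])        # endpoints differ
    return clauses

def brute_force_sat(clauses):
    """Stand-in SAT solver: exponential enumeration of assignments."""
    variables = sorted({lit for clause in clauses for lit, _ in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[lit] == want for lit, want in clause) for clause in clauses):
            return assignment
    return None

vertices, edges = ["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]
model = brute_force_sat(coloring_to_clauses(vertices, edges))
if model:
    print({v: c for (v, c), val in model.items() if val})  # vertex -> color
```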

I think they are talking about computation not computers.

It's essentially about whether certain problems are inherently difficult, or whether we just haven't figured out how to solve them quickly yet.

If you were to bet on it, most mathematicians would probably bet on "fundamentally difficult" (i.e. "P=NP" is false).

>proof by interrogative

p=p
p=/=n
p =/= np

Bump

>Donald Knuth also thinks the same
lel

> I believe p=np
So you actually believe that if I strum a chord on a guitar, I'm the next virtuoso?

I believe you have the potential for improvement, yes.

Care to share your reading list on this topic? I'm interested.

NP is in EXPTIME (just brute force the solution), so it's not gonna be worse than exponential time.

I believe you lack a fundamental understanding of p vs np.

You claim there is no limit to the efficiency of our algorithms. This is so obviously false. That would imply that with enough cleverness, we would instantly know the answer with a single operation... on any problem. I.e. in constant time.

P vs np is not actually about computation time. Everyone agrees that a faster computer can solve a problem faster. P vs np is about the number of calculations that need to be done to solve the problem. No matter the speed of your computer, it's still the same number of calculations.
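
Some back-of-the-envelope numbers to drive that home (the machine speeds below are made up, but in the right ballpark for real hardware): a thousandfold speedup barely dents an exponential operation count, because the count itself never changes.

```python
SECONDS_PER_YEAR = 3.15e7

for ops_per_sec in (1e9, 1e12):  # roughly a desktop CPU vs. a much faster machine
    years = 2**100 / ops_per_sec / SECONDS_PER_YEAR
    print(f"2^100 operations at {ops_per_sec:.0e} ops/s: {years:.1e} years")
# ~4e13 years vs ~4e10 years: both absurd, since the count 2^100 is fixed.
```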

You're right and wrong. Yes, I caused a misunderstanding with the reply you've quoted, so I should've omitted it in the first place. However, NP-complete problems are theorized to be solvable by non-deterministic Turing machines, and this is exactly where I'm going with my solution track. It's not the problem that's getting simpler, it's the computer that's getting craftier, and it's not just as simple as infinite processing power would imply: it's quantum computing and all the other newly explored boundaries that will shift the computer engineering paradigms as we know them.

But I can't, because the reply above is inherently right. I don't have academic knowledge of this matter, just snippets from here and there that I piece together. I would be doing you a disservice by recommending anything, really; it's not my place to do so.

this

I feel like people who say "non-polynomial" have no clue about the problem.

>not_this_shit_again.png

>So then you might think, well what if I just solve the hardest NP hard problem, then I'll have solved all of them? But sadly that doesn't work, because if you have some problem, you can always make a harder problem.

...no.

We assume certain axioms (ZFC) to do math. It's possible (but unlikely) that unsolved problems could actually be impossible to prove either way with those axioms. It's a cop out though, sort of like the anthropic principle and the string theory landscape. It's like saying "welp, we haven't found the answer yet, so it must not have one!"

You have no idea what you're talking about, learn more real math first.

Also, why isn't natural deduction under deductive reasoning?

Ok, this was the answer I was expecting.

As for natural deduction, I wanted to highlight the rules of inference, since mathematical induction = deductive reasoning, but I agree it could have been done better.

You want to do something with a thing.
Somethings take time to do and will take more time if there are more somethings within that something.
As you do that something with more things, it takes longer to do.
Some somethings take longer to do than others as you do it with more and more things.
Many somethings are the same as other somethings but dressed up a little bit.
Are the somethings that take longer to do than others as you do it with more things actually the same as the others but dressed up a little?

Randomly generate an answer. It likely won't be the correct answer. But it still could be. And if it were, it would have been solved pretty fast.

I see, so basically you want to talk about solving NP problems with different methods... not NP vs P.

What you described has nothing to do with the problem of NP vs P.

nice pasta m8

I have yet to hear a single argument against P=NP which I find convincing. As such, for now, I choose to believe P=NP, but that for NP problems either the constants or the exponents in the polynomial are pointlessly huge.

Consider something like primality testing. We have a provably polynomial solution in AKS but for specific *ranges* of primes we can blow that the fuck out with different tests. I can easily imagine exponential algorithms with small constants that beat polynomial algorithms with large constants just on the set of real-world problems we'd like to tackle.
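
A quick sketch of that trade-off with invented cost functions (nothing here is a real algorithm's running time): an exponential with a tiny constant can undercut a polynomial with a huge constant on every input size anyone actually uses, even though it loses asymptotically.

```python
# Invented cost models: a polynomial with an enormous constant factor
# and degree, versus a plain exponential with constant 1.
poly = lambda n: 1e6 * n**10
expo = lambda n: 2.0**n

for n in (10, 40, 80, 90, 120):
    winner = "exponential" if expo(n) < poly(n) else "polynomial"
    print(f"n={n}: the {winner} algorithm is cheaper")
# Here the exponential wins for n up to roughly 80 -- plenty of
# realistically sized inputs -- before the polynomial takes over.
```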

It would surprise me to find that proving P=NP one way or the other is independent of our axioms. Some people seem to enjoy thinking this, though I don't understand what they think links P=NP to undecidability.

I do not expect to have a proof or refutation come up in my lifetime. My guess is we'll have decided GRH or the Collatz conjecture (pointless though it is) before this.

>for NP problems either the constants or the exponents in the polynomial are pointlessly huge.
>polynomial
>np

You obviously have no clue what you are talking about.

>I choose to believe P=NP

P = NP
P/P = N
N = 1

I solved it :^)

Why on earth would we solve the Collatz conjecture first? There's almost zero theory of discrete dynamical systems to use on it. Complexity theory is still pretty new but we know a lot about what techniques exist and where they work or don't work. Plus, there's probably 10x more people working on P=NP, it's way more important, more prestigious, and more fashionable.

Also, all the people ITT who are spewing ignorance about the problem, please read what Scott Aaronson has written about it, at least so you can get an idea of what it actually means:

scottaaronson.com/blog/?p=1720
scottaaronson.com/blog/?p=459

Who could solve the P vs NP problem among active academics?
We all know that if anybody is going to solve RH, it's gonna be Mochizuki or Perelman...

The last important result in theoretical CS was the quasipolynomial graph isomorphism algorithm by László Babai, and I don't think it was such a big deal... Could Laci find out next that there can be no polynomial algorithm for the graph isomorphism problem? That would mean a problem that is in NP but not in P.

>scottaaronson.com/blog/?p=1720
really silly blog post; Aaronson got told big time by Motl. Treating a mathematical theorem as a scientific hypothesis, where failure to find a proof constitutes positive evidence of the absence of such a proof, is just ridiculous

Did you even read the post?

Motl made equivalent arguments for the Riemann hypothesis being likely true but said that these arguments don't work for discrete math or complexity theory, because reasons.

I disagree that RH is likely to be true for that reason. If you're so sure about RH and P != NP, why isn't there a proof?
Also, accepting conjectures simply because we have a hard time refuting them doesn't contribute to our better understanding of mathematics; it just makes us look like religious clowns.

I think you have it exactly backwards. People who work assuming the truth of open conjectures often find interesting results. If they can then turn an implication into a biconditional, these results may open up different avenues of attack.

Refusal to consider the truth of undetermined propositions makes you look like a religious clown.

You don't have to believe the truth or the falsity of an undetermined theorem to consider what the implications of truth or falsity would be

One doesn't have to believe, no, but in practice we form conjectures based on things we do believe to be true. Such a network of beliefs informs our tactics for seeking truth. This is not religious in nature, even metaphorically; it's simply an observation of the interaction between psychology and logic.

Not if it solves NP in polynomial time.

(Please correct me if I'm wrong.)
NP stands for Non-deterministic Polynomial, and P stands for Polynomial, and they are both classes of problems that can be solved by a machine (a RAM or a Turing machine or anything equivalent).
NP is the set of all the problems for which there exists an algorithm that gives the solution of the problem in a non-deterministic machine in polynomial time. A non-deterministic machine is like a normal machine but it has the ability (some special instruction if you want) to answer ALL decision problems in polynomial time. For example, if the answers of your problem are numbers (represented in binary), you can, for each digit ask the non deterministic machine if it'd be 'better' if you chose 1 or 0 as the value for this digit. Of course the problem (its formulation at least) shouldn't have more than one numerical answer. Now this is clearly equivalent to asking the machine to guess the answer of the question for us, so we'd just have to check if the given answer is a solution or not (in polynomial time of course in a non deterministic machine).
P on the other hand is the class of all the problems for which there exists an algorithm that gives you the solution in polynomial time on a normal machine.
It is clear that P is included in NP, because a non deterministic machine has all the abilities of a deterministic machine.
The problem is whether the converse is true, that is, whether NP is also included in P, and thus P = NP.
NP complete problems are the hardest problems in NP: solving any problem in NP can be reduced (in polynomial time) to solving one of these NP-Complete problems. You can see it as NP-Complete problems are the most general form of problems, and easier problems are "special cases" of them.
So, if you solve any NP-Complete problem in Polynomial time, you've solved all NP in polynomial time and proved NP = P.
There are many NP-Complete problems, and no one was able to find an algorithm for them in polynomial time.

looks like bullshit (and a long one!) to me kek

Does the number of puzzles able to be generated as a function of polynomial time equate to the number of puzzles able to be generated as a function of non-polynomial time, assuming infinite time and tries?

>there are people who think that comment doesn't have several layers of irony

We're so post-irony that we mean what we say now. Get hip.

Gay

Some corrections:

>A non-deterministic machine is like a normal machine but it has the ability (some special instruction if you want) to answer ALL decision problems in polynomial time

No, a non-deterministic Turing machine is just a Turing machine that can follow multiple execution paths at once, as many as you want. So instead of having T amount of time resources it essentially has 2^T time resources for the same amount of time.

>For example, if the answers of your problem are numbers (represented in binary), you can, for each digit ask the non deterministic machine if it'd be 'better' if you chose 1 or 0 as the value for this digit.

Eh, these aren't really the problems they're talking about. A decision problem is one where you have an algorithm that starts out with a number (say, written on the tape in binary) and decides, in a finite amount of time for any input, whether or not it belongs to a particular set. P = the set of decision problems that have a polynomial-time decision algorithm.

The other stuff you said is ok.

>No, a non-deterministic Turing machine is just a Turing machine that can follow multiple execution paths at once, as many as you want. So instead of having T amount of time resources it essentially has 2^T time resources for the same amount of time.
this is equivalent to making lucky guesses at runtime
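
A rough Python sketch of those two views, using a toy acceptance condition I made up (guess a hidden bit string): deterministic simulation walks up to 2^T choice sequences, while the "lucky guess" reading is one guessed branch plus one cheap check.

```python
from itertools import product

def check(choices, hidden):
    # Per-branch work, polynomial in T: does the choice string match?
    return list(choices) == hidden

def deterministic_simulation(T, hidden):
    # Explores every execution path: up to 2^T branches of T cheap steps.
    for branch, choices in enumerate(product([0, 1], repeat=T), start=1):
        if check(choices, hidden):
            return branch  # how many branches the deterministic machine walked
    return None

def lucky_guess(certificate, hidden):
    # The nondeterministic view: one guessed branch, one polynomial check.
    return check(certificate, hidden)

hidden = [1, 0, 1, 1, 0]
print(deterministic_simulation(5, hidden))   # walked 23 of the 2^5 branches
print(lucky_guess([1, 0, 1, 1, 0], hidden))  # True after a single check
```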

^Thanks for the corrections! I've watched a spooky video on YouTube and read some of the P vs NP article on Wikipedia; that's all my background.

Spooky video: youtube.com/watch?v=YX40hbAHx3s

Of course it doesn't, you absolute idiot. P=P and N=N; on what planet is the letter P equivalent to the letter N? This is why I fucking hate scientists.

How come x = vt?

>P
>NP

Both measurements of complexity.