I'm still working on rewriting this in formal language. Please look at the logic and reasoning, and not the language. Thanks.
1. Let A be a set of n integers, each o bits long. For example: {...1101, ...0100, ...0111}.
2. Let f be an oracle such that f(B) reduces a set B of integers under XOR in O(1) time. For example: f({...1101, ...0100, ...0111}) = ...1101 ⊕ ...0100 ⊕ ...0111 = ...1110. The proof does not depend on this oracle; it is used only for simplification.
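As a quick sketch of the oracle's behavior (the name f is from the text; everything else here is my own illustration, and a real machine does the reduction in O(|B|) time rather than the O(1) the oracle assumes):

```python
from functools import reduce

def f(B):
    # XOR-reduce a collection of integers, mimicking the oracle in step 2.
    # The empty collection reduces to the XOR identity, 0.
    return reduce(lambda acc, x: acc ^ x, B, 0)

# The example from the text:
# 0b1101 ^ 0b0100 = 0b1001, then 0b1001 ^ 0b0111 = 0b1110
```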
3. Let the problem be: find a non-empty subset C of A such that f(C) = 0. (The empty subset trivially reduces to 0, so it is excluded.)
4. The problem is in NP: a solution can be verified in polynomial time, since once you have C you can call f(C) in O(1) time.
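The verification in step 4 can be sketched as follows (the helper name verify is mine, not from the text; without the O(1) oracle the check is linear in |C|, still polynomial):

```python
def verify(C):
    # NP verifier from step 4: check that a candidate non-empty subset C
    # XOR-reduces to 0. Linear in the size of C.
    if not C:
        return False  # the empty subset is excluded by the problem statement
    acc = 0
    for x in C:
        acc ^= x
    return acc == 0
```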
5. It is well understood that if x = a ⊕ b, it is impossible to calculate any one of x, a, or b without knowing the other two.
6. As such, given x, there is no algorithm that can select a from a set without also selecting b, because the values of a and b can only be obtained by performing f({x, a}) = b, f({x, b}) = a, or f({a, b}) = x. You always need 2 of the 3 values to calculate the 3rd. The same goes for x = a ⊕ b ⊕ c: you need 3 of the 4 to calculate the 4th, 4 of the 5 to calculate the 5th, and so on.
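The "2 of the 3" property in step 6 can be checked directly: because XOR is its own inverse, each operand of x = a ⊕ b is recoverable from the other two (a small demonstration, with the variable names taken from the text):

```python
# XOR is self-inverse: (a ^ b) ^ b == a, so any one of x, a, b in
# x = a ^ b follows from the remaining two.
a, b = 0b0100, 0b0111
x = a ^ b

recovered_a = x ^ b  # corresponds to f({x, b}) = a
recovered_b = x ^ a  # corresponds to f({x, a}) = b
recovered_x = a ^ b  # corresponds to f({a, b}) = x
```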
7. The same logic applies to all subsets of A: until you perform f(C) = 0, you gain no information about C. This is a very well understood property of XOR, which is why it is used in ciphers. If f(B) ≠ 0 for some candidate subset B, you are given no information about C. Think about it: if x = a ⊕ b, but you don't know that, what does x ≠ a ⊕ c tell you? Nothing.
8. As such, the only way to find C is trial and error over the subsets of A, which takes O(2^n) time.
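The trial-and-error search in step 8 can be sketched as an exhaustive enumeration (the function name brute_force is mine; in the worst case it visits all 2^n − 1 non-empty subsets, which is the O(2^n) bound claimed):

```python
from itertools import combinations

def brute_force(A):
    # Try every non-empty subset of A, smallest first, and return the
    # first one that XOR-reduces to 0. Worst case: 2^n - 1 subsets tried.
    for r in range(1, len(A) + 1):
        for C in combinations(A, r):
            acc = 0
            for x in C:
                acc ^= x
            if acc == 0:
                return C
    return None  # no non-empty subset of A XORs to 0
```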
9. Therefore, P ≠ NP.