Why do many consider this man a great mathematician?

he was a logician

Understand the history of mathematics.

another thing

Why do many consider Godel the "greatest logician of the 20th century" or the "greatest logician since Aristotle"?

he only had 1 result, the incompleteness theorems

> the incompleteness theorems
Which are false anyway.

[citation needed]

Have you even read his work yourself?

have you?

Yep. Have you read on the Halting Problem then?

Are you that guy who not long ago shared his crackpot ideas on general relativity, Godel's incompleteness theorem and the Halting problem?

Kek yes, it's me.

:D

kek

>he only had 1 result, the incompleteness theorems
He had the completeness theorem as well, which is less of a meme but far more useful in practice (e.g. for building proof checkers and compilers), and it has important theoretical implications for automated theorem proving (though I'm not sure it has effective practical applications there).

His line of argument was thoroughly mathematical in nature, and it placed limits on mathematics itself. He used numbers and arithmetical constructions to prove things. It doesn't get more mathematical than that.

logic simply didn't develop at all in the intervening period. It is not a disservice or a misrepresentation of the interval to characterize it as monks reading Aristotle, who was literally wrong about absolutely everything that actually matters anyway. Likewise in mathematics (specifically algebra): from Babylon through the early Renaissance or thereabouts, the furthest that anybody got was fucking quadratic equations. Over and over and over fucking again, for two thousand-plus years, until society reached a certain critical mass.

Likewise mathematical logic. It just took a bit (three hundred years or so) longer to get going.

You should really brush up on your history of logic and mathematics. Your claims are all just layman misconceptions.

No, they aren't. That really is all it amounts to. I studied mathematics, history and philosophy. This is not a knock on my education or my quality as a student.

Fictive logical premises don't count. On the other hand, try actually studying the history of mathematics in depth to understand how depressing the early days are in their repetition, their slowness, and their never going anywhere new.

Aquinas and Pascal and Spinoza's normative feelings don't count.

I haven't seen. What's up with the halting problem?

Mathematics is just applied logic.

So was Godel the last great logician?

he solved one of the biggest open problems of his time

He also proved the consistency of CH.

You have a misconception about mathematics. Logic is a subfield of mathematics.
He was a pure mathematician and one of the best mathematicians we've ever seen. You aren't a mathematician, so obviously you wouldn't understand anything he did.

independence*

>independence
Well that's kind of the same thing, isn't it? Since independence simply means that both CH and its negation are consistent.

>Mathematics is just applied logic.
>Logic is a subfield of mathematics.
One of us is going to have to change. [math]\blacksquare \! \blacksquare \! \blacksquare \! \blacksquare \! \blacksquare \! \blacksquare \! \blacksquare \! %Or we could simply learn to live with the circularity.[/math]

>Well that's kind of the same thing, isn't it?
No. "LEM is consistent with a logic" and "LEM is independent of a logic" are two different statements.
>Since independence simply means that both CH and its negation are consistent.
If and only if ZFC is consistent.
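For the record, the exact relationship being circled here (a standard fact, stated via the completeness theorem): [math]\mathsf{CH} \text{ independent of } \mathsf{ZFC} \iff \mathrm{Con}(\mathsf{ZFC}+\mathsf{CH}) \wedge \mathrm{Con}(\mathsf{ZFC}+\neg\mathsf{CH})[/math], and either side of the equivalence already implies Con(ZFC), which is why the consistency rider keeps coming up.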

In WKL_0 + Con(MAH). Let T = ZFC + V = L + {c_i: i in
Z} is a medium strong set of indiscernibles. Here medium strong means
ordinal indiscernibles with respect to all formulas with parameters
below any indiscernible below the indiscernibles being used, and also
that ordinals defined by indiscernibles are smaller than the next
higher indiscernible. T is consistent because Con(MAH). Let M be a
model of T and let M' be the submodel of elements definable with
parameters c_i, i in Z. M' is an elementary submodel of M.

D is the usual On x Q[-1,1], and

if we lower by 1, we get below kappa_i. Now suppose alpha
= t(c_i1,...,c_in) has t(c_i1-1,...,c_in-1) < kappa_i. Then by
indiscernibility, t(c_i1,...,c_in) < kappa_i+1.

We now define j:On into On. Here j is external and On is internal to
M'. Let alpha = t(kappa_i1,...,kappa_in). Take j(alpha) =
t(kappa_i1+1,...,kappa_in+1). By indiscernibility, j is well defined
and j is strictly increasing. Clearly j maps each interval [0,kappa_i)
strictly increasing onto [0,kappa_i+1) by the previous paragraph.

Let W be the initial segment of On of ordinals below every kappa_n, n
in Z. Then W is left closed right open, and any statement with
parameters from the kappa's and parameters from W remains unchanged
when we add 1 to the subscripts. This is because the indiscernibles
are medium strong.

We now return to S* containedin D^k. Let D*' be the common initial
segment of

MAXIMAL EMULATION EMBEDDING/2. MEE/2. Finite f::Q[0,1] into Q[0,1] is
ME usable if and only if it is strictly increasing, and
i. There is no 0 < p,q < 1 such that f maps 0 to p and q to 1.
ii. There is no 0 < p,q < 1 such that f maps p to 0 and 1 to q.

We now prove MEE/2 in ACA'.

DEFINITION 3.3.2. f::Q[0,1] into Q[0,1] is t-balanced if and only
if the following holds.
i. f is strictly increasing.
ii. fld(f) is of the form 0 = p_-t < ... < p_-1 < p_1 < ... < p_t =
1, t >= 2.
iii. f(0) > 0 and f(1) < 1.
iv. f[{p_-t,...,p_-1}] containedin {p_-t,...,p_-1}.
v. f[{p_1,...,p_t}] containedin {p_1,...,p_t}.
vi. For all 1 = 2.

Proof: Will be in the ms. QED

LEMMA 3.3.4. Every t-balanced f::Q[0,1] into Q[0,1] is embed usable.

Proof: In ACA'. Let f,t be as given, and E containedin Q[0,1]^k, r >=
1. We build a series of maximal emulations of E in the sense of the
Q[-n,n]^k, for positive integers n. This time we start with a maximal
emulation in Q[-1,1]^k. We extend it to a maximal emulation in
Q[-2,2]^k, and so forth. Let n_1 < n_2 < ... < n_t be indiscernibles
from Z+ for the resulting maximal emulation S containedin Q^k in the
following sense. Given any two k-tuples from +-{n_1,...,n_t}, whose
absolute values are order equivalent, with the same signs (+,-) in the
same positions, membership in S is equivalent. Now use the maximal
r-emulation S' = S intersect Q[-n_t,n_t]^k and form
(Q[-n_t,n_t],

Speaking of him, I have a question:
Wouldn't it be more logical to eat and take the risk of death by poisoning than to not eat and take the guarantee of death by starvation?

But I'm not the world's greatest anything so I might be a few dimensions beneath the chess he was playing.

Also I admit his proofs are alien to me; I've just heard people bring him up about the afterlife being real. What's the deal with that?

he used up all his logic at the end :^)

>We build our usual r-maximal emulation S* containedin D^k of a given E containedin Q[-1,1] which is greedy with respect to

cs.nyu.edu/pipermail/fom/2017-April/020434.html

cs.nyu.edu/pipermail/fom/2017-February/020300.html


>cs.nyu.edu/pipermail/fom/2017-April/020434.html
Huh, I didn't expect my shitposting to lead to me actually learning something. Thanks.

>cs.nyu.edu/pipermail/fom/2017-February/020299.html
>Emulation is viewed as an equivalence relation on relevant entities, which means that the entity and its emulation have "structurally, the same small parts". A maximal emulation is an emulation that cannot be enlarged or enriched and still be an emulation. The symmetry asserts that certain parts are structurally the same.
Reminds me of the various types of equivalences you'd get in category theory (not that I claim to be an expert in that).
I wonder if you could bypass the need for emulation by using a structural set theory right from the beginning (instead of a material set theory like ZFC), thus allowing the "internal" and "external" emulations to coincide (or at least satisfy some natural form of equivalence).

Welp, "Oh okay, thanks for rephrasing the question, I will start with how the Halting Problem was not proved after I take a screenshot of the flawed proof. Imagine a function both_one(f,x,y) for f(a) either 1 or 0
if f(x) = f(y) = 1 then both_one(f,x,y) = 1

you can create now another function called paradox:

function paradox(x):
if (both_one(x,x,x)==1) return 0;
else return 1;
end paradox

Here is the problem for paradox(paradox): if both_one(paradox,paradox,paradox) is 1 then paradox(paradox) returns 0, thus paradox(paradox) and paradox(paradox) can't be both 1. And if both_one(paradox,paradox,paradox) is 0, then paradox(paradox) returns 1, then both_one(paradox,paradox,paradox) must be 1. This is a contradiction, but the function both_one(f,x,y) exists, and is easy to create unlike the function halt(function,input), so this proves that this kind of recursion is wrong

A good way of creating consistent recursion is using algebraic technique i.e:
x = 10
2x = 10 + x
x = 5 + x/2
"

No problem, glad I could expose you to something new.

re: your question-
No idea myself, but you're asking good questions.

he showed the continuum hypothesis was consistent with ZFC (Cohen later showed its negation was too, which together gives independence)

>Aristotle who was literally wrong about absolutely everything that actually matters, anyway.
Talk about being an ignoramus. Science is nothing but the Aristotelian school of philosophy without final causes.

no one gives a shit what you studied. most mathematicians who are relevant don't agree with you, your appeal to authority is pretty pathetic compared to them.

The ancient Greeks during the classical and hellenistic eras advanced mathematics further than the Babylonians could ever have imagined. Also, in per capita terms, classical and hellenistic Greece shits all over the entirety of post-Renaissance Europe. It's not out of a sense of romance that Galton estimated the average IQ of the ancient Greeks to be two standard deviations above the average of his time.

You really have no idea what you're talking about.

>easy to create unlike the function halt(function,input)
What does being easy or hard to create have to do with whether the recursion is valid or not?
Both are creatable and demonstrate the paradox of the halting problem, though for different kinds of systems.
More precisely, the "easy" function looks like it essentially uses the liar paradox to prove that no logical oracle can settle the truth-value of a sentence that essentially says "this sentence is false", while the "hard" function proves that no mathematical oracle can settle whether a particular Turing machine halts on an input that essentially encodes "this machine will not halt".

>the """paradox""" of the halting problem
Get the fuck out, underaged shit.

The idea of the false proof of the halting problem was to say we can't create a general function to evaluate whether a program halts. The contradiction in the proof was meant to show the function is impossible; however, you can arrive at the same contradiction with a function that you CAN create, i.e. both_one(f,x,y)

Yes, and?
If P is the statement "a function halt(FUNCTION, INPUT) that outputs 1 or 0 depending on whether FUNCTION halts on INPUT exists", then the proof of the halting problem shows that P is false.

You've attempted to create an oracle "both_one" but then demonstrated that it leads to a contradiction since the value of "paradox(paradox)" is undefinable. This doesn't refute the halting problem at all; if anything it is consistent with the result that no such oracle exists.
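For reference, here is a minimal sketch of the actual diagonal argument, with the hypothetical oracle passed in as a parameter (make_trouble and the candidate oracle are illustrative names I've made up, not anything from the screenshot):

```python
def make_trouble(halts):
    """Given a claimed halting oracle halts(f, x) -> bool,
    build the program that defeats it."""
    def trouble(x):
        if halts(x, x):   # oracle says x(x) halts...
            while True:   # ...so loop forever, making the oracle wrong
                pass
        return 0          # oracle says x(x) loops, so halt immediately
    return trouble

# Whatever concrete candidate you plug in is refuted on trouble itself:
# if halts(trouble, trouble) returns True, trouble(trouble) loops forever;
# if it returns False, trouble(trouble) halts. Either way the oracle lied.
trouble = make_trouble(lambda f, x: False)  # candidate that always says "loops"
print(trouble(trouble))
```

The point is that P is refuted for every candidate halt you could ever write, which is exactly the statement that no such program exists.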

>It is consistent that no such oracle exists
>apply the same to a function you know exists like both_one(f,x,y) and it leads to contradiction as well
>"Nah bro, it is consistent"
Yeah, keep lying to yourself

Not that guy, but the comparison is made between the oracle function, which determines whether a function loops forever, and the both_one function, which determines whether the function's values on both given inputs are 1.

The problem here is that the quoted/screenshotted proof is not actually the proof, but a (bad) analogy for the proof, and a perfect example of why dumbing down math should always be avoided: it's not actually possible, and the only thing it does is give brainlets the impression that they understand something they clearly do not.

This is interesting. What's your objection to general relativity?

>a function you know exists like both_one(f,x,y)
Ah, I see the issue now.
The existence of the function "paradox(paradox)" does not imply that both_one doesn't exist. It merely implies that both_one doesn't have the oracle property.

Of course in order to formally define what it means to "have this property" you'd have to specify precisely the kinds of functions f that are allowed to be the first argument of both_one, as well as their domains.

That's true, but if there's an error we should be able to pinpoint where the unsalvageable mistake lies (and I doubt it's something like a type mismatch between function and data, since that's salvageable: it should be something more fundamental). Identifying it will probably help clarify the actual proof as well.

He was a mathematician. Proof theory and formal logic are forms of mathematics.

>type mismatch
No such thing as types for a Turing machine. All inputs are formalised as strings.
>Ah, I see the issue now.
I doubt it.
>It merely implies that both_one doesn't have the oracle property.
No.
One problem is that both_one doesn't halt for all inputs. Let's say f enters an infinite loop when given x; then both_one(f, x, y) will never halt and will never return anything.

Paradox is one such function. Paradox(Paradox) doesn't return anything.

It goes like this: Start paradox(paradox)
1. paradox calls both_one(paradox, paradox, paradox)
2. both_one calls paradox(paradox) to determine what it outputs
3. paradox again calls both_one(paradox,paradox,paradox)
4. both_one again calls paradox(paradox)
and so on ad infinitum.

Or in short, both_one(paradox,paradox,paradox) is neither 1 nor 0. The computation simply never stops.

So congratulations user, you have proven that there exists no program which can determine whether another program outputs some given datum.
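The trace above can be run directly. A minimal sketch with Python standing in for the pseudocode (the recursion limit is lowered only so the inevitable blow-up stays small):

```python
import sys
sys.setrecursionlimit(100)  # keep the inevitable infinite descent short

def both_one(f, x, y):
    # To decide whether f(x) and f(y) are both 1, it actually RUNS f,
    # so it inherits any non-termination of f.
    return 1 if f(x) == 1 and f(y) == 1 else 0

def paradox(x):
    return 0 if both_one(x, x, x) == 1 else 1

try:
    paradox(paradox)
except RecursionError:
    print("paradox(paradox) never returns: both_one is not an oracle")
```

Note that both_one works fine on functions that terminate; it only fails to return on inputs like paradox that bounce the call back at it.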

There definitely is a problem, the only issue is where exactly it lies and what implications it leads to.
Here's an attempt to streamline the crux of the argument and possibly make the source of error clearer:

Let Apply be a 0,1-valued function that takes as input a 0,1-valued function f, and a value x in the domain of f. It returns Apply(f,x) = f(x).
Let Neg(x) = 1 - Apply(x,x) which is 0,1-valued. Then Neg(Neg) = 1 - Neg(Neg) -> Neg(Neg) = 1/2 which is absurd.

If this is a fair representation of the argument, it's more subtle than I had anticipated. I'm leaving the thread to get some sleep, but I'll mull it over and return once I've thought of something.

What's the domain of your function both_one? This is your problem.

The problem stems from Special Relativity. Pic related is why a clock ticks slower when in movement. There is no need for time dilation, oscillation will always depend on movement. You can go to General Relativity and talk about the evidence of clocks ticking faster at higher altitudes measured by prograde satellites, but understand that here on Earth we experience more gravitational acceleration which by the same logic of velocity altering oscillation frequency should make our clocks tick slower. The clock can't know potential gravitational energy, and time doesn't stop when a clock stops.

No, this is not a representation of the argument.
1. You're still thinking in terms of typed data. It's irrelevant.
2. You're thinking in terms of equations. Programs are NOT equations. Drop the equality sign and replace it with "outputs".

Neg(Neg) is not 0.5.
Neg(Neg) has no output. Neg(Neg) outputs 1 - Neg(Neg), so Neg calls itself again with itself as an input to see what it outputs, thus entering an infinite loop.

It is very easy to write programs that enter infinite loops. You may write some programs that enter infinite loops without you noticing. That's why the halting problem (a program that determines whether a program enters an infinite loop) being undecidable is such a big result.
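A concrete taste of how hard termination can be to predict even for tiny loops: the Collatz iteration below. Whether this loop terminates for every n >= 1 is a famous open problem, so the function is only known to return on inputs people have actually tested (this is my illustration, not anything from the thread):

```python
def collatz_steps(n):
    """Steps for the Collatz iteration to reach 1 from n.
    Nobody has proved this while-loop terminates for ALL n >= 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps despite the tiny starting value
```

If a general halt(FUNCTION, INPUT) existed, it would settle questions like this one mechanically, which is part of why its impossibility is such a big result.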

I forgot to add a sarcasm tag to that post. I think you have no idea whatsoever what you're talking about. I am

Do you really think the high-precision clocks they use in space or in science experiments actually have a pendulum swinging back and forth? That wouldn't even work in space.

Come on dog. You aren't this dumb.

Any oscillation slows down with speed, mate, that is the point. I guess the only exception would be oscillation exclusively towards the direction of movement, which is realistically impossible. Do you think atomic clocks don't rely on oscillation?

I suppose I should have been clearer: I'm not trying to reproduce a proof of the halting problem; I'm trying to reproduce the argument above in order to identify where the error lies.
I get the impression from your post that you think the error is a subtle one that can only be discovered by writing out a full proof of the undecidability of the halting problem (UHP) in a completely rigorous manner, e.g. precisely defining the Godel enumeration and all state space diagrams. (I apologize if that's a mischaracterization.) I don't think that's the case here; it seems like the error is more fundamental and should be possible to illustrate with a simple example.

But I guess it wouldn't hurt to try your suggestion of framing the argument in an untyped, non-equational form.

Let Neg: [math] x \mapsto 1 - x(x)[/math] (where - is the monus operator, I suppose). Then Neg(Neg) is not well-defined.

So it essentially boils down to a statement of Russell's paradox (which uses the same diagonal argument as the actual proof of UHP), followed by the claim that such a construction is unsound, and hence that the actual proof of UHP is unsound as well.
(Unfortunately, the logic behind Russell's paradox is sound, so the purported refutation of UHP doesn't work out.)
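Running the Neg construction as an actual program makes the operational point concrete (a sketch of the diagonal from the posts above; the "absurd" value 1/2 never materializes, the self-application just never returns):

```python
import sys
sys.setrecursionlimit(100)  # the self-application recurses forever anyway

def neg(x):
    # The diagonal: apply the argument to itself and flip the bit.
    return 1 - x(x)

try:
    neg(neg)   # would have to satisfy neg(neg) == 1 - neg(neg)
except RecursionError:
    print("neg(neg) never produces a value at all")
```

On total arguments neg behaves perfectly well; it is only the diagonal self-application that has no value, which is the whole content of the undecidability argument.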