Set theory, logic and linguistics

Alright guys, here's the deal:
Lately I've been thinking more and more about Gödel's incompleteness theorems and other "paradoxical" or unintuitive results in math. Whenever I come across them, the general attitude is "yeah, it's unintuitive, but the proof was formed logically and consistently, so it's correct". But the more I think about some of these unintuitive results, the more their formulation seems flawed to me.
The main thing most of these results have in common is that they rely on inner paradoxes that are defined by the person who formulated the proof (for instance, Gödel shows how to form a statement that is both true and unprovable). But my argument is this:
These paradoxes are always linguistic, not mathematical. They rely on objects and ideas that are not properly defined, in a manner that leaves room for potential contradictions. The only reason these theorems and proofs seem correct is that they usually include an example of a self-contradictory case (which supposedly supports the proof) whose origin is linguistic. These linguistic paradoxes mostly arise from improper definitions and self-reference.
Let's look, for example, at the proof of Cantor's theorem. It's based on defining the idea of a set that contains all elements that are not members of their corresponding subset. But who's to say that this set is even properly defined? Mathematicians are so used to the idea of "let there be..." that they usually just state the existence of objects without first proving that they are well defined and do not lead to possible contradictions.
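For concreteness, the set the proof relies on, for a function f from a set A to its power set, is
[eqn]
B = \{\, x \in A \mid x \notin f(x) \,\}
[/eqn]
and the contradiction the proof derives comes from supposing f(a) = B for some a in A, which forces a to be in B exactly when it is not in B.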
(part 2 in next comment)

Other urls found in this thread:

youtube.com/watch?v=wGLQiHXHWNk
whitman.edu/mathematics/higher_math_online/section04.10.html
cs.kent.ac.uk/people/staff/sjt/TTFP/

(Part 2)
I know this issue has already been addressed, especially in the context of Russell's paradox (again, why did people assume that the idea of a "set of all sets that don't contain themselves" is well defined?), and was allegedly solved by formulating ZFC and other axiomatic systems. But the only thing these systems did was decide on "stricter" rules, without dealing with the problem itself (one can still use objects and statements that are not well defined).
I think that in order to make math truly consistent and complete, it needs to develop a linguistic theory that deals with how mathematicians form statements and definitions. I'm not talking about logic; I'm talking about a real linguistic theory that specifies under what circumstances mathematical statements are well defined. (The bad news is that it'll probably also force math to ditch Gödel's incompleteness theorems and other ideas that'll be regarded as "not well defined".)

So at this point I have to admit that I don't really know what I'm talking about - I'm not even close to being an expert. But what do you think about this idea? Has it already been done and I just don't know? Is it possible? Is it worthwhile? Do you have any suggestions of related problems, or counterexamples that would refute the idea?
Starting next semester I'll take courses in axiomatic set theory, advanced logic and linguistics, in order to work towards a proper understanding of the subject (with the goal of one day creating this postulated linguistic theory). Until then, I'd love to hear your thoughts!

Mathematics is a subset of linguistics. If you believe in a "True Mathematics" that exists beyond the scope of any form of language or formalized logic, then you're already implicitly buying into the fundamental idea of Gödel's incompleteness, which is precisely THAT no formalized system of arithmetic can prove every true statement about itself.

As with all things mathematical, Gödel works from axioms, which you are free to consider neither pertinent nor well-reasoned. One could easily argue that the theorem is not a limitation of logic and reason, but rather a limitation of "elementary arithmetic" with distinct and finite operators. But it is a proved theorem within its well-defined scope. (And on that note, there is no such thing as "proving something is well-defined." The concept of being well-defined is more fundamental than the concept of a proof; after all, proofs are the things most subject to being well-defined or not.)

Pfftt, Gödel's theorems are simply inspired by the liar's paradox, "This statement is false", which is False, and I can easily explain how. More interestingly, the paradox comes from our acceptance of "This statement is True", which is also False. Better yet, I can refute the Halting Problem's proof here: youtube.com/watch?v=wGLQiHXHWNk

I will start with how the Halting Problem was never actually proved, right after I take a screenshot of the flawed proof.

imagine a function both_one(f, x, y), for f returning either 1 or 0:

def both_one(f, x, y):
    # both_one(f, x, y) = 1 exactly when f(x) = f(y) = 1
    return 1 if f(x) == 1 and f(y) == 1 else 0

you can now create another function, called paradox:

def paradox(x):
    # flip both_one's verdict on x applied to itself
    if both_one(x, x, x) == 1:
        return 0
    else:
        return 1

Here is the problem for paradox(paradox): if both_one(paradox, paradox, paradox) is 1, then by the definition of both_one we must have paradox(paradox) = 1, yet paradox(paradox) returns 0, so they can't both be 1. And if both_one(paradox, paradox, paradox) is 0, then paradox(paradox) returns 1, in which case both_one(paradox, paradox, paradox) must be 1. This is a contradiction, but the function both_one(f, x, y) exists, and is easy to create, unlike the function halt(function, input), so this proves that this kind of recursion is wrong.
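(For what it's worth, if you actually run the Python sketch above, paradox(paradox) never returns anything at all: both_one(paradox, paradox, paradox) has to evaluate paradox(paradox) before it can compare it to 1, so the call just recurses forever.)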

A good way of creating consistent recursion is using an algebraic technique, e.g.:
x = 10
2x = 10 + x
x = 5 + x/2
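
One way to read that last line (a toy of mine, nothing more): the self-referential equation x = 5 + x/2 can be run as an iteration, and it settles on the unique value that satisfies it:

x = 0.0
for _ in range(60):
    x = 5 + x / 2  # feed the current value of x back into its own definition
print(x)  # 10.0 - the unique fixed point of x = 5 + x/2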

Algebra applied to "this sentence is false" is simply:
x = -1
x = 1

In terms of sentential logic, you have to differentiate a statement from a non-statement; otherwise you end up assigning True or False to sentence elements, e.g. Chicken = True, Blue = False, without any adequate context... But you don't have to differentiate, as long as you are consistent, and this is useful when such differentiation is impractical: you simply assign a True or False value to every sentence. Effectively, however, all true sentences will be true statements, and all false sentences will be either false statements or non-statements. Take the sentence "Is false", [to be continued]

The idea of the theory is to limit the mathematician's ability to use just any sort of statement. Gödel's theorems use certain types of statements that are both true and unprovable, but the statements themselves are not necessarily well defined, so the proof itself is meaningless - because it assumes the existence of statements that are invalid.
Again, the point of the theory is to identify the linguistic properties of certain statements that will be regarded as "well/not well defined" (you could easily call these properties arbitrary, but they serve a definite purpose). Statements which are not well defined will be counted as "meaningless", and any theorem/conclusion that is based on them will not be held as valid.
I think that the general direction is to form a more precise formulation of mathematics that has a very limited and controlled variety of valid structures, sort of like a programming language. In programming, if you input a statement that is not well defined, the computer recognizes it right away and just rejects it (no matter whether the statement is generally true or not). E.g., if you could translate the idea of a "set that contains all sets that don't contain themselves" into a programming language, the computer would recognize the fallacy right away (see the sketch below). The theory would act similarly - it would point out statements that cannot be interpreted properly, based on several criteria. (And again, you can always ask "why choose these criteria over others?", just like with axioms. But the point is to deliberately fix certain errors in math, i.e. statements that can lead to ambiguity - so, arbitrary or not, that is how I choose to formulate the theory.)
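As a toy mock-up of what I mean (my own sketch, modelling "sets" as Python predicates and nothing more), here is Russell's set translated into code:

def russell(s):
    # a "set" is a predicate; s is a member of the Russell set
    # exactly when s does not contain itself
    return not s(s)

russell(russell)  # RecursionError: the definition never bottoms out

The "error message" only shows up once you ask whether the set contains itself, but at least you get one.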

"Is false" is clearly a non statement since it lacks the subject, the function parameter. And so is "Is true". Now, does "This statement is false" create a parameter to pass to itself or not? If we simply substitute "This statement" for "This statement is false" once, we still get ourselves in a situation where there is a parameter missing: "This statement is false" is False. And since this goes up ad infinitum, "this statement" never gets passed to the false() function, making it perfectly null: false() ie "is False.". Now, if we Do always differentiate a statement from a non-statement, we expect to Not assign values of True and False to sentence elements like Dog, Chicken, Eye, without a context; making them effectively Undecidable sentences, that is, simply non-statements.

You make a very good point.
My idea is to handle statements just like the liar's paradox and any other related case. Just as you showed, it's easy to generate statements that lead to linguistic contradictions that look mathematical. The theory is supposed to handle this problem by identifying these sorts of statements and automatically marking them as invalid. Imagine that the moment you form a problematic statement like these, the theory would immediately raise an error, something like the sketch below.
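As an extremely crude mock-up (mine, and obviously a string match is nowhere near the real linguistic criteria the theory would need):

SUSPECT_PATTERNS = ("this statement", "this sentence")

def check_statement(stmt):
    # flag the most blatant self-referential constructions as "not well defined"
    for pattern in SUSPECT_PATTERNS:
        if pattern in stmt.lower():
            raise ValueError(f"not well defined: contains {pattern!r}")
    return stmt

check_statement("This sentence is false")  # ValueError: not well defined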

>This statement is false

That doesn't qualify as a statement in the first place, because it doesn't actually state anything.

Yeah, but the point is that it can even be "This sentence is false" and it still yields a False value.

Please go read Wittgenstein before you burn your brain. He already "formulated" this problem in the Tractatus and essentially resolved it in the Philosophical Investigations.

You get an unreadable sentence by writing the following text once without double quotes, followed by a colon, followed by the same text between double quote marks, followed by a period: "You get an unreadable sentence by writing the following text once without double quotes, followed by a colon, followed by the same text between double quote marks, followed by a period".

This sentence is self-referential: it talks about itself, and says of itself that it is an unreadable sentence.

Now notice that you can replace "unreadable" with *anything* you want, especially:
1°) wrong
2°) not written on this website
3°) unprovable
4°) unprovable without using at least 100 000 words
5°) implies the existence of Santa-Claus if it is true
6°) implies the existence of the easter bunny if it is provable

And see what happens.
Self-reference is a direct result of the ability to express simple syntactic manipulations in the language you're using, and thereby to duplicate a pattern as above. In most formalizations of mathematics it is always possible (formulas can be encoded as numbers, and syntactic operations described using only elementary arithmetic).
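The same duplication trick in program form - the classic Python quine, a program whose output is exactly its own source (included just to illustrate the pattern, it's not from the post above):

s = 's = %r\nprint(s %% s)'
print(s % s)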

That's right. This is one of the main criteria I think the theory will use for "not well defined".

I read a book that talked briefly about his work (only 30 pages or so) and I'm not sure if I'm coming to this from the same philosophical angle as he did. But I'd still like to read more about it. Do you have any good suggestions?

>But my argument is this

you simply summed up gödel, escher, bach and sold it as your own idea? for what purpose?
also didn't read the rest of your post.

Heard about this book, but I'm not sure what it's about and I don't really read pop-sci. Are you saying it's about linguistic paradoxes in math? Is it any good?

i wouldn't call it paradoxical or unintuitive. a layman can more or less understand why it's true.

the theorem itself is pretty trivial, but the proof is based on an unintuitive linguistic contradiction.

mind refreshing my memory? what part is unintuitive?

>Let's look, for example, at the proof of Cantor's theorem. It's based on defining the idea of a set that contains all elements that are not members of their corresponding subset. But who's to say that this set is even properly defined?
It's based on a set whose very definition causes a linguistic contradiction. I'd say that's pretty unintuitive (any use of a set that's not properly defined is meaningless and based purely on linguistic "creativity").

the cantor diagonal set?

in what way is it not properly defined?

You're confusing two different proofs of two different ideas; I'm not talking about the diagonal argument. Here, have a read (I haven't actually read this article so I'm not sure if it's any good; it's just the first thing I found on Google):
whitman.edu/mathematics/higher_math_online/section04.10.html

and to answer your question, the set defined in the proof is not properly defined, because it's clearly self-referential.

>not properly defined because it's clearly self referential

except it is possible to formally define a self-referential sentence. godel's proof would not have worked if this were not possible.
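concretely, this is the diagonal (fixed-point) lemma: for any formula F(x) with one free variable, a theory T containing elementary arithmetic proves some sentence ψ equivalent to F applied to ψ's own gödel number:
[eqn]
T \vdash \psi \leftrightarrow F(\ulcorner \psi \urcorner)
[/eqn]
gödel's sentence is just the case where F(x) says "x is not provable".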

in the incompleteness theorem, IIRC (which is a big if) the sentence isn't explicitly self-referential, but it refers to itself through universal quantification.

or rather, quantification in general.

Most people do not get that self-reference is possible, because they do not know what recursion is, nor what fixed points are in computer science.
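For instance (a standard construction, not specific to this thread): a strict fixed-point combinator lets a function invoke itself without ever naming itself - self-reference with no "I" anywhere:

def fix(f):
    # Z combinator: fix(f) behaves like f(fix(f)) without any named recursion
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial, with no function referring to itself by name
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120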

That's the whole point man - I want it to be regarded as a linguistic syntax error. That's why I wrote that the theory would dismiss Godel's theorems.

It's still practically self-reference. But in any case, my goal is to handle any problematic linguistic syntax; I'm not limiting the theory strictly to self-reference (I want to identify the linguistic sources of these "paradoxes", whatever they may be).

Do you seriously think that CS has some magical understanding of things that math doesn't?
Technically, you could use fixed points, recursion and other means of self-reference to construct an infinite number of "valid" statements (in this context, valid = does not lead to contradictions), just like you can use them to construct an infinite number of "invalid" ones. It's a matter of how you choose to use your syntax.
But my idea is that any linguistic form that could potentially lead to contradictions should be disregarded in proofs (even if it is true in some cases).
Take a look at the following sentence:
"This sentasence has a typo"
Even though it doesn't contradict itself (so by some definitions it's valid), it still contains elements that could potentially lead to a contradiction (even if in this case they don't), so according to the linguistic theory the sentence should be disqualified as "not well defined" and not used in proofs and such.

>Do you seriously think that CS has some magical understanding of things that math doesn't?
CS is a subset of math; it is studied exactly like any other topic.
>But my idea is that any linguistic form that could poitentially lead to contradictions should be disregarded in proofs (even if it is true in some cases).
then you have to ban elementary arithmetic. The first Gödel incompleteness theorem assumes almost nothing.

It makes just one fatal assumption - that any Gödel number represents a valid statement. Yes, it can EXPRESS a statement, but my point is that not every statement is valid (and specifically not the ones he showed how to construct in order to prove his theorem).
Again, this could always be settled as a philosophical dispute, but I think there's more to it than that.

The meaning of that term "valid" of yours is unclear. If it is only another word for a "well-formed formula", then you are wrong: the fixed-point procedure used actually builds a well-formed formula.

If it is something else, then it won't be definable formally; otherwise we could build a self-referential sentence which asserts "I'm not a valid sentence" and you'd get the same problem again.

That's the whole point - I want to reformulate the term "valid" or "well defined" from a linguistic perspective. Any self-referential statement that would try to contradict the theory would be, by definition, invalid, and therefore not admissible.
But I do understand your argument; I'm just not currently using any "existing" definition of valid or well defined, so we're talking about two different things.

[eqn]
\begin{matrix}
T & Y & P & E & & T & H & E & O & R & Y \\
Y & & & & & & & & & & \\
P & & & & & & & & & & \\
E & & & & & & & & & & \\
& & & & & & & & & & \\
T & & & & & & & & & & \\
H & & & & & & & & & & \\
E & & & & & & & & & & \\
O & & & & & & & & & & \\
R & & & & & & & & & & \\
Y & & & & & & & & & & \end{matrix}
[/eqn]

Yeah, that's about the direction I had in mind. The difference is that the theory is supposed to be more linguistic in nature.
But still, do you maybe have some recommendations to good books about type theory?

Unlike the traditional forms of the Liar Paradox, Gödel's proof does not involve self-reference. There are no first-person pronouns or other vehicles of self-reference in the conceptual repertoire of arithmetic. Rather, the paradox arises from the fact that the technique of Gödel's number system assigns to numbers a double identity, as numbers simpliciter and as codifications of numerical expressions. A paradoxical Gödelian sentence is not like Epimenides uttering "I am now lying" but like Dirty Harry claiming (as a line in a movie) "In this situation even Clint Eastwood could not help smiling." (If he maintains a stiff upper lip, he could be falsifying his own statement.)
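
The "double identity" is easy to see in miniature (a toy encoding of my own, not Gödel's actual prime-exponent scheme):

def encode(formula):
    # read the formula's bytes as the digits of a single integer
    n = 1  # leading 1 so leading zero bytes survive the round trip
    for b in formula.encode():
        n = n * 256 + b
    return n

def decode(n):
    # peel the digits back off: the same number doubles as a formula
    digits = []
    while n > 1:
        digits.append(n % 256)
        n //= 256
    return bytes(reversed(digits)).decode()

n = encode("0=0")
print(n, decode(n))  # one object, two identities: a number and a formula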

>But still, do you maybe have some recommendations to good books about type theory?
You'll almost certainly want a second opinion, since everything I know of type theory is entirely self-taught, but Thompson's book (full PDF available from the link below) will probably be the most useful for you; it's the one book I wish I had when I started out.

cs.kent.ac.uk/people/staff/sjt/TTFP/

>That's the whole point man - I want it to be regarded as a linguistic syntax error. That's why I wrote that the theory would dismiss Godel's theorems.

well, in the logical theory godel uses, the statement is well-formed. you can use a theory where it's impossible to prove the incompleteness theorem, but any such theory will be less powerful than the one godel used.

if you believe in the validity of the axioms and inference rules that godel used, but not the incompleteness theorem, then the problem lies with you.