Propositional Grammar Parsing

Veeky Forums, is anyone working on creating a library to read the logical propositions embedded in language? I'm talking about things like "whether you want it or not," and other such clauses that denote a discrete Boolean if/then premise.

And I don't mean people using machine learning to approximate an understanding of natural language up to and including logic. Just a complete library that processes things out and translates intuitive language into phrase components separated by logic. This should be possible, because there are only a finite number of logical constructs in the grammar of any given language.

Secondarily, if you're not aware of any such effort or library (or even just an API), then what would this process actually be called? All I can think of to denote/convey this concept is the phrase "propositional grammar parsing," which seems a mite unwieldy if someone were actually trying to create such a thing.

Or is it only Veeky Forums related if machine learning is involved and this belongs on /g/?

Figured it wouldn't hurt to ask /g/ too.

Any logical statement can be converted into logic symbols. That would probably be the easiest way to parse the information.

The big problem here is that the English language can often be vague. Even if a statement is grammatically correct and true, that same sentence could also be interpreted as a different sentence that is also grammatically correct but false.

Yeah, I'm not looking for statement validity, just a way to process out the logic itself. An assertion would be equivalent to a condition, for the sake of creating a program from an essay, for example. Obviously it would need to understand tons of natural context before it could decide which if/then clauses should actually be while or for loops or whatever.

Most of this is about propositional logic itself, not scraping it from natural language. I'll look into context-free parsing, but if it's not what I'm talking about I'm gonna be pretty disappointed.

I don't understand. What are some desired input/output pairs of your theoretical parser?

Either of the examples probably said it best.

I don't necessarily know the desired input/output, just something that preserves the logic of a statement without necessarily parsing all the way into the disparate logically-separated subclauses.

>I don't necessarily know the desired input/output
Then you're blowing hot air.

Let me pander to your unique autism then.

The desired input is English. Anything else can have undefined results. The desired output is any grammar that can be immediately recognized as propositional logic.

I'm asking if anything like this exists. I don't know where you get off saying that's blowing hot air. If nothing like this exists, or you don't know the terminology that would be used to discuss this concept, you can say so.

Something like this?
"whether you want it or not..."
->
whether "you want it" V "not," -> "..."
->
A V ~A -> "..."
->
"..."

Good call deleting that post, it was extremely retarded and betrayed you as a complete dilettante.

Something like that. A while ago I was thinking about how to explain the concept of a lack of causation, and it occurred to me that it requires a sort of reverse if/then. Not in an "if ~Q then ~P" way, but a literal "this type of logic does not have a relation" type of statement. A while later I wrote something that used the word "whether," and when I read the statement back I realized that I'd used that kind of negative reasoning in my assertion. "Because" is a word I'd instantly have recognized as conveying logic, but I'd never thought of "whether" as such a word. So I don't really know how to process whether-based statements of logic, but yes, something like that.

>Awhile ago I was thinking about how to explain the concept of a lack of causation
Any statement form q
is equivalent to
true -> q

That's kind of the problem. There's no good way to just tell the world mu.

What you're asking for is a 1:1 mapping of logical equivalencies between the subjunctive mood/idioms and syllogistic reasoning.

Unfortunately, and perhaps counter-intuitively, this does not seem to exist.

In LSL, LMPL, and First-Order Logic, equivalencies are possible when the resultant truth-table values are identical - for example, A->B is logically equivalent with ~B->~A (this is specifically called the contrapositive of A->B).

But "whether you want it or not" is more complicated:

Let W = You want it.

Clearly the only way to express it would be W v ~W, but that's just a restatement of the law of excluded middle (for any A, A v ~A). And really, when it's used colloquially, it's a way to tell someone that something is inevitable, so you'd really have to include "Let X = Something will happen." and restate it as (W v ~W) -> X. Since W v ~W is always true (the excluded middle being a central tenet of any basic logical system with which human beings are used to communicating), the antecedent is a tautology, so the whole conditional is true if X is true and false if X is false (because the only way for a conditional to be false is for its antecedent to be true and its consequent false).

So in that case, it's actually redundant to say "whether you want it or not," and it would be logically appropriate to simply excise the useless phrase altogether.
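
As a sanity check on that equivalence, you can brute-force the truth table in a few lines of plain Python (nothing here depends on any logic library):

from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

for W, X in product([True, False], repeat=2):
    # (W v ~W) -> X always has the same truth value as X alone
    assert implies(W or (not W), X) == X

print("(W v ~W) -> X is logically equivalent to X")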

The hardest part of symbolic logic is isolating the atomic sentences, and I think what you're running up against are a lot of the tone-markers and rhetorical flourishes by which we communicate mood, speaker, and audience.

>W v ~W
That's valid output. The purpose isn't to process the reasoning of any statement, but to process out the parts of it that can logically correlate to formal reasoning premises. Detecting tautologies would be done at a different level, outside the context of the thing that just rearranges parts of the sentence for easier processing.

I'm not trying to make an AI here. I just want to know if there's anything that would, for example, let me download Wikipedia and scan out all the uses of logical transitions to study the semantics of the word "whether." Someone trying to put together a formal-reasoning "translator" might overlook several words, and I'd end up having to add several translation rules myself. But if it's never been done before, it'd be easier to build it from the ground up than to force it into some existing grammar-parsing framework.
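
If all you want is a first pass over a text dump, even a dumb keyword concordance would get you started. A rough Python sketch, where the word list, the context window, and the filename are arbitrary placeholders of mine:

import re
from collections import Counter

# Candidate logical transitions; this inventory is my own guess and is
# certainly incomplete.
TRANSITIONS = ["whether", "therefore", "because", "unless", "if", "then",
               "either", "hence", "thus", "provided that"]
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, TRANSITIONS)) + r")\b",
                     re.IGNORECASE)

def concordance(text, window=40):
    # Yield (keyword, surrounding context) pairs for each hit.
    for m in PATTERN.finditer(text):
        start, end = max(0, m.start() - window), m.end() + window
        yield m.group(1).lower(), text[start:end].replace("\n", " ")

with open("dump.txt") as f:  # placeholder for whatever text dump you're scanning
    text = f.read()

counts = Counter(word for word, _ in concordance(text))
print(counts.most_common(10))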

Well, linguistically speaking, the logical transitions in English are usually considered what they call "metatext," and include things like "whether" and "therefore." This term may help in your search, but I don't know offhand if specific research has been done to attempt a "translation" between metatext and logical structures... I mean, "therefore" is clearly an operator, and I imagine many examples of metatext would equate somehow to a form of logical operator, but it might be possible that there are a number of them that are simply equivalent (like "whether" and "if," perhaps), and could render more efficient processing of text. Is this what you're talking about?

>metatext
Can you link a sample where this usage of the term is used? All I can find from a quick Google suggests that metatext just refers to anything that talks about a given text. It sounds like you understand what I'm talking about, but are you sure you're using the right term? Adding in "linguistics" doesn't yield any results that appear to use your usage of the term. Is there a subfield you study this in where 'metatext' just ends up referring to the logical transitions?

Bite me faculty.cs.tamu.edu.

Kid, you would make a perfect Computer Scientist.

>imagine abstract concept
>locate three-word descriptor for it
>search Google for hours seeing if it exists
>change search terms many times to see if anyone is even talking about it
>give up because search engines can't tell you if something doesn't exist
>repeat

Story of my life.

Well, the last time I really did research on this was for a paper I wrote a decade ago in grad school (which, of course, I can't seem to find), but I pulled up some stuff on ERIC and found some other resources...

books.google.com/books?id=Z0q9depdhZgC&pg=PA47&lpg=PA47&dq=metalinguistics metatext&source=bl&ots=BS1QZWqHuW&sig=wVS1RkP19ot5NXpSpBSbAZldq6E&hl=en&sa=X&ved=0ahUKEwiB_420263YAhXIqVQKHcT4AvQQ6AEINjAD#v=onepage&q=metalinguistics metatext&f=false

eric.ed.gov/?q=metatext&ff1=subMetalinguistics&id=ED563344

eric.ed.gov/?q=metatext&id=EJ668073

The overarching subject would be "metalinguistics," which originally comes from contrastive linguistics (differential analysis of two or more languages). You might consider the L1 as English (or whatever "natural" language you choose) and L2 as a computer language or even LSL, LMPL, or FO.

>metalinguistics
>contrastive linguistics
>L2 as a computer language
Thanks, that's actually a huge help.

I don't think comparing natural language with computer languages would work well because programming languages are generally far more expressive than natural language, concept-wise, but any other formal language would be a valid basis for comparison. My first thought when I made this thread was that there's a lot to learn by doing a meta-analysis between languages based on their respective logic particles, but it didn't occur to me that that specific practice might have its own name under linguistics. Maybe my mythical anticausal transitive is out there in one of the languages of the world somewhere, and maybe someone has even found and studied it already.

It looks like a lot of the more advanced concepts in linguistics end up in books, so I definitely have my work cut out for me in researching this.

>This should be possible, because there are only a finite number of logical constructs in the grammar of any given language.

Are you sure about this? I feel like it needs to be proven...

So do I, it's why I asked.

We might communicate logic to each other in ways we don't fully understand, using grammar principles that have yet to be named or realized. I certainly felt like it was a revelation when I noticed that "whether" was a logical transitive. I'd never thought of it like that before, so there's definitely a possibility that there are other such mechanisms out there and it's not nearly as finite as I imagine.

>clauses that denote a discrete Boolean if/then premise.

you mean if/then?

In addition to literally every other case expressible in the English language, and other languages too if there's time left over, yes.

Natural language inherently contains ambiguities that are not resolvable even with a great enough context. For example, people just intuitively discriminate between Boolean OR and XOR when the word "or" is used in a sentence.

>humans think in logical propositions
>human speech is logical propositions

Brainlet
Literally nobody on Earth talks like an autistic mathematician except the mathematicians themselves

Yeah, that's the big one. I generally think that the word 'either' is used as a modifier when it's meant to be XOR.
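
A toy heuristic along those lines, purely to make the default guess explicit (the real disambiguation obviously needs context, and "either" can be inclusive too):

def connective_for(sentence):
    # Default guess: "either ... or" reads as exclusive, a bare "or" as inclusive.
    lowered = sentence.lower()
    if " or " not in lowered:
        return "none"
    return "XOR" if "either " in lowered else "OR"

print(connective_for("You can have either soup or salad."))  # XOR
print(connective_for("Bring an umbrella or a raincoat."))    # OR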

I'm not saying it's how everyone talks. Wait. How did you read that from literally any post ITT? Explain your reasoning, because it's not apparent.

There's a good book on programming language theory, including parsing logic, that you may be interested in to narrow down your problem: cs.cmu.edu/~rwh/pfpl/2nded.pdf

Natural Language Processing (NLP) could also be what you want, but you haven't really defined it, so here you go: demo.clab.cs.cmu.edu/NLP/

You also do not want to use Booleans. There is no information carried by a Boolean beyond its value; to make use of a Boolean you have to know its provenance so that you can know what it means. So now you have to keep track of that information, a.k.a. the state, or attempt to recover it using any number of program analysis techniques, all of which are notoriously difficult. The only thing you can do with a bit is to branch on it, and pretty soon you're lost in a thicket of if-then-else's, and you lose track of what's what.

Type theory fixes this, which is why we have things like proof assistants in order to do propositional parsing.
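
Here's a minimal Python sketch of the Boolean-blindness point, not how a proof assistant actually works: return the evidence instead of the bare bit and the provenance comes along for free. All the names are illustrative.

def contains_blind(needle, haystack):
    # A bare bool: true, but you've already lost where and why.
    return needle in haystack

def find(needle, haystack):
    # Returning the index keeps the evidence around for later use.
    for i, item in enumerate(haystack):
        if item == needle:
            return i
    return None

words = ["whether", "you", "want", "it", "or", "not"]
print(contains_blind("whether", words))  # True, but that's all you know
idx = find("whether", words)
if idx is not None:
    print("'whether' occurs at position", idx)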

>proof assistant
I guess that's what I'm ultimately hoping to make.

If you can programmatically detect all the logic in a given body of work, you'd be able to isolate out the parts of the information that are redundant to your natural context and compress the remaining information into a dense "upload" packet, refined for your unique consciousness.