Anyone into theoretical linguistics? What do you think about age-old debates like cognitive vs generative grammar?

Opinions on pic related? Fuck his 9/11 tinfoil hat theories, what about universal grammar?

Do you think that people are born with the innate capacity for language?

Does anyone even give half a shit about topics like this?


>Do you think that people are born with the innate capacity for language?


It's really hard to argue against it in view of certain evidence:

nature.com/neuro/journal/v6/n7/full/nn1077.html


To the great majority of theoretical linguists, this debate is still years away from being decidable. The truth is we simply don't understand the relationship between language and the brain well enough yet.

do philosophers have anything to show for themselves besides black and white pictures of them looking pensive? i don't think their work ages very well, given that philosophers live long enough to see it become irrelevant.

>t. state school engineer

Neuroscience has BTFO old theories like Chimpsky's. Universal grammar is nonsense: we have an "innate" ability to pick up language, but there is no grammar "built in" to our brains.

Completely wrong. There are certain logically possible errors that children never make in language acquisition. Take the famous auxiliary inversion case.

In forming yes/no questions in English, we move an auxiliary verb to the beginning of the sentence. For instance
>John is a farmer --> Is John __ a farmer

Some questions have more than one auxiliary, so which one do you move? Well, here's the kind of evidence available to children:
>John is a farmer --> Is John __ a farmer
>John is looking for something that is on the table --> Is John __ looking for something that is on the table
>John is hoping that his mom will let him play on her phone --> Is John __ hoping that his mom will let him play on her phone
>John would like Mrs. Jones to let him play outside --> Would John __ like Mrs. Jones to let him play outside

In these cases, and in virtually every example to which children are exposed, it is the first auxiliary that moves. If the children were learning by some kind of trial-and-error method, we would expect them to make the mistake of adopting this hypothesis, that the first auxiliary moves, until proven false. But that's not what happens. Children never do this:
>The boy who is holding the ball is happy --> Is the boy who __ holding the ball is happy
They know that it is the structurally highest, not the linearly first, auxiliary that moves, and they correctly produce sentences like
>Is the boy who is holding the ball __ happy

I believe counterexamples like these don't occur in child-directed speech at a level significantly above that of speech errors.
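If it helps to make the two hypotheses concrete, here's a toy Python sketch. The tiny AUX set and the hand-made subject/aux/predicate split are illustrative assumptions, not a real parser:

```python
# Toy contrast between the two auxiliary-fronting hypotheses.
# AUX and the hand-made constituent split are illustrative assumptions.
AUX = {"is", "would", "will"}

def linear_first(words):
    """Front the linearly first auxiliary (the rule children never use)."""
    i = next(i for i, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

def structurally_highest(subject, aux, predicate):
    """Front the matrix-clause auxiliary, given a (crude) constituent split."""
    return [aux] + subject + predicate

words = "the boy who is holding the ball is happy".split()
print(" ".join(linear_first(words)))
# -> is the boy who holding the ball is happy  (the error children never make)

subject = "the boy who is holding the ball".split()
print(" ".join(structurally_highest(subject, "is", ["happy"])))
# -> is the boy who is holding the ball happy  (what children actually produce)
```

Both rules agree on the simple sentences above it, which is exactly the point: only the rare two-auxiliary case tells them apart.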

yeah, child language tells us a lot of different stuff. Children are fucking great at some of these grammatical problems

except that still doesn't answer Plato's problem.

Is that you, Lakoff?

Stop thinking about elephants, dude

In ENGLISH children don't make these mistakes, but they do happen in languages with very different grammar, such as Chinese.

Different languages have different syntax. Chinese learners exhibit systematicity just like learners of any other language. Although it is true that the concept of UG has changed since it was first proposed, there is pretty solid evidence that language is learned in very particular, specific ways.

Well, you could argue it is not much of a science. However, studying the subject is of great benefit if you want to go into law and justice or pursue a serious political career. A psychologist or art student will also find it worthwhile to read about it in some depth.

>chinese children make this mistake
citation needed

Neurolinguistics is a meme that gives us absolutely no interesting information on language.

>theoretical linguistics
This just makes me wonder what sort of experiments experimental linguists do

Much of it is pretty trivial, and requires sampling judgements from subjects. All this ever does is confirm what we intuitively know is true. In its current state, much of experimental linguistics is a meme. Of course, this does not include work on acquisition, which would be interesting if the tiny samples used didn't render dubious any statistical result which is obtained.

Self-Paced Reading, Eye-Tracking, ERP, MRI.

>All this ever does is confirm what we intuitively know is true

Stay dumb, Charlie Brown.

>They know that it is the structurally highest, not the linearly first, auxiliary that moves
So your argument shows that children are aware of the structural level of words/parts of sentences when learning grammar, instead of the very simplistic rule "the first auxiliary moves" that you proposed at first. It's literally just a strawman argument.

>children are aware of the structural level of words/parts of sentences when learning grammar


Where does this knowledge come from?

memes have killed linguistics

From observing other people speak, of course.

If the universal grammar you guys are talking about is literally just the brain's ability to hierarchically organize information, then I agree with that. I assumed the proposed innate language ability would be a bit more sophisticated and specific to language.

>From observing other people speak, of course.


You cannot discern internal structure from phonological production. This is why there are syntactically ambiguous sentences like:

>I saw the girl with the telescope.

If you want something more sophisticated about UG, read this

researchgate.net/publication/10696400_Broca's_area_and_the_language_instinct
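To put the ambiguity point in code: here's a toy sketch where two hand-assumed bracketings of the telescope sentence have exactly the same linear yield, so the string alone underdetermines the structure:

```python
# Two different trees, one string. Nested tuples ("LABEL", child, ...)
# with words as leaf strings; both bracketings are hand-assumed.

def leaves(tree):
    """Flatten a tree into its linear yield (the pronounced word string)."""
    if isinstance(tree, str):
        return [tree]
    return [w for child in tree[1:] for w in leaves(child)]

# Reading 1: I used the telescope to see her.
t1 = ("S", "I", ("VP", "saw", ("NP", "the", "girl"),
                 ("PP", "with", ("NP", "the", "telescope"))))
# Reading 2: the girl I saw has the telescope.
t2 = ("S", "I", ("VP", "saw", ("NP", "the", "girl",
                 ("PP", "with", ("NP", "the", "telescope")))))

print(leaves(t1) == leaves(t2))  # -> True: same words, different structure
```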

>You cannot discern internal structure from phonological production. This is why there are syntactically ambiguous sentences like:
>>I saw the girl with the telescope.
Not in that example, but in the first example you gave (The boy who is holding the ball is happy) you clearly can.

How so? Either we have an internal structure parser or we don't. What is blocking it sometimes?

And how come children never make mistakes when raising auxiliaries, but they do in other cases? If stimuli were the main factor in learning language, how come children go through the same learning phases, with the same types of mistakes, at the same time? Do they all receive the same stimuli?


I think you have very little idea about this topic, seriously.

>How so?
Because one interpretation just doesn't make sense.
>[[The boy who is holding the ball] is [happy]]
vs
>[[The boy] [who is holding the ball is happy]]

>What is blocking it sometimes?
I guess the fact that some sentences are actually semantically ambiguous, while other sentences are only linguistically ambiguous but semantically you can resolve the ambiguity.

>And how come children never make mistakes when raising auxiliaries, but they do in other cases? If stimuli was the main factor in learning language, how come children go through the same learning phases with the same type of mistakes at the same time?
Examples?
>Do they all receive the same stimuli?
I'd say yes, roughly. At least all stimuli are in the same language following the same grammar.
In any case I'm pretty sure children's mistakes can be explained in other ways, for example innate difficulties in the mental model that you have to develop to learn a language. It's not surprising that people with similar brains learning the same language will make the same mistakes, you don't need universal grammar for that.
Unless, once again, you're willing to call *any* innate learning ability as applied to language acquisition a "universal grammar", which would seem pretty pointless IMO.

>I think you have very little idea about this topic, seriously.
While I'm no linguist, there are certainly a lot of respectable linguists who disagree with UG, so I wouldn't be so quick to dismiss my opinion really.

I have a moderate interest in linguistics from a computer science perspective. Grammars in general are a very interesting area. NLP also looks cool.

I wish compulsory education focused more on teaching grammar through rules, almost axiomatically (within measure), rather than through examples.

The point is that sentences like
>The boy who is holding the ball is happy
appear at an extremely low rate. Children effectively aren't exposed to those data. The computationally simplest hypothesis a child learner can make as to how auxiliary inversion works is that the linearly first auxiliary moves. This hypothesis is consistent with effectively all data to which they are exposed, so one would expect it to be at least viable to child learners until it was ruled out by trial and error. However, this hypothesis is NOT viable to children, as evidenced by the fact that no child makes the mistake mentioned above.
The innate, language-specific knowledge is that language uses ONLY hierarchical structure, and never linear order, even though using linear order is sometimes simpler.

>I guess the fact that some sentences are actually semantically ambiguous, while other sentences are only linguistically ambiguous but semantically you can resolve the ambiguity.


Those sentences are not semantically ambiguous. They are syntactically ambiguous. Big difference.


And once again. Where are the children obtaining the structural information from the sentence that allows them to recognize a higher auxiliary regardless of linearity? There is nothing in phonological form that even clues you to that. It's like saying children can extrapolate addition from multiplication, so why even bother teaching them addition. It does not work that way. And yet, language does. It needs to be some sort of a priori ability. The nature of it is what is at stake and that is the debate in linguistics, not its existence.

Your examples don't make any sense. What do the different structures
>[the boy who is holding the ball][is happy]
vs
>[the boy][who is holding the ball is happy]
have to do with auxiliary inversion? of course the first is plausible and the second implausible, but that's irrelevant. the question is why children never raise the linearly first auxiliary, when that's the simplest hypothesis consistent with all data available to them.

>there are certainly a lot of respectable linguists who disagree with UG
that might be the impression you get from reading wikipedia or wherever you're getting your information, but it's really not true. there are a lot of respectable linguists who disagree with CHOMSKY, but very few who really deny UG. even many of Chomsky's most adamant detractors accept UG. for example, UG is taken very seriously in categorial grammar, which is strongly opposed to Chomsky's current theories.

How do you know for sure they never heard anything like that before? Ur assuming shit

Because they can actually do it even with nonce words.

>The point is that sentences like
>>The boy who is holding the ball is happy
>appear at an extremely low rate. Children effectively aren't exposed to those data.
I don't buy this. Got any sources?
>The computationally simplest hypothesis a child learner can make as to how auxiliary inversion works is that the linearly first auxiliary moves.
Strawmanning again. Just because you think it is simplest doesn't make it the actually simplest hypothesis in the context of a brain learning language. For example, "linearly first" would be a pretty stupid choice since sentences are actually tree-like structures.

>The innate, language-specific knowledge is that language uses ONLY hierarchical structure, and never linear order, even though using linear order is sometimes simpler.
You can keep repeating linear order is simpler but that does not make it so. I would say recognizing the tree-likeness of language is explained perfectly fine by the brain's regular learning ability, without resorting to some language-specific UG.

>And once again. Where are the children obtaining the structural information from the sentence that allows them to recognize a higher auxiliary regardless of linearity?
From hearing people speak. Unless you've got some sources detailing the amount of stimuli that do or don't exhibit the grammar rules that you say are unlearnable, I don't buy the poverty of the stimulus argument at all. And again, stop regarding "linear" as the base-case for everything.

>The nature of it is what is at stake and that is the debate in linguistics, not its existence.
Then what do you think of this:
>If the universal grammar you guys are talking about is literally just the brain's ability to hierarchically organize information, then I agree with that.
and this:
>Unless, once again, you're willing to call *any* innate learning ability as applied to language acquisition a "universal grammar", which would seem pretty pointless IMO.
Don't you think that at some point it stops being language-specific, and there's no reason to invoke UG at all?

>the question is why children never raise the linearly first auxiliary
Because it makes sense to treat [the boy who is holding the ball] as a single semantic unit, and therefore, to not change the words within that unit around if the meaning of the unit doesn't change. That's what I'm trying to explain by grouping those words in brackets.
>of course the first is plausible and the second implausible, but that's irrelevant.
I don't think it's irrelevant at all, since children might use this semantic information to determine which auxiliary inversion rule is actually correct.

Not the guy you're replying to, but elaborate.

All that research is pretty sub-par. Experimentalists are not only incompetent at statistics but, more importantly, the methods they use barely give us any insight into language. Much like in mathematics, you don't need experiments to make sense of language; you only need to rely on your innate knowledge of it.

>Because it makes sense to treat [the boy who is holding the ball] as a single semantic unit, and therefore, to not change the words within that unit around if the meaning of the unit doesn't change. That's what I'm trying to explain by grouping those words in brackets


What the hell is a semantic unit?

Also, according to whom doesn't it make sense to change a "unit", whatever that is? According to you? Extractions are incredibly common in languages. They are found in relative clauses, wh- questions and focus structures, to name a few. We modify structures all the time. Seriously, you sound like you don't know what you are talking about.

...

>Not the guy you're replying to, but elaborate.
sciencedirect.com/science/article/pii/S0732118X07000207

Not an argument.

>their work becomes irrelevant
Everyone constantly returns to older philosophers for insights or guidance to problems of today, what the fuck are you talking about?

Neither is yours. I get it that your ego cannot take that some linguists are doing something akin to hard science, but try to take it like a grown-up. If you have no interest in the field, just let it go, but making such blanket statements as yours looks childish.

>What the hell is a semantic unit?
The subject of the sentence in this case, maybe I should've just said "noun phrase".
>Also, it doesn't make sense to change a "unit", whatever that is, according to whom, to you?
Yes. I didn't mean in general, just in this case.
When you want to transform "The boy who is holding the ball is happy" into a question, it absolutely doesn't make sense to change the words of "the boy who is holding the ball" around, since you don't want that part of the sentence to change meaning in the question.

Once again: what do you base your criterion of "doesn't make sense" on? Why doesn't it make sense? What happens if you do? How about relative clauses, where you can and do extract? Why can we extract from certain NPs and not others?

And more importantly. How does a 3 year old know all this?

>What do you base your criterion of "doesn't make sense" on? Why doesn't it make sense? What happens if you do?
- Certain words in a certain order have a certain meaning
- You want to keep the meaning the same
- Therefore you don't change the words or the order they appear in
I don't know what's so difficult or strange about this, seems very straightforward to me.

>How about relative clauses where you can and do extract? Why can we extract from certain NPs and not others?
Could you give some examples, since I'm not sure what you're talking about.

>And more importantly. How does a 3 year old know all this?
From observing other people using language.

I actually do some experimental linguistics. I just think most work in that field is crap.

I write with [a pencil].
[What] do you write with [ ].
[A pencil], I write with.

I just extracted [a pencil] from its original PrepP. Why can I do that in those sentences, but not in these?

I saw the bag with [a pencil] on the table.
[A pencil], I saw the bag with on the table.

>From observing other people using language.

You have already been shown that even for sentences they have never heard before, children make no mistakes. If you don't want to accept that, it's on you to prove that research wrong. But just an FYI, "because it makes sense" is not a very good argument.

Including yours, I assume.

You assume wrongly. Also, comparing experimental linguistics to 'something akin to hard science' is ridiculous. It's totally modeled on experimental psychology (a field in which I've also worked), where statistical incompetence and p-hacking is the norm.

Nigga I'm just trying to explain to you how I imagine that a kid might learn a certain grammar rule from examples, obviously I don't have a complete working model of how people learn language but that's not necessary just to make the case that UG is stupid.

So, in other words, you have no fucking clue what you are talking about and are basing your opinion on guesses and hunches and stuff. Gotcha.

A major problem with the debate between cognitive and generative linguistics is the fact that members of each school are very ignorant of the ideas of the other. I myself am completely trained in generative grammar and hardly know anything about cognitive grammar. The converse is true of people trained in cognitive grammar. This leads to a complete lack of dialogue between the two schools, which prevents either of them from improving their models by taking into account objections from the other.

>linear order isn't simpler than hierarchical structure
Try writing a program that does this and see whether it's simpler to use structural depth or linear distance. In these trivial cases there's not much difference, but once you get to even remotely more complex examples linear order is much simpler.
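For anyone curious, here's roughly what that program comparison looks like in Python. The nested-tuple tree format and the hand-bracketed parse are assumptions; the point is that the linear rule is a one-line scan, while the structural rule needs a full parse plus a search:

```python
from collections import deque

AUX = {"is", "would", "will"}

def first_aux_linear(words):
    # Linear rule: a single left-to-right scan over the string.
    return next(w for w in words if w in AUX)

def highest_aux(tree):
    # Structural rule: breadth-first search over a parse tree, so
    # shallower (structurally higher) auxiliaries are found first.
    queue = deque([tree])
    while queue:
        node = queue.popleft()
        if isinstance(node, str):
            if node in AUX:
                return node
        else:
            queue.extend(node[1:])  # node[0] is the phrase label

# "The boy who is holding the ball would like to play", hand-bracketed:
tree = ("S",
        ("NP", "the", "boy",
         ("RC", "who", ("VP", "is", ("VP", "holding", ("NP", "the", "ball"))))),
        ("VP", "would", ("VP", "like", "to", "play")))

words = "the boy who is holding the ball would like to play".split()
print(first_aux_linear(words))  # -> is     (linearly first, inside the relative clause)
print(highest_aux(tree))        # -> would  (structurally highest, in the matrix clause)
```

The two rules pick different auxiliaries here, and note the structural rule only even applies once a tree is available, which is exactly what's in dispute in this thread.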

>For example, "linearly first" would be a pretty stupid choice since sentences are actually tree-like structures.
A tree-like structure encodes two types of information, hierarchical structure AND linear order. So why, barring any knowledge of UG, would it be a stupid choice to consider a rule operating on the linear order of the structure without even computing the structure, seeing as linear order is directly implicated by the data, and structure is not?

>Got any sources?
ling.upenn.edu/~ycharles/papers/tlr-final.pdf
The authors estimate that the relevant data occur at a rate of 0.045-0.068%, which is more than 40 times lower than the rate needed to be reliable.

>even the most basic knowledge of syntax is not necessary to make the case that UG is stupid
Not really sure how to respond to this.

I do believe in universal grammar, but I think Chomsky's description of it is very wrong.

Just like the UG proponents, apparently :^)

>Try writing a program that does this and see whether it's simpler to use structural depth or linear distance. In these trivial cases there's not much difference, but once you get to even remotely more complex examples linear order is much simpler.
Try doing the same but starting with a human brain that already understands some simpler aspects of human language.

>A tree-like structure encodes two types of information, hierarchical structure AND linear order. So why, barring any knowledge of UG, would it be a stupid choice to consider a rule operating on the linear order of the structure without even computing the structure, seeing as linear order is directly implicated by the data, and structure is not?
A child learns language not just by looking at a lot of data and extracting each rule from that, separately. There's already a mental model that understands simpler sentences like "Bobby happy" or "I see a dog". I don't think it's warranted to assume that this mental model would store language and its meaning linearly, seeing as literally nothing in the brain stores information that way.

Sorry, I thought this was a response to the boy holding the ball yesterday but I see it's an example for extracting relative clauses.
Not sure how to say it formally but it seems two different senses of the word "with" are used here.

>You have already been shown that even for sentences they have never heard before, children make no mistakes. If you don't want to accept that, it's on you to prove that research wrong.
I don't disagree with the research per se, just with the conclusion that it must be due to UG.

LMAO at the actual state of linguistics
Nice field you have there, kid

>try writing a program with a mysterious object that no one understands
This is like arguing that physics is wrong because we could be living in a simulation.

>A child learns language not just by looking at a lot of data and extracting each rule from that, separately.
Right, the child also has innate knowledge that it doesn't have to extract from the data. I've already explained why knowing simpler sentences doesn't make a difference here. The simplest rule would be a linear one, unless the brain has some mysterious property that defies the theory of computation, as you seem to ardently believe. It doesn't matter if they know that sentences have structure; that doesn't make the linear order hypothesis implausible.

>nothing in the brain is stored linearly
This is ridiculous, of course many things must be stored linearly. As I pointed out, phrase structure trees encode structural AND linear information. Unless you deny that, there is simply no reason that a grammatical rule, in the absence of any UG principles, could not be postulated which operates on linear order alone.

>This is like arguing that physics is wrong because we could be living in a simulation.
The mysterious object I described is exactly the thing that learns language, therefore you can't simply assume it will work like some retarded simplification based on computer programming, and therefore you can't assume that the "linear first" rule is the rule that's *actually* simplest in the case that matters, i.e. a human brain learning language.

> the child also has innate knowledge that it doesn't have to extract from the data.
Got any proof? :^) And especially proof that this innate knowledge is language-specific?

>The simplest rule would be a linear one, unless the brain has some mysterious property that defies theory of computation, as you seem to ardently believe.
What do you mean with "theory of computation"? Never said any such thing.
>It doesn't matter if they know that sentences have structure, that doesn't make the linear order hypothesis implausible.
If we assume the child has knowledge of hierarchical sentence structure before it has to decide between the two hypotheses, it's not unreasonable to assume that it will use this knowledge to decide between them. I'm not claiming to know how a brain encodes language but choosing the linear hypothesis because it sounds simpler when you describe it in a certain way is a shit argument.

>This is ridiculous, of course many things must be stored linearly.
Guess I was exaggerating a bit there.
>Unless you deny that, then there is simply no reason that a grammatical rule, in the absence of any UG principles, could not be postulated which operated on linear order alone.
Sure, it could be, but we have no reason to assume that this is how learning actually works.

You can't handle the linguist master race.

The main thing you do not seem to bear in mind is that the child hears sentences where auxiliary raising could be following either linear or hierarchical order, and some where it needs to follow hierarchical order.

And yet, in their own production, they never choose linear. Why is it that they systematically disregard one of the options they hear? That is the question that needs answering.

>hear (A or B) and (B)
>choose B
Why is this surprising again?

This response doesn't make any sense when considering what the other user said. I don't think you understand what's being said to you.

What am I missing? In some cases it could be linear or hierarchical (but it should always be linear), in other cases it could only be hierarchical. Why is it strange that a child learns that it's hierarchical, when that hypothesis is compatible with all data, while the alternate hypothesis is only compatible with some of the data?

My bad, I had misunderstood your answer.

*should always be hierarchical

Concerning UG, I think it's pretty trivial in its loosest characterization. That there should be an upper and a lower bound on what languages can be seems only natural. Whether this stems from language-specific modules or from general learning mechanisms is an empirical question, but this plays no part in how we define the set of all possible languages.

Because this means they must have access to hierarchical structure, something that cannot be obtained from observation of the input alone. Where are they getting their hierarchical parser?

It seems perfectly possible to me that children can learn that language is hierarchical without innate knowledge of language.
"Bobby loves peanuts" is not just a linear sequence of letters, it is a statement about Bobby, peanuts, and Bobby's relation to peanuts. Knowledge like that should not be left out when considering what data is available to a kid who learns a language. From there to
>Bobby who loves peanuts is in the garden
to being able to decide correctly that the question is
>Is Bobby who loves peanuts in the garden?
and not
>Loves Bobby who peanuts is in the garden?
is not such a stretch IMO, since the data about Bobby and his relationship to peanuts is already internalized, and the kid will know, even before getting into all this auxiliary fronting business, that "loves" is a verb relating to Bobby and peanuts, and not about being somewhere.

>Where are they getting their hierarchical parser?
Besides simply learning it, it could also be a product of how cognition in general works, in a way that's not specific to language.

This article
>plato.stanford.edu/entries/innateness-language/
mentions an alternative theory of language learning by Tomasello which sounds much more logical to me.

I'm not the user you're speaking with, but here's an article arguing that there is a false dilemma between connectionist and nativist views of language. It is fairly old though, and I'm not sure how it holds up in light of more recent incarnations of connectionism.

link.springer.com/article/10.1007/BF00413661

I think that until we have a model to experiment with language on we cannot begin to understand this problem.

Personally I am of the opinion that language is the operating system of the brain: it is hard to have thoughts that you do not have words for, and easier to have ones which your language facilitates.
Understanding it would mean massive leaps towards synthetic ethical systems, which are the first step on a road towards empirical governance.

Reply to me if ur luke smith faggot memer i know ur reading this

>I think that until we have a model to experiment with language on we cannot begin to understand this problem.
Some experiments are being done, e.g. (taken from the plato.stanford article):

Saffran, Aslin and Newport (1996) found that 8 month old babies were able to learn where the word boundaries in an artificial language occurred after a mere 2 minutes' exposure to a stream of artificial speech. The stream consisted of 3-syllable nonsense words (bidaku, padoti, golabu) repeated continuously for 2 minutes (bidakupadotigolabubidakugolabi …etc.). The stream was constructed so that the ‘transitional probability’ of two sounds X#Y was equal to 1 when the sounds formed part of a word, and equal to 1/3 when the sounds spanned a word boundary. In two minutes, the infants had learned to discriminate the ‘words’ (like bidaku) from the ‘non-words’ (e.g., kupado).

Saffran, J.R., Aslin, R.N. and Newport, E.L. (1996). “Statistical Learning by 8-Month-Old Infants”, Science, 274: 1926-28.
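The transitional-probability computation itself fits in a few lines of Python. The two-letter syllabification and the randomized 300-word stream below are assumptions standing in for the actual stimuli:

```python
import random
from collections import Counter

words = ["bidaku", "padoti", "golabu"]
random.seed(0)
stream = [random.choice(words) for _ in range(300)]  # random word order
# Each nonsense word splits into three 2-letter syllables (assumed here).
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
syll_counts = Counter(syllables[:-1])

def tp(x, y):
    """Transitional probability P(y | x) = count(xy) / count(x)."""
    return pair_counts[(x, y)] / syll_counts[x]

print(tp("bi", "da"))  # within a word: exactly 1.0
print(tp("ku", "pa"))  # across a word boundary: roughly 1/3
```

Word-internal transitions come out at exactly 1.0 while boundary transitions hover around 1/3, which is the statistical cue the infants appear to pick up on.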

>"Bobby loves peanuts" is not just a linear sequence of letters, it is a statement about Bobby, peanuts, and Bobby's relation to peanuts.

What you are referring to is argument structure, which is also a much-studied topic in first language acquisition. It actually comes later in development, and children are not aware of it early on.

Regarding relative clauses, you are not aware of how hard they are to process and how much later children produce them. You are examining them through the eyes of a grown up and not bearing in mind that these structures are much more complex internally. Seriously, what you are saying might sound logical to you, but to any syntactician, it sounds like a very naive analysis. Just think how hard these progressions get.

>The mouse ran away.
>The mouse the cat chased ran away.
>The mouse the cat the dog bit chased ran away.

Saying that moving from a simple sentence to a relative clause is "not such a stretch" should tell you that language might not be as self-evident as you think. Your framework requires many accommodations that have little support in experimental data. We can argue about the nature of UG, but its existence is hardly put into question for a reason.

Tomasello is hardly a linguist. He suffers from the same lack of knowledge of basic syntax and semantics as most psychologists who have tackled the subject of language acquisition. Even someone like Adele Goldberg, who is a linguist and is very much on the other end of the UG spectrum, is very cautious about outright denying it. This is her quote:

>There is widespread agreement that we are not born “blank slates,” that language universals exist, that grammar exists, and that adults have domain-specific representations of language. The point of contention is whether we should assume that there exist unlearned syntactic universals that are arbitrary and specific to Language.

>Personally I am of the opinion that language is the operating system of the brain: it is hard to have thoughts that you do not have words for, and easier to have ones which your language facilitates.
Not all thought is encoded linguistically. Non-linguistic mental processes include visual-spatial tasks, such as recognizing faces. This is very apparent when considering people with aphasias who remain extremely proficient in those tasks. The brain is in fact heavily (but not completely) lateralized between verbal and non-verbal reasoning. Language may be one of the media through which thought is encoded, but I'm wary of claims that language is fundamental to the brain's operating system.

Chomsky, the self proclaimed expert on everything, needs to die already.

Can you believe this fraud is regarded by many as the "intellectual of our generation"? Sure, he made some great contributions to linguistics, but his social/political commentary past the early 1990s has degenerated into complete anarcho anti-white self-hating voodoo nonsense

>It actually comes later in the development, and children are not aware of it early on.
But at the point where they're not even aware of this, they're also not correctly applying auxiliary fronting, so there's no problem of how they could learn or know how to do that.

>Just think how hard these progressions get.
>>The mouse ran away.
>>The mouse the cat chased ran away.
>>The mouse the cat the dog bit chased ran away.
The last one is basically never used in practice and grown-ups like me have trouble parsing that. The second one can be constructed from an internal representation that's possibly not sufficient to produce the last one, sort of like using a heuristic instead of a 100% correct algorithm. From introspection I'm pretty sure a lot of language works like that, though I realize it's not a very strong argument.
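The regularity behind these progressions is trivial to state as a rule, which is exactly what makes the human parsing failure at depth three interesting. A minimal sketch (the helper name and the noun/verb pairs are my own invention, purely illustrative):

```python
def center_embed(pairs):
    """Build a center-embedded sentence from (noun, verb) pairs:
    all the subjects come first, then the verbs in reverse order
    (the innermost clause's verb surfaces first)."""
    nouns = [f"the {noun}" for noun, _ in pairs]
    verbs = [verb for _, verb in reversed(pairs)]
    return " ".join(nouns + verbs)

pairs = [("mouse", "ran away"), ("cat", "chased"), ("dog", "bit")]
for depth in range(1, 4):
    print(center_embed(pairs[:depth]))
# the mouse ran away
# the mouse the cat chased ran away
# the mouse the cat the dog bit chased ran away
```

The rule generates arbitrarily deep embeddings with no extra machinery, yet human comprehension collapses at depth three, which fits the "heuristic rather than a 100% correct algorithm" picture above.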

>Saying that moving from a simple sentence to a relative clause is "not such a stretch"
I was saying it's not such a stretch from a sentence with a relative clause to an auxiliary-fronted version of the same sentence (I'm way out of my depth with this terminology so please correct me if I'm talking shit). I still think that's the case, when you consider everything that's already learned and known before a child will even consider constructing such a question.

(cont.)

>Tomasello is hardly a linguist. He suffers from the same lack of knowledge of basic syntax and semantics as most psychologists who have tackled the subject of language acquisition.
It seems like his approach precisely does not require us to concern ourselves with the specifics of syntax:
>[Tomasello's theory] employs a different conception of linguistic competence, the end state of the learning process. Rather than thinking of competent speakers as representing the rules of grammar in the maximally abstract, simple and elegant format devised by generative linguists, Tomasello conceives of them as employing rules at a variety of different levels of abstraction, and, importantly, as employing rules that are not formulated in purely syntactic terms.

I can somewhat agree with the Adele Goldberg quote.
>language universals exist, that grammar exists, and that adults have domain-specific representations of language
Yes, but I disagree that these are inborn and language-specific.
I'm just taking this from wikipedia but it sums up my feelings pretty well
>According to Christiansen and Chater, "apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics." (489).
>[14] Christiansen, Morten H. and Chater, Nick. "Language as Shaped by the Brain". Behavioral and Brain Sciences, 31.5 (2008): 489–509.

This is not true. That's the point. Children DON'T hear cases where it must be structure (Is the boy who is holding the ball __ happy?) at anywhere near the rate required for the generalization to be reliable. See this paper, which I already mentioned
ling.upenn.edu/~ycharles/papers/tlr-final.pdf
In other words they're hearing (linear order or structure) but they don't even consider linear order as an option.

No one claims that general learning can't detect structure given enough data. What is being claimed is that there is a prediction of general learning that, because there isn't enough data available for learners to reach the correct conclusion, learners must make certain errors in order to reach the correct conclusion. Those errors do not happen.
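The contrast between the two competing hypotheses can be made concrete with a toy sketch (the function names and the flat token representation are my own, purely illustrative):

```python
def linear_fronting(tokens):
    """Linear-order hypothesis: move the FIRST auxiliary ("is")
    in the string to the front."""
    i = tokens.index("is")
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

def structural_fronting(subject, aux, predicate):
    """Structure-dependent hypothesis: move the MAIN-CLAUSE auxiliary,
    treating the whole subject (relative clause included) as one unit."""
    return [aux] + subject + predicate

tokens = "the boy who is holding the ball is happy".split()
print(" ".join(linear_fronting(tokens)))
# the ungrammatical "is the boy who holding the ball is happy"

print(" ".join(structural_fronting(
    "the boy who is holding the ball".split(), "is", ["happy"])))
# the grammatical "is the boy who is holding the ball happy"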

>In other words they're hearing (linear order or structure) but they don't even consider linear order as an option.

So, they admit there is no way to really know how much input is enough to learn a structure, then they come up with an ad hoc computation that yields 1.2% and determine that this is the magic number (their words). Of course, this does not explain why children make mistakes on other structures, despite hearing over that magic number.

Also, you should probably read papers in their entirety.

>"As pointed out by Fodor and Pylyshyn (1988) and others, a DDL model without innate knowledge, or learning priors, can do nothing but recapitulate the statistical distributions of adult input. But children often learn their languages in ways that clearly defy such distributions"

>"Thus, we have one grammatical pattern that is heard frequently but learned late, and the other that is heard rarely but learned early: if there is no innate knowledge that primes children's language learning, how do we explain such statistical discrepancies?"

This paper does not dispute UG, it only disputes some interpretations of UG.

The paper does not dispute UG at all, it's arguing in support of it.

The funny thing about this "master" of linguistics is he only knows English.

He also speaks Hebrew and Arabic, but that's completely irrelevant. Linguistics isn't about learning languages.

Bump

>It seems like his approach precisely does not require us to concern ourselves with the specifics of syntax:


This has a main problem: not every possible syntactic structure is interpreted in the same way. Somebody already mentioned Musso et al. It seems that our brain instinctively "reads" certain structures as language and others as "puzzles with words". This phenomenon does not sit well with Tomasello's conception of language structure as being just usage-borne. This, among other things, is why this is not a very popular view among linguists.

>Yes, but I disagree that these are inborn and language-specific.

Disagreeing is fine, but it is getting really hard to argue against any form of innate language capacity. The alternative requires way too many provisions that seem very much ad hoc.

I don't think anyone is really arguing against there being an innate language capacity. The question is whether this capacity is rooted in a language specific module or emerges from general learning capacities.

That is the question. And phenomena like the apparent knowledge that infants have of hierarchical structure of language or their ability to parse words at such early stages seems rather specific. I have never heard of any other area where such skills are apparent.

I don't have a strong opinion on the matter, although to be fair to usage based linguistics, I will say that research areas such as deep learning were not very well developed when Chomsky developed the 'poverty of stimulus' argument. Whether or not one agrees with the premises of deep learning for language acquisition, it should be admitted that the project is very interesting and should be pursued.

I agree. Chomskians have been revising the actual content of UG pretty much since its inception. The name is now probably a misnomer and should be changed to Language Acquisition Device.

The precise meaning has certainly changed, but it's always been a misnomer which easily confuses people

This absolutely. It's unfortunate that a great idea like UG has been so often discarded based on nothing more than its name.

>tfw had to conduct an Empirical research study during my second year of university
>couldn't fucking think of anything to do
>tell my professor my idea, which was a fucking study on memes, give her "drop spaghetti" as an example of why (((this community))) I'm examining is worth exploring
>her stoic German expression breaks into a cackle
>"hahahah ok do it user :))"
>tfw get the highest grade in the class

Stop flouting relevance.

I'm sorry user I just thought it was an amusing anecdote.

No place for amusement on Veeky Forums.

>Do you think that people are born with the innate capacity for language?
The potential for it, yes; I believe that's not really debated. But it's still something you have to learn.

Left to their own devices, for instance, deaf people will not create a language for themselves. They'll communicate through mime, but without someone to teach them sign language, they never develop any sort of linguistic communication. They do not reach the point where they associate symbols with objects, and give things "names".

There was a pretty wide study of this involving deaf children who grew up in developing nations with no access to sign language. There was at least one particularly interesting case where a previously "languageless" individual learned sign language as an adult, and was able to describe an entirely alien thought process based on images he had before he made that breakthrough. (Try to imagine thought without language, and you'll get some idea as to how alien).

Before, he was part of a group of languageless individuals that spent much of their time together, but he broke off all social contact with them after learning sign language, saying that "he couldn't go back to that darkness".

Not that deaf people won't develop their own unique sign languages when briefly exposed to someone who can sign and teaches them the fundamentals, as has happened a few times.

Those who grow up languageless, even in cases where there's no abuse or malnutrition involved, sometimes never pick up certain cognitive abilities, such as the "where will Sally look for her doll", or the "to the right of the blue door" tests. So it seems even certain forms of analysis are language dependent, and have to be learned at a young age.

Clearly, there must have once been some sort of fluke where an individual made this breakthrough on his own, but generally speaking, it seems language is not instinctual, being more nurture than nature, even if there are certain universal verbal signals.

Stop. Look up Nicaraguan Sign Language.

>Fuck his 9/11 tinfoil hat theories

Here's Chomsky talking about how 9/11 conspiracy theories are stupid.

youtube.com/watch?v=TwZ-vIaW6Bc

That was the result of a school being set up for deaf children where a few of them learned the fundamentals of signing and propagated that while making it their own. Not a spontaneous language.

>Not that deaf people won't develop their own unique sign languages when briefly exposed to someone who can sign and teaches someone the fundamentals, as has happened a few times.
It's thus not unheard of. (That wasn't meant to be a pun.)

But generally, children with no such exposure never develop language. This is true even of children who can hear, but are socially isolated. That, however, usually entails severe abuse and being stimulation deprived to boot, so such individuals don't make for good studies.

What 9/11 tinfoil hat theories are you talking about?

He usually dismisses 9/11 conspiracy theories as "distracting" from the "real problems".

He's more infamous for his left-libertarian and anti-zionist views, but yeah, we agreed we weren't going to talk about that.