Tuesday, 10 December 2013

Language change: The descriptive versus the prescriptive approach



Introduction

Even though language is always in a state of flux, there are certain linguistic features that are unpopular and derided by those who do not use these features. It could be that a particular pronunciation of a word is heavily stigmatised, or a new usage of an existing word might be deemed incorrect.

Linguistic prescriptivism is the practice of championing one variety or manner of speaking a language over another. It includes judgements on what usages are socially proper and politically correct.

It is evident that some degree of standardisation is needed in language. A standardised variety of a language is necessary if the majority of the language's speakers are to understand television programmes, newspapers, books, journals and other methods of mass communication.

For language learning, we also need a standardised variety of language, even if it is a little stilted and formal. If you decide to learn business French from a series of books and DVDs, for example, it would be no use if those books and DVDs taught a moribund regional variety with fewer than 500 speakers!

'How do you say tour d'Eiffel in French?'

The main aims of linguistic prescription are to specify standard language forms, either generally or for specific purposes (e.g. what style and register are appropriate in a legal brief), and to formulate these in such a way as to make them easily taught or learned.

In all areas of language, it is meaningful to describe some usages as, at the very least, inappropriate in particular contexts, and it is helpful to draw up workable guidelines for language users seeking advice in such matters.

Descriptivism versus prescriptivism

Problems begin to arise when language prescription clashes with the discipline of Linguistics, which advocates the descriptive approach. Linguists observe and record how language is actually used in the real world, not how it 'should' be used or how people always get it wrong.

Linguists study language objectively; that is, they do not pronounce value judgements on, say, a new meaning of an existing word that is gaining traction among younger speakers, or the way in which lower-class people from inner-city London pronounce a certain sound.

Also, linguists are concerned that certain sentiments linked to language prescriptivism may imply that some forms of language are incorrect, improper, illogical, lazy or of low aesthetic value (Edwards, 2009), when in reality these ideas cannot be backed up with empirical research.

Prescription is, by nature, resistant to language variation and change, because these phenomena tend to wreak havoc with the neat and orderly rules that you find in grammar manuals and style guides.

Even the standard, prestigious language isn't set in stone. Pronunciations, spellings and meanings of words have changed over time, even in Standard English. Any attempt to write a grammar for Standard English without allowing for variation is futile, because it ends up creating a sterile language that nobody actually uses.


Language change is not harmful, and is never responsible for the death or decline of a language. We do not have to like or endorse a particular linguistic variable, but we do need to be careful when we slap a value judgement on a linguistic variable, and by extension, on the group of people who use that feature.

Frequently, a group of speakers will be branded as stupid simply because they use a proscribed linguistic feature. This attitude is very common on the internet today, but it rests on two big assumptions: first, that there really is a link between intelligence and the use of a certain linguistic feature; and second, that the linguistic variable in question really is incorrect in some objective way.

It is true that certain features are associated with the lower class. The sociolinguist William Labov carried out a famous study in New York City in 1966. He wanted to see if there was a link between rhoticity and social class.

Rhoticity is the pronunciation of the <r> sound in fourth floor as a distinct consonant; in non-rhotic accents, the <r> is not pronounced, but may lengthen the preceding vowel instead. Most Americans, Canadians, Scots and Irish people are rhotic; by contrast, most English people, Australians and New Zealanders are non-rhotic.

In 1966, New York was variably rhotic (as it still is). The rhotic pronunciation of <r> is standard in American English, whereas non-rhoticity in New York was associated with the lower social classes. Labov went into three different department stores, each associated with a particular social class, and asked various people in each store where the 'fourth floor' was. He noted whether the individual used a rhotic or non-rhotic pronunciation, and then asked them to repeat it, in order to elicit a more careful articulation.

Sure enough, there was a distinct correlation between social class and rhoticity in New York: non-rhoticity was more common among the lower class than among the middle and upper classes.
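
To make the idea of a correlation between a linguistic variable and social class a little more concrete, here is a minimal sketch (in Python) of how responses from such a survey could be tabulated. The store names are the ones Labov actually used; the counts are invented purely for illustration and are not his real figures.

# Toy tabulation in the spirit of Labov's department-store study.
# The counts are invented for illustration only - they are NOT Labov's data.
from collections import Counter

responses = {
    "Saks (upper)":     Counter(rhotic=30, non_rhotic=32),
    "Macy's (middle)":  Counter(rhotic=25, non_rhotic=45),
    "S. Klein (lower)": Counter(rhotic=8,  non_rhotic=63),
}

for store, counts in responses.items():
    total = counts["rhotic"] + counts["non_rhotic"]
    print(f"{store:18} {100 * counts['rhotic'] / total:5.1f}% rhotic (n={total})")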

Can we then say that non-rhoticity implies stupidity? After all, it was (and still is) associated with the lower class in New York, so surely it is a reliable way of telling that somebody is uneducated?

No. Of course not. Even people belonging to the middle and upper classes in New York City were non-rhotic at least some of the time. Being non-rhotic in New York does not mean that you're stupid or poorly educated. There is nothing inherent to non-rhoticity that implies that speakers who have that feature are lacking in mental faculties in some way.

After all, non-rhoticity is the norm in England. Today, rhotic accents are chiefly found in the South-West of England (the 'West Country'), but rhoticity is not the prestige pronunciation; the pronunciation without [r] includes the prestigious centre of London (Mesthrie et al., 2009).

Furthermore, there is a stereotype that people living in the West Country are a bit backward and uneducated, but that is all it is: a stereotype. Does it mean that rhotic speakers in England are necessarily uneducated and stupid? Of course not!

The Wurzels - famous for popularising the 'scrumpy and Western' genre of music

The presence or lack of rhoticity may reveal a few things about the speaker's origin and social class. New Yorkers might even make a conscious effort to adopt rhoticity, just as people from South-West England might choose to eschew it, in order to identify with a higher social class.

This is also true for other linguistic variables. Speakers of an English variety that uses double negatives might make an effort to weed double negatives out of their speech in order to avoid ridicule, and to be identified as a member of a higher social class.

Objections to non-standard language usage

As I mentioned earlier, you might think that a certain linguistic feature is objectively wrong because it is incorrect, improper, illogical, lazy or of low aesthetic value.

I am now going to look at all of these objections, and explain why none of them hold water.

Incorrect language:

There is a difference between non-standard usage and a genuine speech error. When somebody makes a true speech error, they tend to be aware of it, and they might correct themselves spontaneously: 

'Here are the knives of forks - I mean, knives and forks.'

'I can't find the jickle par - the pickle jar.'

All native speakers of a language know the grammatical rules of their first language(s). While it's true that people continue to make spelling mistakes into their adulthood, the spoken language is primary. Unless you have a genuine language disorder, you learn your native language(s) from your parents/caregivers, and as a child, you will pick up language patterns from your peers. 

Many people are raised bidialectally; that is, they learn one variety of the language at home and a different variety at school. That is why children are frequently told off at school for using non-standard features such as double negatives and verb forms like 'brung' for 'brought.'

It's not that the child acquired language incorrectly, and needs to be taught English all over again. Instead, they acquired these non-standard but ubiquitous features from their peers or family members.

The point is, you can't learn language incorrectly, unless you spend the first eleven years of your life with absolutely no social interaction whatsoever. The dialect you learn as a child might be non-standard, but that doesn't mean it's wrong. We must not confuse non-standard language with incorrect language.

In some parts of America, people sometimes say 'on accident,' in contrast to the standard usage, which is 'by accident.' To many of us English speakers, 'on accident' sounds strange, and probably wrong, because it is unfamiliar; I've never heard anyone say it in the wild here in the UK. However, 'on accident' seems to be pretty common in some areas of the US, especially the north-west (click here for a funky coloured map showing its distribution).

Is 'on accident' a simple matter of using the wrong preposition with the wrong noun? After all, grammar manuals and internet blogs will tell you that it's 'on purpose' and 'by accident'. Surely 'on accident' is a language error? Well, the problem with this line of reasoning is that prepositions can vary, even within the same phrase. You can say that 'John is different to Mary', but you can also say that 'John is different from Mary.' Some people even say that 'John is different than Mary.' All of them mean exactly the same thing.

In this case, the preposition doesn't have much meaning of its own, as evidenced by the fact that three different prepositions are in variation in the 'different to/from/than' construction. Likewise, 'on accident' in comparison to 'by accident' doesn't change the meaning of the phrase. It's simply a variation.

Also, the fact that nobody (to my knowledge) says '*to accident' or '*for accident' is further evidence that 'on accident' is a genuine linguistic variable and not just a speech error made by the young or the uneducated.

Improper language:
  
Objections that a linguistic feature is improper are usually linked to the notion of appropriateness. Of course, if you are giving a speech to a live audience, or writing a doctoral thesis, you want to be understood by your entire audience. In that situation, you might make sure that your speech or writing conforms mostly to the standard.

However, when you're at home with your close family, you will speak the usual language of the home. This might be quite divergent from the standardised variety of English that you would use when giving a university lecture, for example. Similarly, if you're down the pub with your mates (or in the bar with your buddies), you're going to speak your local variety of English. It simply wouldn't be appropriate to speak the Queen's English in that situation!1

Nevertheless, there is no compelling reason why anyone should shed all traces of their regional dialect in the spoken language. Until fairly recently, the BBC required all of its newsreaders to speak Received Pronunciation (RP); all regionalisms were banned.

Wisely, the BBC changed its stance. Newsreaders, reporters and TV presenters are now free to speak in their native dialect. Does this mean that the BBC has devolved into an unintelligible mess, a hodge-podge of the worst variety? I'll leave that question unanswered (!), but the promotion of regional dialects has not stopped viewers and listeners from being able to understand what people on the BBC are saying.

This has not stopped a whole slew of complaints being written to the BBC about the 'improper' language use of one of its presenters. Oh yes, listeners will complain at the drop of a hat.

This is completely ludicrous, because there is not a single presenter who is unintelligible or even remotely difficult to understand. Perhaps one of them slips in a double negative on Match of the Day, or maybe a newsreader has the indiscretion to use one glottal stop too many.

On a more serious note, though, we shouldn't confuse appropriateness with correctness. These are two separate things.

Illogical language:

Certain linguistic variables are criticised on the basis that they are supposedly illogical.

Perhaps a certain grammatical construction doesn't make logical sense, or maybe it adds unnecessary redundancy.

As humans, we like things to be neat and orderly, but language doesn't work in that way. It's full of inconsistencies, variation and redundancies.

Consider double negatives. If we were to judge language by how logical it is, then yes, double negatives would be incorrect. After all, don't we all know that two negatives cancel each other out?

In Standard English, this is true. In many other dialects of English, it is demonstrably not true. Saying 'I don't need nothing' in many non-standard English varieties doesn't mean that you don't need anything. It means that you really don't need anything. Here, the extra negative word is intensifying the other, not cancelling it out.

Outside of English, various languages allow multiple negatives in one clause. Languages in which multiple negative words combine to express a single (often emphatic) negation, rather than cancelling each other out, are said to have negative concord. Portuguese, Spanish, Persian, Russian, Serbian, Afrikaans, Latvian and Greek all have negative concord.

Consider this example from Serbian:

Niko nikada nigde ništa nije uradio

For the benefit of those of you who don't speak Serbian (there's always someone!), the above sentence would literally be translated as ‘nobody never did not do nothing nowhere’.

In Standard English we would say ‘nobody has ever done anything anywhere’, but the (perfectly grammatical) Serbian equivalent has no fewer than five negative words in one sentence.

Are we to claim that Serbian and all other languages that have negative concord are illogical because they use multiple negatives to intensify the meaning? I am not suggesting that we should use double negatives in formal speech and writing, but we do need to recognise that negative concord is a long-established feature in some English varieties.

Also, certain features are criticised for being redundant, and therefore illogical on that basis. For example, people frequently object to the use of sentence-final or doubled prepositions in sentences such as:

Where are you at?

Let's take the coats off of the hook.

Some people object to the use of 'at,' when 'where are you?' works just fine. 'Off of' is also frequently proscribed on the basis that simply saying 'off' conveys the same sense. 

'Off of' is more common in speech than in writing and, yes, it should be avoided in a professional context because it has a negative stigma attached to it. However, criticising these prepositions as superfluous demonstrates an imperfect understanding of how language really works. 

Every language is full of little words that have no meaning in themselves, but function as part of an idiom. According to the Merriam-Webster dictionary (2013), 'off of' was first recorded in 1567, so it has some pedigree! It's hardly a case of young people ruining the English language by sprinkling extra prepositions into every other sentence.

It is unwise to proscribe a certain linguistic variable because it is illogical or redundant. Even little, apparently meaningless words can be used for emphasis. 

Furthermore, my native speaker intuition leads me to believe that 'where are you?' and 'where are you at?' don't quite mean the same thing. I think that 'where are you?' sounds a little more blunt and accusatory than 'where are you at?' but this is just my opinion and I could be wrong.

Also, 'where's your head at?' and 'where's your head?' certainly do not mean the same thing! The extra 'at' is idiomatic, even though it has little meaning in itself. 

This is what happens if you try to define the word 'at.'

Lazy language:

This argument tends to be intertwined with the nature of the speaker(s) who use(s) a particular feature. 

African American Vernacular English (AAVE), or 'Ebonics', is a variety of English that is associated with African-Americans, as its name would suggest. As a dialect of English, it does not enjoy much prestige, and is frequently regarded by speakers of Standard English as a badly spoken version of their language, marred by a profusion of ignorant mistakes in grammar and pronunciation (Pullum, 1999).

If common opinion is to be believed, AAVE is a lazy and corrupted version of Standard English. As Lanehart (2001) points out, it is no coincidence that the speech of a traditionally despised and marginalised group of people should also be derided. Let's take a look at what sets AAVE apart from Standard English. Is it really just a lazy and debased form of English?


A cursory glance at some of the features of AAVE might suggest that it is indeed simpler than Standard English. AAVE does not have the preterite (past tense) -ed ending characteristic of most English varieties. Does that mean that it is lazy?

Not so. Quite the opposite, in fact.

As Pullum (1999) points out, AAVE has its own set of rules that happen to be distinct from Standard English. AAVE features an optional tense system with four distinct past tenses and three future tenses. AAVE is able to express distinctions in terms of when in time an action took place; such distinctions cannot be expressed so easily in Standard English. 

Let's have a look at the four past tenses of AAVE:

I been flown it (pre-recent)
I done fly it (recent)
I did fly it (pre-present)
I do fly it (past inceptive)

Also, AAVE has a habitual aspect, indicating that the subject carries out an action on a regular, or habitual, basis. 'He be working Tuesdays' is habitual, meaning that he works on Tuesdays on a regular basis.

In Standard English, the habitual is not distinct from the simple present. Instead, we have to use an adverb such as usually or a phrase such as 'on a regular basis.' 

So is AAVE lazy? No, of course not. It's different to Standard English, and in some cases, it does seem like words are 'missed out.' In AAVE, the copula (the verb 'to be') is often dropped in the present tense, in sentences such as 'you crazy' or 'she my sister.' 

However, copula dropping is governed by rules; it doesn't just happen haphazardly. Only the forms 'is' and 'are' can be omitted. Likewise, the copula cannot be omitted at the end of a clause: in AAVE, *'Do you know who he' would be ungrammatical. You'd have to say 'Do you know who he is', just as in Standard English (Pullum, 1999).
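
To show how these two constraints amount to a rule rather than random omission, here is a small, purely illustrative sketch (in Python, with a made-up function name). It is a toy model of the two conditions just described, not a real grammar of AAVE; the point is simply that the pattern can be stated as explicit conditions, which is exactly what 'rule-governed' means.

# Toy model of the two copula-omission rules described above (Pullum, 1999):
# only 'is' and 'are' may be dropped, and never at the end of a clause.
OMITTABLE_FORMS = {"is", "are"}

def copula_can_drop(form, clause_final):
    """Return True if this occurrence of the copula could be omitted."""
    return form.lower() in OMITTABLE_FORMS and not clause_final

print(copula_can_drop("is", clause_final=False))  # 'She (is) my sister'        -> True
print(copula_can_drop("is", clause_final=True))   # 'Do you know who he *(is)'  -> False
print(copula_can_drop("am", clause_final=False))  # 'I am tired'                -> False ('am' does not drop)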

Besides, languages such as Russian, Hebrew and Arabic allow copula dropping in similar circumstances. These languages enjoy immense prestige in the countries in which they are spoken, so copula dropping is hardly an indicator of laziness or sloppy speech. Copula dropping is a way of economising language: if a word serves little function on its own, it is liable to be lost (perhaps 'off of' will become extinct at some point in the future!)

More broadly, language has a tendency to simplify in some areas over time. Inflections (word endings), consonants at the end of a syllable, unstressed vowels and redundant function words are frequently lost in languages across the world. 

One of the chief goals of language is efficient communication, so if dropping a syllable from a word or losing a suffix makes the language more efficient without creating excess ambiguity, then it is not detrimental to the language.

Low aesthetic value:

This is a very subjective standard by which to judge language. Beauty in language, as in other areas, is an elusive concept that is difficult to pin down.

As things stand, there is no objective and fair way to measure aesthetic value or beauty in language.

People will often describe a language or dialect they don't like as 'guttural' or 'harsh.' You will often hear German being described in that way. In fact, German does not have any more so-called 'guttural' (velar or uvular) sounds than does French, which is often rated highly. Don't we all know that French is the language of love?

Not this thing again.

Attitudes towards regional accents are based more on the social connotations and prejudices surrounding the location or social group associated with that accent than on the sound itself, as demonstrated by experiments using listeners from outside the community who do not recognise the accents they are rating.

In the UK, the Birmingham (Brummie) accent and the London accent are two of the least-regarded regional accents in the country. The interesting thing is that American listeners, who do not recognise a Birmingham accent when they hear one, who know nothing about Birmingham and who probably don't even know where it is, do not find the Birmingham accent unpleasant at all. 

And everything they know about London leads them to find London accents highly attractive (Andersson & Trudgill, 1990, p. 136).

People who speak with an RP accent are commonly perceived to be more authoritative and intelligent than, but not as nice or trustworthy as, people who speak with a local accent.

Experiments have shown that even the same speaker can be perceived differently depending on what accent they're using at that moment!

Let's not vilify a linguistic variable, or even an entire accent or dialect, on the basis that it is unaesthetic or grating on the ear. These are subjective opinions that tend to be associated with prejudice.

Where to draw the line?

If a community of speakers uses a particular word in a novel way, or pronounces a sound in a certain way, then that usage is correct within that speech community.

This is not to say that we should all embrace an 'anything goes' attitude towards language. One of the chief purposes of language, as well as to create and maintain social bonds, is to give and receive information. 

What good is a scientific journal that uses incorrect terminology? Would you want to read a newspaper article that contains no spaces or punctuation? Would you pepper a job interview with Cockney rhyming slang or obscure Shakespearean insults? Of course not. 

Nor would we say that it is fine to use the word 'persecute' instead of 'prosecute' in an essay, for example. That is clearly an error, because there is no cohesive group of English speakers who use the word 'persecute' to mean 'to start criminal proceedings against somebody.'

It is important that legal terms are used correctly in a court of law, or in a country's written constitution. Ambiguity in these instances could be damaging. Legal language is a distinct register that contains a high amount of fixed phrases and jargon specific to that area. Court proceedings also tend to be highly structured and ordered (unless you're in a kangaroo court. Then it's bad news.) However, does it matter if a lawyer or defendant speaks in a rhotic or a non-rhotic accent?

If a linguistic feature is in common and consistent use in a particular speech community (which could be the English-speaking world as a whole), then it's not incorrect, even if it is non-standard. Some features are simply not appropriate in a formal setting, just as it wouldn't be appropriate to put on an ultra-RP accent when you're down the pub with your mates.

For example, the use of the word 'literally' to mean 'figuratively', as in 'I'm literally over the moon', is frequently labelled as incorrect. But why? It's only 'incorrect' because it is not the original meaning of the word. However, if we followed that logic, then we should also insist that 'silly' must only mean 'weak', and 'nice' must only mean 'foolish.' After all, that is what these words meant about 700 years ago (Harper, 2013).

Instead, this new usage of 'literally' is appropriate in informal situations, but not in, for example, academic writing or a serious newspaper article. Nearly all English speakers are familiar with this new usage of 'literally,' and I'm sure many of us have used it in that way.

Variation versus speech errors

Linguists do, of course, recognise that certain linguistic features are heavily stigmatised. The tendency of younger British people to pronounce /t/ as a glottal stop in words such as 'letter' and 'rattle' seems to give some language pundits grounds to declare the imminent death of the English language, even though they themselves probably glottalise the /t/ in 'witness.'

/t/-glottaling is associated with lower-class, urban speech. But there is nothing inherently lazy or wrong about using the glottal stop. These are just people's perceptions. A feature which is associated with the lower-class might be stigmatised, but sociolinguists are interested in the exact distribution of such variables. 

For example, do men glottalise more than women? Is /t/-glottaling actually increasing throughout the UK? Are middle-class people glottalising their t's more than they used to? To me, these questions are far more interesting and insightful than simply declaring that 'people shouldn't glottalise their t's; it's lazy and incorrect,' and having done with it.

You can see that there is a big difference between replacing a /t/ with a glottal stop and saying the word 'persecute' instead of 'prosecute' in a court of law. T-glottaling is familiar to most people in the UK, and it is, to some extent, linked with the lower social class. 

T-glottaling is familiar enough in the UK not to cause potential confusion or ambiguity in, for example, a court of law. On the other hand, mixing up the definition of 'prosecute' and 'persecute' is not associated with any particular group of people from a geographical region or social class. It's not a true linguistic variable.

Even linguists aren't immune to making value judgements. I think many of us have a tendency towards a 'not in my back yard' attitude towards language change. That is, we accept the fact that language change is natural, but doesn't it just irritate us when young people from Somewhereville pronounce that sound in that way?

Of course, we don't have to like a new word or pronunciation that enters the language. We might rejoice when a neologism dies out within five years of its appearing everywhere on TV. But no one can stem the flow of language change.

Conclusion

To sum up, it's time we abandoned the idea that there is a 'right' way and a 'wrong' way of using language. Instead, we should be analysing language use in terms of appropriateness for the context. 

As I have shown, all attempts to vilify a non-standard linguistic variable on linguistic grounds fall flat when scrutinised. Nevertheless, having a standardised variety of language is useful, and I do not advocate using a stigmatised language feature where it is inappropriate to do so.

In my next article, I am going to look at how and why words change their meaning over time, and the ways in which new words appear in the language.

Footnotes

1 - Unless it's a stag party, and you're dressed up as Queen Victoria. In that case, please continue.

References

Andersson, L. & Trudgill, P. (1990) Bad Language. Blackwell.

Edwards, J. (2009) Language and Identity: An Introduction. Cambridge: Cambridge University Press.

Harper, D. (2013) Online Etymology Dictionary. http://www.etymonline.com [Accessed 7/12/13]

Lanehart, S. (2001) Principles of Linguistic Change, II: Social Factors. Oxford: Blackwell.

Merriam-Webster Online Dictionary (2013) 'Off of.' http://www.merriam-webster.com/dictionary/off%20of [Accessed 8/12/13]

Mesthrie, R., Swann, J., Deumert, A. & Leap, W. (2009) Introducing Sociolinguistics, 2nd edition. Edinburgh University Press.

Pullum, G. (1999) 'African American Vernacular English Is Not Standard English With Mistakes', in R. S. Wheeler (ed.) The Workings of Language. Westport, CT: Praeger.
