
On the trouble with “Global Warming” 6 December 2010

Posted by Oliver Mason in linguistics.

Global warming is a real danger to life on the planet. As I write this, another extremely cold winter approaches, with snow and ice (and -9 degrees) already starting in late November. Global warming? WTF!?

The term “global warming” is obviously problematic, for several reasons, two of which I will discuss here: firstly, the climate is a complex system, and secondly, the climate is not the weather. Both reasons have links to linguistics, which is my justification for talking about them on this blog.

Climate is a complex system

Chaos theory was discovered by a meteorologist, Edward Lorenz, running weather models on a computer. Small changes in the starting conditions result in vastly different outcomes. That is why nobody can reliably predict what is going to happen: there is no clearly visible link from A to B, from the conditions today to the conditions at whatever time in the future. Reducing this to a simple statement such as “temperatures will increase globally” is dangerous, as the climate is not that simple, and you then get people who demolish your argument on the grounds of inaccuracy.
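To make this sensitivity concrete, here is a minimal sketch in Python using the logistic map, a textbook chaotic system, as a stand-in for a real weather model; the starting values and the number of steps are arbitrary choices for illustration.

# Minimal illustration of sensitivity to initial conditions, using the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0). A toy
# stand-in for a weather model, not an actual climate simulation.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from the starting value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # starting condition differs by one millionth

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

# After a few dozen steps the two trajectories bear no resemblance to
# each other, despite the almost identical starting conditions.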

Language can be seen as a complex system as well; it is influenced by so many factors that it is not possible to make reliable predictions about how it will change. Statements such as those about the bad influence of text messaging on the English language are therefore not appropriate; broad generalisations of this kind miss the point about the many varieties and language communities that make up the “English” language.

The climate is not the weather

This point is somewhat related: ‘weather’ is what we’ve got right now, while ‘climate’ is a broader, more general tendency. So while we might indeed have a cold winter, if we have a correspondingly hotter summer, the average annual temperature might still rise, even if it doesn’t feel like that as you shiver your way to work in the morning. And this year is a single data point, which in context might turn out to be an outlier if next winter is really warm. Weather is somewhat unpredictable and chaotic; otherwise the Met Office would be out of work.

A global climate also means that it could become colder in Western Europe while other regions of the Earth heat up, and the people of Tuvalu will have a different view of melting ice caps than farmers in Arizona.

Michael Halliday compares langue and parole (or competence and performance) with climate and weather: we can observe one directly (weather/parole/performance), while the other (climate/langue/competence) can only be perceived indirectly, through studying the former. But essentially they are different views of the same phenomenon, one short-term and one long-term.

A solution?

Coming to the point of this post, I would suggest abandoning the term “Global Warming” in favour of “Climate Change”. Change can go in different directions, so it is harder for climate-change deniers to win easy points whenever the weather turns colder, and it also emphasises the climate as opposed to the weather. This might seem like a simplistic point, similar to the political correctness debate, but lexical choices when representing reality in language really do matter.

And thus we have moved from complex systems via Halliday’s view of langue and parole to Critical Discourse Analysis.

Sentence Disambiguation – Modality to the Rescue! 12 November 2009

Posted by Oliver Mason in linguistics.

I’m currently reading a new book on iPhone development, iPhone Advanced Projects, published by Apress. I will probably talk about the book in a later post, but today I will just focus on one sentence I came across on page 212:

I also adore the capability that I have to flag articles from folks I follow on Twitter and save them to Instapaper.

This sentence has (at least) two readings, which are probably only obvious to a linguist (and who else would care?); I highlight the differences by adding commas:

  1. I adore the capability, that I have to flag articles…
  2. I adore the capability that I have, to flag articles…

In the first case you adore the capability. And the capability is that you have to do something (flag articles). Sounds rather odd, doesn’t it? The second case is more clear-cut and easy to understand: you can flag articles, and that’s the capability you have and adore.

So in terms of pattern grammar, you’re looking at either N that or N to-inf with capability. If you consult the Cobuild Dictionary, you’ll find that capability only occurs with the second pattern, the to-infinitive, so you can rule out the first reading.
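For illustration, here is a rough sketch in Python of how one might check the Cobuild finding against a corpus of one’s own, using a crude count of the word following capability as a stand-in for proper pattern-grammar annotation; corpus.txt is a placeholder for any plain-text corpus file.

import re
from collections import Counter

# Tokenise a plain-text corpus very crudely (placeholder file name).
with open("corpus.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z']+", f.read().lower())

# Count the word immediately following each occurrence of 'capability'.
following = Counter(
    tokens[i + 1]
    for i, tok in enumerate(tokens[:-1])
    if tok == "capability"
)

# If Cobuild is right, 'to' (the to-infinitive pattern) should dominate
# and 'that' should be rare or absent.
print(following.most_common(5))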

Another possibility would be to look at it in terms of modality: here we could argue that capability prospects a modality of ability, whereas have to expresses obligation; the two don’t go together. Hence the first reading sounds odd, as a capability does not usually force you to do anything, but rather enables you. It could, however, be used to signal sarcasm or irony, as in (the obviously made-up) I really like that my new computer gives me the capability to have to save my work every five minutes. This is clearly an odd sentence, suggesting that modality works along similar lines to discourse prosody as described by Louw (1993) [for the full reference follow the previous link].

Here we have discussed two ways of disambiguating a sentence, one based on grammatical properties (or typical environments), and one on a non-syntactic phenomenon (modality). Pattern grammar allows us to identify what the typical usage would be, whereas modality explains why the first reading is at odds with the words involved. Now all we need is a ‘pattern grammar’ for modality!

Collocations – Do we need them? 3 March 2009

Posted by Oliver Mason in linguistics.

The concept of collocation was introduced in the middle of the last century by J.R. Firth, with his famous dictum “You shall know a word by the company it keeps”. Words are not distributed randomly in a text; instead they stick with each other, their ‘company’. Starting in the late 1980s, increased interest in collocation by computational linguists and others working in NLP has led to a proliferation of methods and algorithms for extracting collocations from text corpora.

Typically one starts with the environment of the target (or node) word and collects all the words that are within a certain distance (or span) of the node. Then their frequency in a reference corpus is compared with their frequency in the environment of the node, and from the ratio of frequencies we determine whether they are near the node by chance or because they are part of the node’s company. A bewildering variety of so-called significance functions exists, the oldest probably being the z-score, used by Berry-Rogghe in 1973; later, Church and Hanks (1990) popularised mutual information and the t-score, which now seem to have been displaced by log-likelihood as the predominant measure of word association.

The problem is: all these metrics yield different results, and nobody knows (or can tell) which are ‘right’. Mutual information, for example, favours rare words, while the t-score promotes words which are relatively frequent already. But apart from rules of thumb, there exists no linguistic justification for preferring one metric over another. It is all rather ad hoc.
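To make the node-and-span procedure and the contrast between the metrics concrete, here is a simplified sketch in Python that scores collocates with mutual information and the t-score side by side; the corpus file and the span size are placeholder assumptions, and the formulas are the usual textbook approximations rather than any particular published implementation.

import math
import re
from collections import Counter

SPAN = 4  # words either side of the node (an arbitrary choice)

with open("corpus.txt", encoding="utf-8") as f:  # placeholder corpus
    tokens = re.findall(r"[a-z']+", f.read().lower())

N = len(tokens)
freq = Counter(tokens)

def collocates(node):
    """Score every word found within SPAN words of the node."""
    observed = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            for j in range(max(0, i - SPAN), min(N, i + SPAN + 1)):
                if j != i:
                    observed[tokens[j]] += 1
    scores = []
    for word, o in observed.items():
        # Expected co-occurrence count if the words were independent
        e = freq[node] * freq[word] * 2 * SPAN / N
        mi = math.log2(o / e)           # mutual information
        t = (o - e) / math.sqrt(o)      # t-score
        scores.append((word, o, mi, t))
    return scores

# Sorting by MI surfaces rare words; sorting by t-score surfaces frequent ones.
for word, o, mi, t in sorted(collocates("fire"), key=lambda s: -s[2])[:10]:
    print(f"{word:15s} O={o:4d}  MI={mi:5.2f}  t={t:6.2f}")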

Part of the problem is that collocation as a concept is rather underspecified. What does it mean for a word to be ‘significantly more common’ near the node word, as opposed to being there just by chance? In a sense, collocations are just diagnostics: we know there are words that are to be expected next to bacon, and we look for collocates and find rasher. Fantastic! Just what we expected. But then we look at fire, and find leafcutter as a very significant collocate. How can that happen? What is the connection between fire and leafcutter? The answer is: ants. There are fire ants, and there are leafcutter ants, and they are sometimes mentioned in the same sentence.

This leads us to an issue which I believe gets us on the right track in the end: the fallacy of using the word as the primary unit of analysis. In the latter example, we are not dealing with fire and leafcutter; we are instead concerned with fire ants. Once we realise that, it is perfectly natural to see leafcutter ants as a collocate, whereas we would be surprised to find engine, which instead is a collocate of the lexical item fire.

So, phraseology is the clue. If we get away from single words and instead consider multi-word units (MWUs), then we also have an explanation for collocations. Single words form part of larger MWUs, together with other single words. So leafcutter often forms a unit with ants, as does fire. More generally, MWUs such as parameters of the model are formed of several single words, and here we can observe that parameters and model occur together. But they form a single unit of analysis, and only when we break up this unit into single words can we observe that parameters and model commonly occur together.

From this we can define a very simple procedure to compute collocations: from a corpus, gather all the MWUs that are associated with a particular word, get a frequency list of all the single-word items in those MWUs, sort it by frequency, and there we are.
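A naive rendering of this procedure might look as follows, treating recurrent n-grams above a frequency threshold as a crude stand-in for proper MWU recognition; the thresholds and the corpus file are, again, placeholder assumptions.

import re
from collections import Counter

MIN_FREQ = 5  # how often an n-gram must recur to count as a unit
MAX_N = 4     # longest unit considered

with open("corpus.txt", encoding="utf-8") as f:  # placeholder corpus
    tokens = re.findall(r"[a-z']+", f.read().lower())

# Recurrent n-grams serve as our stand-in for MWUs.
ngrams = Counter(
    tuple(tokens[i:i + n])
    for n in range(2, MAX_N + 1)
    for i in range(len(tokens) - n + 1)
)
mwus = {gram: count for gram, count in ngrams.items() if count >= MIN_FREQ}

def collocates_via_mwus(word):
    """Frequency list of the words sharing a recurrent unit with `word`."""
    companions = Counter()
    for gram, count in mwus.items():
        if word in gram:
            for other in gram:
                if other != word:
                    companions[other] += count
    return companions

print(collocates_via_mwus("fire").most_common(10))

Note that no significance test appears anywhere in this sketch: the only statistic involved is raw recurrence.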

To conclude, collocation is an epiphenomenon of phraseology, a side-effect of words forming larger units. Phraseological units contain multiple single words, which are picked up by collocation software because they commonly occur together in a text; and the reason they occur together is that they form a single unit. Once we look at text in terms of MWUs, the need for collocation disappears. Collocation just picks out the constituent elements of multi-word units.

One could of course argue that this is a circular argument, that we are simply replacing a procedure that calculates collocations with one that calculates MWUs. But the difference between the two procedures is that MWU recognition does not require complicated statistics (for which I find it hard to see a justification), but instead simply looks at recurrent patternings in language. MWUs are re-usable chunks of text, which can be justified on the grounds of usage. Collocation is a much harder concept to explain and to integrate into views of language. And, as it turns out, we don’t really need it at all.

References

  • Berry-Rogghe, G.L.M. (1973) “The Computation of Collocations and Their Relevance in Lexical Studies”, in A.J. Aitken, R.W. Bailey and N. Hamilton-Smith (eds.) The Computer and Literary Studies. Edinburgh: Edinburgh University Press, pp. 103-112.
  • Church, K. and Hanks, P. (1990) “Word Association Norms, Mutual Information, and Lexicography”, Computational Linguistics 16(1), pp. 22-29.
  • Firth, J.R. (1957) “A Synopsis of Linguistic Theory 1930-1955”, in Studies in Linguistic Analysis. Oxford: Philological Society.