by Tony McEnery and Andrew Hardie; published by Cambridge University Press, 2012
 

Answers to exercises: Chapter Seven discussion questions

Q7-1) What is the value of, and potential problems with, corpus-based functional-typological analysis?

We do see a value for typology in contrastive studies based on corpora drawn from a relatively small subset of languages and language families. Such studies cannot replace typological surveys that cover many or most of the world’s language families, but what they can do is provide a setting within which particular concepts can be studied at a level of detail that cannot be accomplished via the normal typological survey method.

Typological surveys normally use as their basic data published grammars of the languages within the survey sample. This has a number of advantages – for example, it allows the typologist to work straight away at a higher level of abstraction than the original author of the grammar did; items of information such as basic word order do not need to be worked out from direct observation; rather, the observation of the original grammarian can simply be taken as a data point in the survey. However, being further from the raw data has disadvantages too – for example, the inability to consider issues such as quantitative shifts in the usage of different features in different types of language context, or to assess the semantics and pragmatics of a function across many (that is, dozens or hundreds of) examples.

These kinds of data-rich analyses are exactly the kinds of things that, as we have seen, corpus methods excel at. On the other hand, analysing any lexical or grammatical phenomenon using corpus data is a drastically more labour-intensive undertaking than simply accessing and logging the analysis presented in a published grammar. Consider the ditransitive construction. A detailed corpus-based analysis of this construction alone could involve manual analysis of hundreds of examples, as indeed do the papers by Stefanowitsch and Gries that we cite on page 183. In a study spanning dozens of languages it is clearly not feasible to apply this level of detail to every single language – even if the necessary corpus resources are available for all the languages, which they almost certainly are not.

For this reason, the role of the detailed corpus analysis must usually be as a complement, rather than an alternative, to the typological survey. A detailed study on a small number of languages can be used to establish theoretical parameters for a wider typological study. For instance, Xiao and McEnery’s study of aspect in Chinese and English establishes a model of situation aspect which has at least a prima facie claim to be language-independent, given the genetic and geographical distance between the two languages involved. Further research could now be undertaken within a more traditionally typological paradigm to examine the extent to which this model actually does provide a reasonable framework for the analysis of aspect across all languages.

Moving to the final issue – the feasibility and desirability of collecting corpora of all the languages typologists are interested in, which effectively means all languages – this is clearly not feasible. Corpus collection for even a single language is a huge undertaking. Moreover, spoken corpus collection is orders of magnitude more difficult than written corpus collection – and, of course, many of the world’s languages lack a written form. Where it is possible, collection of naturalistic data that could be used within a corpus is an important part of language documentation. However, it is not always practicable. It would certainly be counter-productive to focus all effort on data collection and none on analysis. Without analytic work to demonstrate on an ongoing basis the value of the data in terms of theoretical and/or practical outcomes, the enthusiasm of the field for data collection would swiftly wane. So a balance needs to be struck; precisely what that balance should be is, however, a much trickier question!

Q7-2) Are corpus methods a better match for the functionalist-theoretical view of discourse or for the Critical Discourse Analysis view of discourse (or for neither)? Why, or why not?

Arguably, on a basic level, corpus methods are a much better match for the sense of discourse normally addressed in functionalist theoretical linguistics. That is because functional grammar’s use of the notion of discourse refers overwhelmingly to fairly short-range relationships within the surrounding co-text.

Corpus methods are usually a good tool for approaching such short-range relations, because we approach the text through a relatively narrow window – the width of the co-text in a concordance line, or of the span in the calculation of statistical collocates. We can go beyond these fairly narrow windows, but the basic corpus methods are at root local in nature. CDA is much more concerned with the text as a whole, with the arguments a text makes (explicitly or implicitly), and with the social contexts of text reception and production – all factors which corpus methods are not, in their usual form, ideally suited to address. So at this level CDA does not fit well with corpus methods. Where corpus methods come into their own within a CDA framework is not at the level of analysing the social context of particular texts, but rather as a means of operationalising two important CDA notions. The first is the idea that a discourse is not just a way of using language but also a way of thinking – patterns of usage on a large scale are in principle revealing of patterns of thinking that (a) are widespread and that (b) speakers are not consciously aware of. The second is the notion of the incremental effect of discourse, for which corpus methods are an ideal tool – especially collocational approaches, which can look at the incremental effects of a multitude of cases of association which are not within the scope of a single text.
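To make concrete what we mean by the “span” used in collocation analysis, the following is a minimal, purely illustrative sketch in Python of counting collocates within a fixed window around a node word. The function name, parameters and toy sentence are our own inventions for illustration, not taken from any particular corpus tool; a real collocation analysis would also normalise for frequency and apply an association measure such as mutual information or log-likelihood.

```python
from collections import Counter

def window_collocates(tokens, node, span=4):
    """Count co-occurring words within +/- `span` tokens of `node`.

    Illustrative sketch only: it shows how collocation counting is
    inherently local, operating over a fixed window of co-text.
    """
    counts = Counter()
    for i, token in enumerate(tokens):
        if token == node:
            left = max(0, i - span)
            right = min(len(tokens), i + span + 1)
            for j in range(left, right):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

# Toy example: collocates of 'cause' within a 4-word window
text = "smoking can cause serious illness and smoking may cause harm".split()
print(window_collocates(text, "cause").most_common(3))
```

The point of the sketch is simply that everything outside the window is invisible to the count: the method sees local association, not whole-text argumentation.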

In sum, then, corpus methods are fundamentally local, which is a better match to an approach to discourse which emphasises local relationships; but beyond the scope of the single text they also offer a way to approach global, incremental phenomena, which matches well a very high-level approach to discourses as ways of thinking about something.

Q7-3) Conceptual Metaphor Theory as necessary but not sufficient to explain metaphorical language in the corpus

Is it a problem for CMT that much of human metaphorical behaviour, seen on the large scale, is not explained solely by the processes that CMT theorises?

No – to be accepted, a theory does not need to explain everything; it just has to explain as much as possible, as parsimoniously as possible (relative to alternative theories), without being falsified by the evidence. A classic case of a full explanation requiring multiple theories is contemporary physics, where two unrelated theories (those of relativity and quantum mechanics) are required to explain the observable universe, with neither being sufficient on its own. Physicists actively seek a so-called “theory of everything”, that is, a single theory that will explain everything currently explained by relativity and quantum mechanics. If found, such a theory would be better than the existing pair of theories because, as an explanation for the same set of observations, one theory is more parsimonious than two. But that does not mean that there is something “wrong” with the existing theories; they are both excellent theories which explain a lot of data based on very few assumptions. Likewise, the fact that CMT does not explain everything about metaphorical language does not mean it must be rejected as “wrong”. Unless we assume that there must be a single “theory of everything” for language, which would be a deeply dubious assumption, there is in fact no especial reason to expect that CMT would explain everything about metaphorical language.

Can the following two ideas be reconciled? (i) it is often the case that the metaphorical usage of a particular word is limited to specific phraseological contexts that are presumably learnt as idiomatic chunks and not analysed by speakers; (ii) conceptual mappings between a source domain and a target domain occur at the conceptual level, not the linguistic level, and are productive, leading to many different metaphorical expressions.

We would say that it is likely they can be reconciled – but the reconciliation is not straightforward. The problem is that, if the metaphorical mapping takes place at the conceptual level, we might reasonably expect the mapping to emerge in all the ways in which the source concept can be put into words. Conversely, if a conceptual mapping emerges only in certain phraseologies that have been learnt as unanalysed chunks, then it is reasonable to say that this mapping has not actually taken place in current speakers’ minds. Exactly how the phraseological constraints on the expression of metaphor operate is, itself, an area for detailed theorisation; to our knowledge, no comprehensive account has yet been proposed.

Is CMT compatible with the neo-Firthian theory that is used to explain the context-dependent nature of metaphor – especially the stronger versions of that theory, as espoused by Louw and Teubert, for instance?

No, because CMT assumes cognitive entities – such as a mind in which concepts can exist, and mappings between those concepts – which are explicitly rejected by stronger forms of neo-Firthian theory. (Strands of neo-Firthian thought that do not utterly reject the possibility of conceptual entities are, by contrast, compatible with CMT.)

 
Tony McEnery and Andrew Hardie

Department of Linguistics and English Language, Lancaster University, United Kingdom