Separate test instruments need to be designed for pilots and air traffic controllers.
In radiotelephony communications, although pilots and air traffic controllers interact with each other over the radio, they perform different tasks, have distinct communicative purposes and are exposed to different constraints and cognitive workloads. They do not share the same perspectives and mental processes. For example, pilots mostly need to listen to ATC instructions, respond to queries, describe problems and interact with different ATC units through all phases of flight. Controllers, on the other hand, are required to listen to and communicate with multiple pilots on the same frequency as traffic passes through their area of responsibility. Controllers mostly need to monitor traffic, issue instructions, understand pilots’ problems in non-routine situations, and provide information, advice and solutions.
These different job functions shape the kind of language skills and knowledge needed by each profession for communication. Put simply, the type of language pilots are required to produce is what controllers need to understand and vice versa. This has obvious implications for test content and test design.
Therefore, separate test instruments need to be designed for pilots and air traffic controllers to reflect their distinct language, context and communication needs.
Clearly defining the purpose of a test and its target population is an important initial step in test instrument design, as these decisions will affect the whole test development process.
In Language for Specific Purposes (LSP) testing, a well-conducted needs analysis of the target language use (TLU) domain reveals the topics, contexts, task characteristics and language that test-takers need to produce or understand, and the strategies and cognitive processes they engage with.
What distinguishes an LSP test from a more general purpose test is the specificity of the test and the degree of authenticity of test tasks.
Test-takers’ performance in these tasks can then be considered as evidence of language ability in the specific purpose domain – that is, in real world communication contexts.
What does this mean for test design?
The structure of test instruments for pilots and air traffic controllers may look similar; however, the content and task requirements need to differ, reflecting the language needs and communicative contexts associated with each profession.
ICAO Statements & Remarks
The following statements from ICAO Document 9835 (2nd Edition, 2010) are related to this issue.
The purpose of a language proficiency test is to assess test-takers’ use of language based on their performance in an artificial situation in order to make generalizations about their ability to use language in future real-life situations. Because of the high stakes involved, pilots and air traffic controllers deserve to be tested in a context similar to that in which they work. Test content should, therefore, be relevant to their work roles.
A definition of test purpose that describes the aims of the test and the target population should be accessible to all decision-makers.
— What it means. Different tests have different purposes (as described in 6.2.5) and different target populations. If an existing test is being considered, it is important that the organization offering the test clearly describes the purpose of the test and the population of test-takers for whom the test was developed.
— Why it is important. A clear definition of test purpose and target population is a necessary starting point for evaluating the appropriateness of a test. The purpose and target population of a planned test influence the process of test development and test administration. For example, a test designed to evaluate the proficiency of ab initio pilots may be very different from a test developed for experienced or professional pilots; likewise, a test designed to measure pilots’ or controllers’ progress during a training programme may be inappropriate as a proficiency test for licensing purposes.
The test should be specific to aviation operations.
A further step toward providing test-takers with a familiar aviation-related context would be to customize the tests for controllers or pilots. Thus, controllers would have the possibility of taking tests using or referring to a tower, approach or en-route environment; similarly, pilots would be able to take tests suited to their aircraft licenses (ATPL, PPL, CPL). These should be seen as adaptations in the interest of the comfort of the test-taker, not as specialized tests of distinct varieties of language proficiency.
3.2.2. The context of the communication includes features such as:
a) domains (personal, occupational, etc.);
b) situations (physical location, institutional conventions, etc.);
c) conditions and constraints (acoustic interference, relative social status of speakers, time pressures, etc.);
d) mental contexts of the user and of the interlocutor (i.e. filtering of the external context through different perceptual mechanisms);
e) language activities (receptive/productive/interactive/mediating); and
f) texts (spoken/written).
3.2.3. The tasks and purposes of the users determine:
a) communication themes or topics;
b) dominant speech acts or language functions to be understood or produced;
c) dominant interactive schemata or speech-act sequences and exchange structures;
d) dominant strategies (e.g. interaction: turn-taking, cooperating, communication repair, etc.).
3.3.2. While pilots and controllers are communication partners, they approach the task from different perspectives, and therefore their communication differs in purpose and standpoint. Controllers, with an overall view of traffic within an airspace, are concerned with ensuring the safety of all aircraft in that airspace, with additional secondary consideration to the efficient management of their own workload. Meanwhile, flight crews are focused on the progress of their individual flight, with additional secondary consideration given to the efficiency and expeditiousness of that flight. This divergence of standpoint and purpose causes a certain degree of negotiation in radiotelephony communications and is one of the reasons why plain language is needed.
3.4.8. Due to the different roles of the pilot and controller within the overall context of their activities, some functions are typically uttered exclusively by one or the other. These functions are marked (P) or (C) in the checklist in Appendix B, Part I. Other functions, marked (C/P), may be uttered by either speaker in the course of their exchanges. In the training context, this distinction will determine whether given functions need to be learned for comprehension, for production or for both comprehension and production.
Why this issue is important
Both pilots and air traffic controllers communicate over the radio using ICAO standardised phraseology and, whenever necessary, plain aviation language. Test instruments designed to assess the language skills of these professionals must mirror, as closely as possible, the communicative tasks required of and performed by each of them in real life.
A number of features related to the specific characteristics of pilots’ and controllers’ TLU situations highlight the need for separate test instruments:
- Operational context – pilots and controllers are separated in space and physically located in distinct places, interacting with different types of equipment and software, and subject to different constraints and interferences;
- Communicative purpose – controllers coordinate and communicate with multiple aircraft at distinct phases of flight in order to help them complete their flight plans, while at the same time accommodating traffic safely and efficiently. Pilots, on the other hand, perform a number of demanding tasks simultaneously, deal with a lot of information and must manage communication effectively with different controllers in different units and sectors as the flight progresses;
- Language functions – the intention of a speaker in producing a message is represented by a communicative language function. Due to the distinct roles of pilots and controllers within the overall context of radiotelephony communications, tasks that are specific to the job of one or the other require distinct language functions. For example, air traffic controllers will normally utter exclusively the following functions: give an order, give an amended order, give alternative orders, cancel an order, give permission or approval, deny permission or approval, forbid, etc. Conversely, pilots will usually announce compliance with an order, announce non-compliance with an order, request advice, request permission or approval, state preferences, request instructions on how to do something, etc. Still other functions may be uttered by both, such as: describe a state, request repetition, correct a misunderstanding, give clarification, etc. For this reason, the assessment of pilots and controllers should cover the specific language functions and contexts each of them needs to produce, to comprehend, or to use for both production and comprehension in the TLU domain;
- Test-takers’ background knowledge of the target domain – pilots and air traffic controllers each possess specific purpose background knowledge (i.e., frames of reference based on training and experience) associated with their particular roles, which shapes how they communicate in radiotelephony communication contexts.
These differences need to be reflected in the design of test tasks for each profession by paying attention to:
- Characteristics of the input: the prompt (i.e. contextual information about setting, participants, purpose, content, language, problem to be addressed) and the type of input (i.e. visual and/or aural material that the test-taker needs to process in performing the task) need to be selected with the appropriate level of authenticity and specificity in order to be engaging to test-takers and to establish the specific purpose context for pilots and air traffic controllers;
- Characteristics of the expected response: what test-takers are expected to produce and how they respond for assessment purposes. The response to a prompt may be spoken or written, recorded in audio and/or video, or typed on a computer. It may require selecting among given options or producing a limited or extended text, and so on. The test task format and content need to take levels of authenticity into consideration, and the response should elicit aspects of the specific purpose language ability as well as reflect the type, range and use of language the test aims to measure.
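The two sets of task characteristics above can be sketched as a simple data structure. The following Python sketch is purely illustrative: the names (`TestTask`, `Role`, `TaskInput`, etc.) are hypothetical, not drawn from any testing standard, and serve only to show how a test specification might record input and expected-response characteristics per profession.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PILOT = "pilot"
    CONTROLLER = "controller"

@dataclass
class TaskInput:
    # The prompt: contextual information about setting, participants,
    # purpose, content, language and the problem to be addressed.
    prompt: str
    # The material the test-taker must process: "aural", "visual" or "aural+visual".
    input_type: str

@dataclass
class ExpectedResponse:
    mode: str    # "spoken" or "written"
    format: str  # "selected" (choice of options), "limited" or "extended" production

@dataclass
class TestTask:
    role: Role                  # profession the task is designed for
    task_input: TaskInput
    response: ExpectedResponse
    # Communicative language functions the task is meant to elicit.
    functions: list = field(default_factory=list)

# Example: a listening/response task for pilots, where the input is aural
# (a controller's transmission) and the expected response is a limited
# spoken production.
pilot_task = TestTask(
    role=Role.PILOT,
    task_input=TaskInput(
        prompt="Approach control issues an amended descent clearance",
        input_type="aural",
    ),
    response=ExpectedResponse(mode="spoken", format="limited"),
    functions=["announce compliance with an order", "request clarification"],
)
```

A parallel `TestTask` for controllers would keep the same structure but reverse the perspective: the aural input would be a pilot's transmission, and the elicited functions would be controller-specific ones such as giving or amending an order.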
Best Practice Options
Test developers need to consider the following points when designing test instruments.
- Define the target population specifically, identifying their role as either pilots or air traffic controllers, as the basis for designing the test.
- Ensure the features of the communication associated with the TLU for the target population can be identified and reflected in the content and task types. These features need to be specified and detailed in a test specifications document.
- When it comes to test content, some material (for example, audio content used in listening test tasks) may be applicable to both pilots and air traffic controllers. However, the test task requirements (e.g. the questions in a listening test) should address the requirements of each perspective separately. For example, if an audio recording of a pilot and controller communicating in a radiotelephony context is included as input in a listening part of the test, comprehension questions in a test for pilots need to focus mainly on what the air traffic controller says. Conversely, in a listening test designed for air traffic controllers, the focus should be on assessing what the pilots say in the radiotelephony communications. This ensures the test reflects the real-world communication needs of both professions separately.
- It may be necessary to provide different forms of the test specific to the roles of different professionals within a target population. For example, if a test is developed for a population of air traffic controllers comprising area, approach and aerodrome controllers, test tasks that assess communication skills in radiotelephony situations may need to be developed separately for each of these three units. It would not be valid or fair to expect an area controller to engage in a role-play test task requiring them to take on the role of an aerodrome controller, because they may not be familiar or experienced with the corresponding communication contexts.
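One way to operationalise role-specific forms is to route each test-taker to the form matching their unit, and to fail loudly for an unknown unit rather than fall back to a form whose role-play contexts the test-taker may never have worked in. A minimal sketch, assuming hypothetical form identifiers:

```python
# Hypothetical mapping from controller unit to test form; the form names
# are illustrative, not drawn from any real test.
TEST_FORMS = {
    "area": "radiotelephony_form_area",
    "approach": "radiotelephony_form_approach",
    "aerodrome": "radiotelephony_form_aerodrome",
}

def select_form(unit: str) -> str:
    """Return the test form for a controller's unit.

    Raises ValueError for an unknown unit instead of silently assigning
    a form designed for a different communication context.
    """
    try:
        return TEST_FORMS[unit]
    except KeyError:
        raise ValueError(f"No test form defined for unit: {unit!r}") from None
```

The explicit error is the point of the sketch: assigning an unmatched form by default would reintroduce exactly the validity and fairness problem described above.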
External References
The following references are provided in support of the guidance and best practice options above.
“It is not enough merely to give test-takers topics relevant to the field they are studying or working in: the material the test is based on must engage test-takers in a task in which both language ability and knowledge of the field interact with the test content in a way which is similar to the target language use situation. The test task, in other words, must be authentic for it to represent a specific purpose field in any measurable way.” (p.6)
“Authenticity does not lie in the mere simulation of real-life texts or tasks, but rather in the interaction between the characteristics of such texts and tasks and the language ability and content knowledge of the test-takers. … authenticity is only achieved when the properties of the communicative situation established by the test instructions, prompts and texts is sufficiently well-defined as to engage the test-takers’ specific purpose language ability.” (p. 22)
Douglas, D. (2000). Assessing languages for specific purposes. Cambridge: Cambridge University Press.
The author highlights how the language needs of controllers and pilots differ and how this is shaped by their job roles:
“Approach controllers must ensure that departing aircraft are guided from the runway up into controlled airspace and approaching aircraft are guided down to the runway safely. At busy airports, approach controllers communicate concurrently with multiple aircraft at differing stages of flight. The flow of information is primarily from the controller to the pilot. Because of the typically heavy workload, controllers provide multiple pieces of information in a single turn in order to balance accuracy and efficiency.” (p. 232)
“However, radio communication is only one small part of the work of flight crews in this sociotechnical system. The ability of flight crews to understand and manage radio communication is directly related to the cognitive load imposed on them by other tasks. Farris et al. (2008) describe the multiple concurrent tasks pilots perform that require memory and processing demands in terms of cognitive workload. They suggest that high cognitive workload may interact with language proficiency to affect the ability of flight crews to adequately interact in radio communication with controllers.” (p.235)
Moder, C. L. (2013). Aviation English. In B. Paltridge & S. Starfield (Eds.), The handbook of English for Specific Purposes (pp. 227-242). Chichester, UK: John Wiley & Sons.
“It is important to note that tests are not either general purpose or specific purpose – all tests are developed for some purpose – but there is a continuum of specificity from very general to very specific, and a given test may fall at any point at the continuum” (p. 45).
“Specificity of content refers to factors which affect the level of specificity of a written or spoken text in an LSP test. Clearly there are a number of such factors, including the amount of field specific vocabulary, the degree to which the specific purpose vocabulary was explained or not, the rhetorical functions of various sections of the text, and the extent to which comprehension or production of the text required knowledge of subject specific concepts” (p. 46).
“As a way out of the dilemma of never-ending specificity on the one hand and non-generalizability on the other, we can make use, I suggest, of the context and task characteristics referred to above, which are drawn from an analysis of a target specific purpose language use situation, and which will allow us to make inferences about language ability in specific purpose domains that share similar characteristics” (p. 49).
Douglas, D. (2001). Three problems in testing language for specific purposes: Authenticity, specificity and inseparability. In C. Elder, A. Brown, E. Grove, K. Hill, N. Iwashita, T. Lumley, T. McNamara & K. O’Loughlin (Eds.), Experimenting with uncertainty: Essays in honour of Alan Davies (pp. 45-52). Cambridge: Cambridge University Press.
“ESP assessment instruments are usually defined fairly narrowly to reflect a specific area of language use such as English for academic writing, English for nursing, Aviation English, or Business English, for example. Thus, ESP tests are based on our understanding of three qualities of specific purpose language: first, that language use varies with context, second, that specific purpose language is precise, and third that there is an interaction between specific purpose language and specific purpose background knowledge” (p. 368).
Douglas, D. (2013). ESP and assessment. In B. Paltridge & S. Starfield (Eds.), The handbook of English for Specific Purposes (pp. 367-383). Chichester, UK: John Wiley & Sons.
“The extent to which LSP assessment developers include the test-taker’s background knowledge of the target domain in their construct definition is a key element of the resulting assessment’s interactional authenticity, since it is this aspect of a test task that makes it specific in the first place” (p. 74).
“In LSP assessment, the test developer should identify the range of functions expected in performance within a specific domain, ensure that a representative sample of these are elicited by the proposed tasks at the design and specification phase of test development, and then check from actual performances whether those predictions have been supported” (p. 82).
O’Sullivan, B. (2012). Assessment issues in languages for specific purposes. The Modern Language Journal, 96, 71-88.