Press Releases
22 April 2024
English language tests for entry to UK higher education: does it matter which tests universities accept?
The short answer is: Yes, it does matter which tests universities accept and at what cut scores (or minimum scores required), at both the overall and the individual component level (i.e. writing, speaking, etc.). This is because universities have a duty of care to the admissions standards of the institution. But, in many respects more importantly, they have a duty of care to the students they admit. They must pay due care and attention to the selection of tests they accept, and to the levels at which they set their entry standards, so that the university can be confident that students achieving these scores will be able to successfully meet the challenges of their chosen degree courses – or, at the very least, so that the university knows what additional, scaffolded English for Academic Purposes (EAP) support those students will need on arrival, and ensures that this is provided.
The longer answer is still: Yes, it does matter, but unfortunately, it is not as straightforward as it should be.
Every few years the UK government invites English language test providers to apply to have their tests accredited as SELTs (Secure English Language Tests). Accreditation is a tough process, requiring assurances on test security, accessibility, availability, and validity, to name but a few areas. All of these are good things, but this accreditation alone does not necessarily translate into a reliably curated list for UK higher education admissions purposes. Indeed, as Highly Trusted Sponsors, universities can accept tests that are not on the SELT list, but it then falls to them to ensure that any additional tests they choose to accept meet the minimum requirements for a Student Visa.
This is where it all gets a little more interesting, as there is no independent body that regulates or oversees the expanding range of English language tests on the market. When making their decisions, universities have to scrutinise the materials that the test providers themselves issue on the validity, reliability, and security of their tests, and on how they benchmark themselves against other such tests on the market. Sometimes this is backed up by rigorous research, as in the cross-mapping carried out by two of the most well-known tests on the market – IELTS (International English Language Testing System) and TOEFL (Test of English as a Foreign Language) – but for the newer players, or for those with smaller research budgets, universities have to ascertain for themselves whether the claims that the test providers make as to validity, reliability, and comparability are accurate.
Selecting which tests to accept, though, is not as straightforward as it might otherwise seem. At the end of the day, test providers are commercial entities, who, like universities, have bottom lines to consider and so what might be spun in the glossy advertising for their test might not, when the details are scrutinised, translate into a test that’s a valid and reliable indicator of an applicant’s academic literacy proficiency in English.
This situation became all the more acute during the pandemic, which, as with all other aspects of life, forced test providers to swiftly pivot to online tests due to the closure of test centres. Many universities were consequently forced into accepting tests that, under normal circumstances, they would perhaps not have considered. A large number of these universities have since reversed their decisions and equally, there seems to be a preference for a return to tests taken in test centres, both on the part of the universities as well as the test takers themselves.
As life adapts to the post-pandemic new normal, the time is opportune both to reflect on what the pandemic has taught us, negative and positive, about the potential for online English language testing, and, more far-reachingly, to investigate the strengths and weaknesses of the range of tests on the market, both old and new.
At the end of the day, universities are, of course, free to stipulate which tests their applicants can take and the levels required to evidence that they meet the minimum academic literacy standards in English for their chosen degree programme. But this should be an informed decision. Universities need to know not only about the security, availability, and reliability of the available tests, but, far more importantly, about the validity of each test as a predictor of an applicant’s readiness, in terms of academic literacy in English, for their chosen degree course. The university has a duty of care to do so. Accepting tests which are cheap, quick, and of little such validity may be preferred by some test-takers, but in the long run, as numerous universities found out during the pandemic, it is a false economy. Many universities that accepted such tests subsequently reported that many of the admitted students did not have the necessary linguistic wherewithal to study through the medium of English – despite having test scores that providers asserted would evidence this.
This particular Gordian Knot is the focus of a research collaboration between the British Council, Cambridge University Press and Assessment, and the universities of Dundee and Cambridge, which came about due to shared concerns based on alarming stories and first-hand witnessing of serious issues in post-pandemic international student admissions. Ultimately, what we as a group are looking to do is to publish a guidance document and run a series of workshops for higher education institutions on how to choose a suitable proficiency test from those currently on offer, including what they assess, what they don’t assess, what levels might be best suited for undergraduate and/or postgraduate entry, their global availability, their accessibility, and of course, their security. Higher education institutions will, of course, make their own decisions as to which tests they accept, why, and at which levels – but with many more players coming onto the market and many more online opportunities having been opened up due to the necessitated move to online testing during the pandemic, what we are aiming to do with this research study is to help make these decisions better informed.
The research is a mixed-methods study, investigating different tests at different institutions, perceptions of university personnel towards various language tests, and transparency around decision-making for test acceptance. This has involved: 1) an initial desk-based study on the range of English language tests higher education institutions accept for admission onto both undergraduate and postgraduate programmes, at what levels, as well as any recent changes there may have been in accepted tests; 2) a survey which looked to explore the decisions as to which tests are accepted and why, from the perspectives of recruitment personnel, admissions staff, English for Academic Purposes support staff, and academics. Participants were asked to comment on what they saw as the strengths and weaknesses of the tests they accepted, and those in direct contact with international students – that is, academics and EAP staff – were asked what they see as the main challenges facing international students and, by extension, what skills students should ideally come prepared with to meet these challenges. We are now in the process of carrying out focus groups with international students to gain insights into their experiences, both of the admissions testing requirements and of how they are navigating studying in the UK.
Preliminary results in the UK context indicate that processes around test acceptance are not uniform across institutions, and that there are often competing tensions within universities – between recruitment and academics, for instance – as to which tests to accept and at what levels. For example, 63% of academics who responded said that they were Extremely or Somewhat dissatisfied with the range of tests that their university accepts, whereas 67% of those responding who worked in recruitment were Somewhat satisfied. Additionally, almost half of the academics who replied noted that their institutions do not have dedicated academic English language and literacy support sections. According to responses from those universities that do, the majority of English for Academic Purposes (EAP) staff are involved in making suggestions or recommendations with respect to test acceptance, but these staff noted that despite their English language assessment expertise within their institutions, they were only one of many stakeholders consulted and their views were sometimes overruled. Some even noted that they were ignored during the pandemic and thus had no involvement in their institution’s current list of accepted tests.
The initial survey data has also shown that whilst for the most part higher education institutions are somewhat satisfied with the information available to them on current proficiency tests, this satisfaction rests mainly with the established players in the market, such as IELTS and TOEFL – namely, those test providers which regularly publish research on the validity and reliability of their tests. 20% of respondents commented specifically on the fact that they need more information on test comparability, as there is some uncertainty as to the veracity of some of the claims made by newer tests on the market, especially when these claims are apparently based on their own cross-mapping research to IELTS and TOEFL. In one instance, for example, one of the newer test providers claimed on its website to have carried out research cross-mapping its test to IELTS; when probed further by a university, it turned out that this wasn’t the case at all – the provider had carried out a cross-mapping to the Common European Framework of Reference (CEFR) and then cross-mapped from this to IELTS. Such questionable research practices wouldn’t be accepted from novice postgraduates, so it’s all the more worrying that high stakes English language test providers are seemingly allowed to get away with them.
When it comes to changes to the range of tests accepted during the pandemic, 64% of responding universities (n = 67) said that they had expanded what they accepted – but of this 64%, 54% had since reversed the decision. The reasons given ranged from security concerns, through validity worries and apprehension about online testing, to perceptions that students who came in with some of the newly accepted online tests were not adequately prepared for the tertiary learning environment. When asked about their perceptions of the English levels of the students they taught, the academics who responded considered only 9% to be ‘Good’; 47% to be ‘Mixed/Varied’; and concerningly, 44% said that the level was ‘Poor’.
In probing these themes in the follow-up interviews, what has become evident is that in light of the actual English levels of the international students in the classroom, irrespective of their test scores, there is a feeling amongst some academics that they need to ‘dumb down the curriculum’. A clear message also emerged that language proficiency alone is insufficient: academic skills in English are vital, and several respondents noted that, anecdotally, students who have attended pre-sessional academic English literacy courses fare far better. In terms of test providers, proficiency tests from the big players, such as IELTS, TOEFL, and to a lesser extent, Pearson PTE and Trinity ISE, are generally trusted as reliable tests – but it was also noted that such tests may need to adapt to keep up, whilst not engaging in a race to the bottom, if they are to remain trusted.
There are significant costs to both admitting institutions and the students themselves when students are admitted with lower levels of language proficiency than are required to successfully engage with their chosen course. As mentioned at the start, universities have a duty of care, both to their institution and, primarily, to the international students that they are looking to attract, when selecting the English language tests that they accept. Greater language assessment literacy (LAL) is therefore required of higher education institutions, both in making such decisions and in understanding what additional support international students may need on arrival in order to successfully acculturate into the English-speaking academic environment.
The aim of this research collaboration is to address this gap by developing a framework for universities to evaluate the tests they accept, so that they can make informed decisions which are in the best interests of their own admissions standards – but, arguably more importantly, in the best interests of the international students they admit. Many of the claims being made by some of the newer test providers do not meet the standards of academic integrity that are the cornerstone of university teaching and research – so it is high time that these were scrutinised.
Conferences
Past
- Symposium in Dubai, which included representatives from recruitment, admissions, English for Academic Purposes staff, and academics from around the world.
- Delivered our preliminary findings at the British Council research summit on The Future of English: Global Perspectives.
- New Directions East Asia Conference in Vietnam (October 2023)
Upcoming
- Presentation at Language Testing Research Colloquium (Innsbruck, July 2024)
- Abstract to be submitted for British Council New Directions (Sub Saharan Africa, September 2024)
- Proposal submitted for Canadian Bureau for International Education Conference (Vancouver, September 2024)
- Proposal submitted for Enhancing Student Learning Through Innovative Scholarship (St. Andrews, July 2024)