Since it was introduced in the 1800s, standardised testing in Australian schools has attracted controversy and divided opinion. In this series, we examine its pros and cons, including appropriate uses for standardised tests and which students are disadvantaged by them.
It is generally reported that rural students are up to one and a half years behind their metropolitan peers in the National Assessment Program – Literacy and Numeracy (NAPLAN) and Programme for International Student Assessment (PISA) tests. They are also less likely to complete year 12, and half as likely to go to university.
However, there are two key problems with how these determinations are arrived at: first, cultural bias in the tests; and second, the problem of averages.
Cultural bias
If you ask a teacher in a rural school about the gap in achievement in NAPLAN, they tend to roll their eyes and say something like:
“Is it any surprise that our kids don’t do as well? A lot of the questions don’t have any relevance to their real lives.”
Such questions include a literacy task asking a student to write a recount of a day at the beach – when they haven’t been to one – or a numeracy task using a train timetable – which they don’t use.
ACARA (the Australian Curriculum, Assessment and Reporting Authority) would likely respond that timetables are in the curriculum, and that it is therefore reasonable to test students on them. However, the fact that timetables are in the curriculum doesn’t mean the curriculum is fair.
That is the underlying issue with standardised tests – they need a standard curriculum. We might want to benchmark students’ literacy and numeracy, but to do that we need to ask questions, and questions are always embedded in culture. The question is – whose culture?
The Australian curriculum has been criticised as being “metro-centric”, in line with teachers’ comments about the tests having no bearing on their students’ lives. While we tend to accept cultural differences for students of Aboriginal and Torres Strait Islander descent, and for students from language backgrounds other than English, we often don’t consider rural kids to be different.
However, the international field of rural literacies has shown us that rural people use different literacy constructions. In spatial reasoning, a key numeracy skill, we know that rural people use different spatial dimensions when drawing maps – not like the city blocks common in NAPLAN tests.
If we continue to ignore these differences in the construction of standardised tests, we will continue to produce disadvantage for rural students.
The problem of averages
To compare results in standardised testing, there first needs to be a “standard”. That standard, and average achievement, is skewed in a country like Australia, where nearly 70% of people live in capital cities. Metropolitan students skew the data towards their own norm, reinforcing the cultural relevance (or irrelevance, in the bush) of the tests and curriculum, and making these standards seem normal and just.
Typically, results are reported for “metropolitan” and “rural” students, with achievement in one compared to the other. This approach, however, collapses a lot of difference and creates much of the problem. When we break down NAPLAN results by the geographic classifications used by the Australian Bureau of Statistics (major city, inner regional, outer regional, remote, very remote) and control for socioeconomic background and Indigenous status, we get a different picture: the negative associations are with the areas surrounding large cities, and results actually improve the further one moves from the city, until we reach very remote areas.
The problem is numbers and averages, and how we talk about places as “the same”. There are great socioeconomic and local environmental differences between, for instance, Port Macquarie and Dubbo.
We’re still asking the wrong questions
This year’s NAPLAN results have revealed that student performance has improved only slightly since the tests were introduced a decade ago. While we await the final report, previous data have shown the gap between top and bottom, and between rural and city students, has not narrowed significantly either. So all this money, and all the test anxiety experienced by children, has only reinforced what 40 years of educational sociology already told us: culture matters in education.
In the absence of more sophisticated ways of measuring and reporting achievement, we fall back on old, failed methods. All NAPLAN has done is reinforce a social gradient of advantage and disadvantage, and seemingly legitimise unequal outcomes. The process of schooling is deemed to be neutral, when in fact that process is the key issue.
Is it any surprise rural students seem to perform worse when, to succeed, they have to learn about a foreign place? Try finding a science text with examples from the country, or novels about rural Australia (the real ones, not the romantic ones). As a result, students have to mentally leave their rural place every day and imagine themselves in another world.
Standardised testing relies on getting the underlying curriculum right. If that curriculum continues to legitimise the marginalisation of people or groups, we cannot say we got it right.