Wednesday, November 29, 2017

As students near graduation, career and technical education provides a boost

For the past decade or so, every American president has sought to use career and technical education – or CTE – as a way to boost achievement and prepare students for the jobs of tomorrow.

When the Bush administration signed into law the existing federal CTE policy in 2006, the goal was to increase “focus on the academic achievement of career and technical education students.”

Under the Obama administration, career and technical education was seen as a way to “prepare all students, regardless of their backgrounds or circumstances, for further education and cutting-edge careers.”

The current administration has taken the same stance – with the president stating in April that “vocational education is the way of the future.”

Academic research shows that taking CTE classes can benefit students by improving their odds of graduation, boosting their chances of participating in advanced math and science coursework, and increasing their earnings immediately after high school.

However, it’s not just CTE implemented in any old way that has proven beneficial. Rather, as a current education policy Ph.D. student who focuses on college and career readiness, I have discovered that the timing of CTE matters when it comes to high school completion and dropout prevention.

This conclusion is based on a research study that associate professor and education researcher Michael Gottfried and I conducted this past year, examining the impact that CTE has during different years in high school.

Later years really count

To conduct our study, we used a nationally representative data set that included more than 11,000 students in public schools around the country.

We found that CTE taken during the freshman and sophomore years of high school was not related to a student’s chances of graduating from high school, nor did it lower his or her chances of dropping out. However, CTE in the junior and senior years related to both increased chances of graduating on time and decreased chances of dropping out.

Specifically, we found that taking CTE classes during senior year was associated with a 2.1 percent improved chance of on-time graduation and a 1.8 percent lower chance of dropping out. While these percentages may seem small, it’s important to keep in mind that this is for each CTE unit completed. In other words, a student taking two CTE units in his or her senior year would expect a 4.2 percent increase in chances of graduating on time. This CTE bump is similar to the benefit from participating in an academic club, which can reduce the chances of dropout by about 1.8 percent.

So why does CTE have a relationship to graduation and dropout for juniors and seniors, but not for freshmen and sophomores?

Factors for success

Based on previous research, three potential factors explain how CTE may relate to high school graduation, as well as other positive outcomes for students. These three factors are skill-building, engagement and relevance.

During the early years of high school, participation in CTE courses may provide the building blocks for later learning by teaching college and career readiness skills and promoting engagement. Ultimately, we found that these early courses did not have a direct relationship with on-time graduation. However, students who take CTE early in high school are more likely to take CTE later in high school. Building the early skills matters, just not in direct relation to graduation or dropping out.

Taking CTE courses later in high school may also connect more closely with school relevance. As students near the end of high school, they begin thinking about what’s next in life. For some, that future includes college; others might go directly into a career. Whichever path a student is considering, CTE courses later in high school can help show that high school is an important place to prepare for the next step.

Looking forward

We were encouraged by our finding that CTE is related to improved graduation rates and lower dropout rates, and we believe there are some important takeaways from these findings.

First, the timing of CTE matters. This should be taken into consideration in designing both high school CTE programs and CTE policies. Second, the results support increasing CTE in high school to encourage engagement and relevance. Finally, our findings support the renewal of current federal CTE policy through the Strengthening Career and Technical Education for the 21st Century Act, which recently passed through the House of Representatives and went on to the Senate. The new version of the policy tries to improve the alignment between employer needs and CTE programs. It also seeks to encourage cooperation between stakeholders and increase CTE participation for traditionally underrepresented groups, such as students with disabilities and minority students. Our findings support the reauthorization of the policy by showing that the existing CTE policy has succeeded in promoting the first step in college and career readiness – high school graduation.

Supporting part-time and online learners is key to reducing university dropout rates

The most recent statistics show first-year attrition rates in Australian universities are at 15%. This has prompted the Minister for Education and Training, Simon Birmingham, to say universities “need to be taking responsibility for the students they enrol.”

Attrition does not mean dropping out. It just means the student did not continue their study in the following year. For example, attrition includes students who suspend their studies due to personal circumstances but return to study in a later year. However, the evidence is that most students who discontinue their studies do not end up completing them.


Read more: Which students are most likely to drop out of university?


How does Australia compare to other countries?

To analyse comparative performance, we looked at attrition rates in a number of countries, as well as regions within some countries. As with Australia, most countries focus their attention on nationals (that is, not international students) entering university for the first time.

Australia’s national attrition rate was 14.97%, with institutions ranging as low as 3.92% and as high as 38%. The best-performing state was New South Wales and the worst was Tasmania. Of 39 institutions, 12 had an attrition rate over 20%.



England, Wales, Northern Ireland, Scotland and Ireland all performed better than Australia.



Aotearoa New Zealand had an overall attrition rate of 16%, slightly higher than Australia’s. This was also the case with US public higher education institutions offering four-year degrees, where the attrition rate was 17.7%.

Universities in Ontario, Canada’s most populous province with about 40% of the country’s population, had an average attrition rate of 12.8% for full-time students. The overall attrition rate (which includes part-time students) would likely be even closer to Australia’s, though we can’t say this for certain.

What causes student attrition?

Many things affect student attrition, including age, socio-economic status, location and time on campus. Our study focused on three elements that have the potential to contribute to higher rates of attrition. The first is above-average student-to-staff ratios, as an indicator of student-lecturer interaction.

The second is above-average ratios of part-time enrolments, suggesting students are juggling study with work and personal commitments. The third is above-average ratios of external enrolments (such as students studying online), since these students have little or no access to the majority of on-campus support services.


Read more: Better academic support for students may help lower university attrition rates


The issues of part-time enrolments and external enrolments are closely related, as most students studying externally also study part-time.

We searched in the official higher education statistics for relationships between attrition and these three elements. That is, were attrition rates higher for universities that had more students per lecturer, or higher part-time enrolments, or more students studying externally?

We found some links between attrition rates and student-to-staff ratios. Some 15 universities had higher than average attrition rates and also had higher than average student-to-staff ratios.

And nine universities that had better than average student-to-staff ratios also had better than average attrition rates.

But that still meant 15 universities bucked the trend. They either had better attrition despite having worse student-to-staff ratios, or the opposite.

There was a much stronger relationship between attrition rates and external enrolment ratios. Some 20 universities had below average attrition and external enrolment rates, and ten had above average attrition and external enrolment rates.

The correlation was even stronger between attrition rates and part-time enrolments, with 31 universities displaying a direct relationship between the two factors.
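In effect, this is a simple cross-tabulation: classify each university as above or below the national average on attrition and on the enrolment measure of interest, then count how many fall on the same side of both averages. The sketch below illustrates that tabulation; the university names and figures are invented for demonstration, not drawn from the official statistics used in the study.

```python
# Illustrative sketch of the above/below-average cross-tabulation described
# in the article. All names and figures below are made up for demonstration;
# the study itself used official Australian higher education statistics.

universities = {
    # name: (attrition rate %, part-time enrolment ratio %)
    "Uni A": (10.2, 18.0),
    "Uni B": (22.5, 45.0),
    "Uni C": (14.0, 30.0),
    "Uni D": (30.1, 55.0),
    "Uni E": (8.5, 12.0),
}

def mean(values):
    return sum(values) / len(values)

avg_attrition = mean([a for a, _ in universities.values()])
avg_part_time = mean([p for _, p in universities.values()])

# Count universities where attrition and part-time enrolment sit on the
# same side of the average (the "direct relationship" the article reports).
same_side = 0
for name, (attrition, part_time) in universities.items():
    above_attrition = attrition > avg_attrition
    above_part_time = part_time > avg_part_time
    if above_attrition == above_part_time:
        same_side += 1
    print(f"{name}: attrition {'above' if above_attrition else 'below'} average, "
          f"part-time enrolment {'above' if above_part_time else 'below'} average")

print(f"{same_side} of {len(universities)} universities show a direct relationship")
```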

Looking at our international comparisons, we saw similar trends. The overall attrition rate in the UK was 9.8%. But this hid an attrition rate of 35.5% for part-time students. For those studying through the UK’s Open University (so, externally), the attrition rate was even higher, at 43.5%.

In the US, the attrition rate for part-time students was 37.2%. In New Zealand, it was 26%.

What type of higher education system do we want?

Students who don’t complete their courses are not only missing out on a personal opportunity; there is also lost potential to society. Students and universities must aim to further reduce attrition. Universities are changing their admissions, teaching and student support to increase their students’ success. But completion rates also reflect what kind of higher education system we want.

That said, Australia’s attrition rates are not unusually high by these international comparisons. We should accept a modest level of attrition so we can keep providing opportunities for part-time students and others who don’t fit the conventional mould. Students studying part-time, especially those studying externally, need specialised support to help them balance their studies with their work and life commitments. But they don’t need to see their opportunities for flexible study reduced, just so an institution can improve its retention rate.

How school has been used to control sovereignty and self-determination for Indigenous peoples

Tuesday, November 28, 2017

Want to solve our STEM skills problem? Bring in the professionals

How to encourage young people to make a habit of helping others

The idea of helping others, also known as social action, service or volunteering, is often held up as a virtue of national importance to British identity. It is at the heart of treasured programmes such as the Scouts, the Guides, or the Duke of Edinburgh’s Award, as well as new bodies such as the National Citizen Service (NCS) and the #iwill campaign.

But how do you encourage young people to form a habit of helping others that lasts throughout their lives? In new research, my colleagues and I found that the younger they started, the more likely they were to continue.

Helping others often brings benefits for individuals as well as broader society. It can develop desirable character qualities and life skills in the young people who take part. Research also shows that giving can often have a positive impact on well-being and mental health.

In a three-year study that surveyed more than 4,500 people between the ages of 16 and 20, we looked at which factors were associated with young people who have made a “habit of service”. We defined this as when a young person took part in service in the preceding 12 months and confirmed they would definitely or very likely continue participating in the next 12 months. Participants who had taken part in programmes such as the NCS, VInspired and Diana Award were invited to complete the survey.

We found that young people with a habit of service were more likely to have started social action at a younger age than those without that habit. Those who first got involved under the age of ten were more than twice as likely to have formed a habit of service as those who started between the ages of 16 and 18. They were also more likely to be involved in a wider range of activities, such as volunteering, tutoring and helping to improve their local area, and to participate in them more frequently.

Building character

Given the sustained interest in character education within the Department for Education – and the recent publication of a book by the former education minister Nicky Morgan on the topic – we were also interested in how encouraging young people to make a habit of service relates to different types of character virtues.

Those with a habit of service identified themselves more closely with moral virtues such as compassion, honesty and integrity and civic virtues such as volunteering and citizenship than those who hadn’t developed a service ethic. They were also more likely to recognise the double benefit of undertaking service – that it helped develop their character as well as benefiting society more broadly.

We also found that, when young people had the opportunity to lead a social action project themselves and reflect on it afterwards, they were more likely to form a habit of volunteering. One of the most important factors in making a habit of this kind of activity was if the experiences were both challenging and enjoyable.

In line with many studies on volunteering, girls were more active participants and more likely to have formed a habit of serving their community than boys, as were young people who practised a religion. Parents and friends were also an important factor in whether a young person makes a habit of service. Friends were a bigger influence than parents on the group of 16 to 20-year-olds we surveyed.

I hope that these findings will help those in the voluntary sector plan and deliver youth social action programmes which support young people to cultivate a habit of service. But the opportunities children and young people get to help others must also be meaningful to them, as well as contribute to broader societal flourishing.

Google’s translation headphones: you can order a meal but they won’t help you understand the culture

Starting next year, universities have to prove their research has real-world impact

Starting in 2018, Australian universities will be required to prove their research provides concrete benefits for taxpayers and the government, who fund it.

Education Minister Simon Birmingham recently announced the Australian Research Council (ARC) will introduce an Engagement and Impact Assessment. It will run alongside the current Excellence in Research for Australia (ERA) assessment exercise. This follows a pilot of the Engagement and Impact Assessment, run in 2017.


Read more: Pilot study on why academics should engage with others in the community


Until now, research performance assessment has mostly been focused on the number of publications, citations and competitive grants won. This new metric changes the focus from inputs and outputs to outcomes. This is part of a continuing shift from quantity to quality, which began in earlier iterations of the ERA. The Engagement and Impact assessment reflects a significant change in thinking about the types of research impact we value and why.

For research to have an impact, it needs to be used or applied in some way. For example, health research aims to have an impact on health outcomes. For that to happen, doctors, nurses and people working in health policy would need to use that research evidence in their practice or policy decision-making.

Despite the initial focus on commercial outcomes, the Engagement and Impact Assessment has evolved to include a range of impact types. It provides an important incentive for researchers in all fields to think about how to engage those outside of academia who can translate their research into real-world impacts. It also enables researchers who were already engaging with research end-users and delivering positive impact to have these outcomes formally recognised for the first time at a national level.

Community input

Including an engagement component recognises that researchers are not in direct control of whether their research will actually be used. Industry, government and the community also have an important role in making sure the potential benefits of research are achieved.

The engagement metrics allow universities to demonstrate and be rewarded for engaging industry, government and others in research, even if it doesn’t directly or immediately lead to impact. Case studies were chosen to demonstrate impact because they let researchers describe the important impacts they are achieving that metrics can’t capture.

The case studies will need to include the impact achieved, the beneficiaries, the timeframe of the impact and the countries where it occurred. They’ll also include the strategies employed to translate research into real-world benefits.

The results will be assessed by a panel of experts for each field of research who will provide a rating of engagement and impact as low, medium or high.

Cultural impacts

The ARC has defined engagement as:

the interaction between researchers and research end-users outside of academia, for the mutually beneficial transfer of knowledge, technologies, methods or resources.

Impact has been defined as:

the contribution that research makes to the economy, society, environment and culture, beyond the contribution to academic research.


Read more: When measuring research, we must remember that ‘engagement’ and ‘impact’ are not the same thing


The definition of impact has been amended to include “culture”, which was not part of the definition applied in the pilot. This amendment speaks to concerns raised by the academic community about quantifying and qualifying impacts that vary significantly across different academic fields. It’s hard to compare, for example, the impact of a historical exhibition with the impact of astrophysics research on gravitational waves.

It’s also difficult to compare more basic or experimental research with applied research, such as health and well-being programs that can be directly applied in the community. Basic or experimental research can take a long time to lead to a measurable impact.

Classic examples of experimental research that produced significant economic, health and social impacts it didn’t specifically set out to achieve are the discovery of penicillin and the development of WiFi.

An addition, not a replacement

The traditional research metrics of grants, publication and citation, which work for basic, experimental and longer-time-to-impact research, are still in play. The Engagement and Impact Assessment has not been tied to funding decisions at this stage.


Read more: Explainer: how and why is research assessed?


A study of the impact case studies submitted to the UK’s Research Excellence Framework found high impact scores were correlated with high quality scores. The authors concluded “impact was not being achieved at the expense of research excellence”. Previous research has shown research quality is an important enabler of the use of research.

Engagement and impact outcomes for a specific field of research at one university will be assessed against the same field at another university. This is also the case with traditional metrics and grants assessment.

Engagement will be assessed on four key metrics and an engagement narrative. These metrics are focused on funding provided by end-users of research, such as businesses or individuals outside the world of academia who directly use or benefit from the research.

The four metrics are: cash support from end-users (against Higher Education Research Data Collection categories), sponsored grants from end-users, research commercialisation income, and income earned per researcher.

The engagement narrative will enable universities to provide detail about how they are engaging with end-users. There is also a list of other engagement indicators universities can draw on to describe their engagement activity.

At times, the value of research has been publicly questioned. The Engagement and Impact Assessment will help the general public better understand the value of the research they fund.

Business schools have a role to play in fighting corruption in Africa

In 2002, the African Union reported that Africa lost about USD$148 billion through corruption every year. This represented 25% of the continent’s combined GDP at the time. Nothing much has changed.

Last year, the global business advisory firm KPMG estimated that if South Africa reduced its corruption by one point, as measured by Transparency International’s Corruption Perceptions Index, it could add R23 billion to its GDP.

The thrust of these figures is that the lost money could have been used to finance institutional development and reduce the constraints on doing business in Africa.

In spite of the effects of corruption on the private sector, businesses in Africa are relatively silent about the menace. Efforts to combat corruption are largely championed by civil society, non-governmental organisations and international development agencies.

A new study has linked the private sector’s silence to the inadequacy of business education in the region. It notes that business schools can play a vital role in the fight against corruption. They can do this by nurturing business students to become institutional entrepreneurs – people who will bring about institutional change – not only in Africa’s economic domain, but also in the political arena.

Schools can equip students and managers with the knowledge and expertise to push for public accountability and good governance, as advocated by the World Economic Forum.

Why the private sector is silent

There are several reasons why private companies often remain silent about corruption. One is that some of them indulge in and benefit from corruption.

In Ghana, for example, a “create-loot-share” model of corruption persists. Politicians, public officers and businesses collude to create and profit from fraudulent acts, including inflated contracts. This is also common in Nigeria, where between 2009 and 2014 the government agency set up to vet procurement recovered about USD$2 billion from inflated contracts.

Even multinational companies from the least corrupt countries gain from Africa’s corrupt political elite. For example, in 2011, UK’s Shell and Italy’s ENI paid USD$1.1 billion to Nigerian officials for access to an oilfield currently worth USD$500 billion. US tyre firm Goodyear paid more than USD$3.2 million in bribes to Angolan and Kenyan government officials in order to win supply contracts.

The pressure on companies to indulge in corruption is considerable in Africa. According to data from the World Bank, 71% of enterprises in Sierra Leone, 66.2% in Tanzania, 64% in Angola, 75.2% in Congo and 63% in Mali expect to give “gifts” to secure government contracts.

The ethical dilemma for business managers is that refusing to pay bribes can cost their companies contracts, licences and revenues. Essentially, good companies which do not yield to extortion may lose out to bad competitors who do. Consequently, most companies yield to corruption or stay silent. Speaking up can make them targets for political witch hunts and discrimination.

A much deeper reason for the private sector’s inactivity is that managers simply lack the political skills required to shape their business environments. This deficiency arises because the link between political institutions and economic markets has not received adequate attention in business schools. So the schools are turning out managers with a good knowledge of business but an inadequate understanding of public governance and little ability to influence public institutions.

Most people, including business managers, feel powerless when dealing with corrupt government officials. They regard official institutions as too powerful to take on and see corrupt practices, such as bribery, as unchangeable. But with good education and training, this can change.

African business schools can help

Corruption is multi-faceted, so there is no simple solution to it. It must be battled on all fronts. Business schools can do their part by developing the competence of students and managers to confront corruption.

Some in Africa have already done so: three business schools introduced an anti-corruption programme sanctioned by the United Nations into their classrooms.

But much more can be done. Business schools should teach business-government relations, or corporate political activity. This is crucial because many business managers don’t know how to influence their political environments even though they are affected by government policies. Students and managers may be taught ethics in schools, but ethical values are difficult to uphold in contexts where corruption is highly endemic, such as Africa.

If the fight against corruption in Africa is to succeed, business managers must learn to engage public officers differently. The ability to do this can be developed in business schools.

Students and managers need to learn about political strategies that can change the way institutions work. Techniques for ad hoc management of bribery are no longer enough. Companies can, for example, present a united front against corruption so that none can be singled out for “punishment”.

The business community could also learn to self-regulate by refusing to deal with corrupt companies, as was recently reported in South Africa. Collective campaigns for public procurement transparency can also prevent politicians from using the private sector to plunder state funds. Inaction breeds corruption, as seen in Kenya’s USD$1 billion Anglo Leasing scandal.

African business schools are valuable in the fight against corruption. They can take bold steps to review their curricula and promote active corporate citizenship. When they see what a difference they can make, the continent may begin to shake off a major hindrance to its development.

Monday, November 27, 2017

Faces of Dreamers: Jorge Reyes Salinas, California State University Student Trustee

This is one in a series of posts on individual Dreamers, undocumented immigrants brought to the United States as young children, many of whom are under threat of deportation following the Trump administration’s decision last month to rescind the Deferred Action for Childhood Arrivals policy, or DACA.


When Jorge Reyes Salinas was 10, his parents cobbled together enough money to leave Peru to start a new life in Los Angeles. They wanted a better future for their only son, who thought he was going to Disneyland.

Today, Reyes Salinas is a DACA recipient attending California State University, Northridge, and is the student appointee to the Cal State Board of Trustees. He recently spoke with the Los Angeles Times about DACA and other issues.

Asked about the uncertainties and pressures of being a Dreamer, he said: “I can’t speak for every student, but I know it’s that pressure of proving that I’m worth that investment. Ever since I really understood my undocumented status, I’ve always been in fear for myself, for my family, for others like me.”

To read the full story, click here.

Universities are failing their students through poor feedback practices

Wednesday, November 22, 2017

Standardised tests limit students with disability


Since it was introduced in the 1800s, standardised testing in Australian schools has attracted controversy and divided opinion. In this series, we examine its pros and cons, including appropriate uses for standardised tests and which students are disadvantaged by them.


Educational assessment provides evidence of what students have learned and are able to do. Standardised educational assessments, such as Australia’s NAPLAN, are often used to judge and compare student achievement. They assess all students on the same content under standardised conditions.

The assumption is that this is fair, providing a level playing field for all students. Frequently, the focus of standardised administration becomes maintaining a test’s “integrity”.

What is “fair” for students with disability?

A major challenge in assessment is how to obtain evidence about learning from students with disability. We cannot know how these students are faring with their learning if assessments or tests are structured in ways that create barriers for them. We would not expect a student who is blind, for example, to complete a paper and pencil test.

Fairness can be thought of in two ways. It can mean procedures that treat everyone the same, or it can mean treating individuals according to their needs, to ensure a fair outcome for everyone.

For students with disability, the second way means removing assessment barriers that prevent them from achieving their best results. For a student who is blind, the obvious solution is a braille test, if they are proficient in braille, or a person or technological aid to read the test aloud, and record their response. This is known as an assessment adjustment (in Australia) or accommodation (in the US).

Reasonable adjustment protocol

Australian and international anti-discrimination law requires that reasonable adjustments be provided for students with disability, to give them the same access to assessments as other students.

The law also protects the “integrity” of an assessment or certification so that a student with disability is not reported as achieving something that they have not. An underlying assumption of adjustments is that the effect of the disability and assessment barriers are “neutralised”, and the students participate in the assessment on the same basis as students without disability.

Standardised test adjustments often provide common options. These might include the use of a scribe, having questions (unless reading is being tested) or instructions read aloud, a support person, assistive technology, a braille version of the test, extra time, and rest breaks.

A major concern that policymakers, teachers, and some students have regarding assessment adjustments is that they should not provide unfair advantage for students with disability. Extra time is one of the most contentious areas. Guidelines are usually stringent.

For example, the NAPLAN guide for students who need testing in braille suggests an extra ten minutes per half hour for a writing test, an extra 15 minutes per half hour for a reading test, and an extra 20 minutes per half hour for numeracy, with more time if needed. For other disabilities, the protocol guidelines state:

it is recommended that no more than five minutes of extra time per half hour of test time be granted; however, in some cases, up to an additional 15 minutes per half hour of published test time may be provided.
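In other words, the allowance scales with the length of the test: the published test time is divided into half-hour blocks, and a fixed number of extra minutes is added per block. The sketch below works through that arithmetic using the braille allowances and the five-minute recommendation quoted above; the base test length is a hypothetical example, not a figure taken from the NAPLAN guide.

```python
# Rough sketch of the proportional time adjustments quoted above.
# Extra minutes are granted per half hour of published test time.
# The 45-minute base test length is a hypothetical example, not a
# figure from the NAPLAN guide.

EXTRA_MINUTES_PER_HALF_HOUR = {
    "writing (braille)": 10,
    "reading (braille)": 15,
    "numeracy (braille)": 20,
    "other disabilities (recommended maximum)": 5,
}

def adjusted_time(base_minutes, extra_per_half_hour):
    """Return total test time after adding the pro-rata extra minutes."""
    extra = base_minutes / 30 * extra_per_half_hour
    return base_minutes + extra

base_minutes = 45  # hypothetical published test length
for test, extra_rate in EXTRA_MINUTES_PER_HALF_HOUR.items():
    total = adjusted_time(base_minutes, extra_rate)
    print(f"{test}: {base_minutes} min published -> {total:.0f} min total")
```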

Three issues with adjustments for students with disability

The first important issue is the goal of test adjustments. Do test adjustments, given their restrictions, only need to enable students to pass? Or should they enable students to do their best?

The second, equally important, issue is the lack of empirical research evidence regarding the appropriateness of common test adjustments. For example, one US study identified optimal time extensions for all students with disability, on average, as one and a half to two times the standard test time. For students with visual or hearing impairments, two to three times the standard time is necessary. This creates another disadvantage for students with disability if the test has to be completed in one sitting. Other researchers have shown that simplification of standardised tests can improve results for students with disability, without losing validity or reliability.

The third issue is that disability discrimination can take many forms. It also occurs when no adjustments are available and students with disability are not able to participate in assessments. However, students with disability want to be treated the same as other students, even to the extent that some students do not want adjustments.

For example, in international tests such as PISA or TIMSS, students with intellectual or functional disabilities in mainstream schools who “would be very difficult or resource intensive to test” are excluded. In the last round of TIMSS in 2015, 2.1% of Australian students in mainstream schools were excluded. However, an estimated 18% of Australian students receive some form of adjustment in their education. Potentially, 16% of students who complete these tests may need test adjustments. What adjustments were provided, and how they affected these students’ performance and Australia’s overall results, is not known.

Three ways standardised testing can be improved for students with disability

First, research is urgently needed on the impact of test adjustment restrictions on how well students with disability are able to do.

Second, the extent to which conditions such as time restrictions affect all students, not just students with disability, needs to be reconsidered. As US researchers have noted, time and speed of response are not usually identified as components of what is being tested. The best solution is to eliminate the role of speed in testing. When more time is available, all students do better, but students with disability improve more.

Third, alternative forms of assessment should be provided, as allowed under US legislation, rather than tinkering with standardised tests.

Assessment should provide information on what a student has learned and is able to do. It should not focus on the integrity of a test to the extent that this limits equitable participation by students with disability. Often, the outcome of standardised tests is to reinforce for students with disability that there are things they can’t do that students without disability can. Everyone should be given the opportunity to show what they know, regardless of disability.

Tuesday, November 21, 2017

Standardised tests are culturally biased against rural students


Since it was introduced in the 1800s, standardised testing in Australian schools has attracted controversy and divided opinion. In this series, we examine its pros and cons, including appropriate uses for standardised tests and which students are disadvantaged by them.


It is generally reported that rural students are up to one and a half years behind their metropolitan peers in the National Assessment Program – Literacy and Numeracy (NAPLAN) and Programme for International Student Assessment (PISA) tests. They are also less likely to complete year 12, and half as likely to go to university.

However, there are two key problems with how these determinations are arrived at: firstly, cultural bias in tests, and secondly the problem of averages.

Cultural bias

If you ask a teacher in a rural school about the gap in achievement in NAPLAN, they tend to roll their eyes and say something like:

is it any surprise that our kids don’t do as well? A lot of the questions don’t have any relevance to their real lives.

Such questions include a literacy task asking a student to write a recount of a day at the beach – when they haven’t been to one – or a numeracy task using a train timetable – which they don’t use.

ACARA’s response would likely be that timetables are in the curriculum, and therefore it is right to develop a test using them. However, the fact that timetables are in the curriculum doesn’t mean the curriculum is fair.

That is the underlying issue with standardised tests – they need a standard curriculum. We might want to benchmark students’ literacy and numeracy, but to do that we need to ask questions, and questions are always embedded in culture. The question is – whose culture?

The Australian curriculum has been criticised as “metro-centric”, in line with teachers’ comments about the tests having no bearing on their students’ lives. While we tend to accept cultural differences for students of Aboriginal and Torres Strait Islander descent, and students from language backgrounds other than English, we often don’t consider rural kids to be different.

However, the international field of rural literacies has shown us that rural people use different literacy constructions. In spatial reasoning, a key numeracy skill, we know that rural people use different spatial dimensions when drawing maps – not like the city blocks common in NAPLAN tests.

If we continue to ignore these differences in the construction of standardised tests, we will continue to produce disadvantage for rural students.

The problem of averages

To have a standard to compare results against in standardised testing, there first needs to be a “standard”. This standard, and average achievement, is skewed in countries like Australia, where nearly 70% of the population lives in capital cities. City dwellers skew the data towards their own norm, reinforcing the cultural relevance (or irrelevance, in the bush) of the tests and curriculum and making these standards seem normal and just.

Typically, results are reported for “metropolitan” and then “rural” students, with achievement in one compared to the other. This approach, however, collapses a lot of difference and creates much of the problem. When we break down NAPLAN results by the geographic classifications used by the Australian Bureau of Statistics (major city, inner regional, outer regional, remote, very remote) and control for socioeconomic background and Indigenous status, we get something different. We find that the negative associations are with areas surrounding large cities, and that results actually improve the further one moves from the city, until we reach very remote areas.

The problem is numbers and averages, and how we talk about places as “the same”. There is great socioeconomic diversity, and there are local environmental differences, between places such as Port Macquarie and Dubbo.

We’re still asking the wrong questions

This year, NAPLAN tests have revealed that student performance has only improved slightly since the tests were introduced a decade ago. While we are awaiting the final report, previous data have shown the gap between the top and the bottom, and between rural and city students, has not narrowed significantly either. So all this money, and the test anxiety experienced by children, has only reinforced what 40 years of educational sociology already told us: culture matters in education.

In the absence of sophisticated ways of measuring and reporting achievement, we fall back on old failed methods. All NAPLAN has done is reinforce a social gradient of advantage and disadvantage, and seemingly legitimise unequal outcomes. The process of schooling is deemed to be neutral, when in fact its process is the key issue.

Is it any surprise rural students seem to perform worse when, to succeed, they have to learn about a foreign place? Try finding a science text with examples from the country, or novels about rural Australia (the real ones, not the romantic ones). As a result, students have to mentally leave their rural place every day and imagine themselves in another world.

Standardised testing relies on getting the underlying curriculum right. If that curriculum continues to legitimise the marginalisation of people or groups, we cannot say we got it right.

New research shows explaining things to ‘normal’ people can help scientists be better at their jobs

Why being a historian is about so much more than producing displays for museums

Monday, November 20, 2017

English test for international students isn’t new, just more standardised


Since it was introduced in the 1800s, standardised testing in Australian schools has attracted controversy and divided opinion. In this series, we examine its pros and cons, including appropriate uses for standardised tests and which students are disadvantaged by them.


The English language skills of international students are a perennial topic for debate. A number of changes were announced by Education Minister Simon Birmingham in early October. Media coverage of these changes has caused some confusion, which deserves clarification.

No new tests, just new standards

The press has reported that international students who have completed an English studies course will now have to pass a new English test for entry into university courses. On ABC AM, the tests were also portrayed as being a new, additional measure. In fact, there is no further standardised testing being implemented by the government, but rather a change in regulation.

Simon Birmingham has not corrected this misconception. In response, English Australia (a national body for English language providers) produced a media release to reassure potential international students that there is, in fact, no additional testing involved.

What has changed, however, is Standard P4, where English providers must each set their own formal measures showing that their outcomes match related pathways to university programs. Many English providers currently meet these requirements and engage in assessment using informal standardised English tests, benchmarking against other English providers, and (for the best providers) tracking students at university to verify the effective preparation of those students. The new guidelines have been made in direct consultation with the English language provider industry.

The expansion of the regulatory standards to the vocational sector is interesting, since it now applies to:

“all courses provided, or intended to be provided, to overseas students that are solely or predominantly of English language instruction.”

Previously, VET English courses did not have to maintain a class size of 18 or fewer, nor did they have to provide a minimum of 20 hours per week of face-to-face class time. This is important because VET English courses can be used as a pathway into university, and the different practices mean students have different English outcomes. Students with lower English skills are less able to engage with the higher-level language required for study.

The changes will not, however, affect foundation programs that focus on academic skills, with a lesser focus on language skills teaching.

The problem with short courses

One issue these changes do not address is the pressure English providers are under to produce students with proficient English in short periods of time. A quality ELICOS provider that tracks the outcomes of its students may offer a course that takes 15-20 weeks, whereas another provider might offer a similar 10-week course. Universities will accept both. There is no standardised framework to establish equivalence, and the market favours the shortest course for the quickest university enrolment.

These short courses base their educational approach on the idea that it takes 10 weeks of intensive study to improve English by a certain amount, specifically an increase of 0.5 on the IELTS English test (scores range from 0 to 9). University students are usually required to have IELTS 6.0-7.0, depending on the course.
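Taken at face value, that rule of thumb implies a simple estimate: divide the gap between a student’s current band and the required band by 0.5, and allow 10 weeks of intensive study per step. The sketch below works through that arithmetic with hypothetical starting and target scores; as noted below, real progress slows at higher bands, so this is best read as an optimistic lower bound.

```python
# Minimal sketch of the "10 weeks of intensive study per 0.5 IELTS band"
# rule of thumb described above. Starting and target bands are hypothetical,
# and real progress slows near university-level English, so this is an
# optimistic lower bound rather than a guarantee.
import math

WEEKS_PER_HALF_BAND = 10

def weeks_needed(current_band, target_band):
    """Estimate study weeks to move from current_band to target_band."""
    gap = max(0.0, target_band - current_band)
    half_bands = math.ceil(gap / 0.5)
    return half_bands * WEEKS_PER_HALF_BAND

print(weeks_needed(5.5, 6.5))  # prints 20 (weeks)
print(weeks_needed(5.0, 7.0))  # prints 40 (weeks)
```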

Often, students study with an English provider until they achieve a level deemed equivalent to IELTS 6.0-7.0. But these levels are determined internally by the ELICOS provider itself, and students can then go to university without being tested independently. The new ELICOS standards place greater accountability on English providers, since they now need formal processes to show that their student outcomes are of a similar quality to other measures or pathways used for tertiary admission. This step towards standardising, however small, is welcome, because it should reveal differences in outcomes between different types of English course.

This change may make ELICOS providers take into account that students get very different results in 10-12 weeks, depending on how proficient they already are when they start the English course. Low levels of English can be improved rapidly, but progress slows as IELTS scores become higher, especially approaching university-level English. This effect continues for international students who go on to undertake university study. By the end of their degrees, some students will have no change or even a lower IELTS score than when they started university: none seem to improve by more than 1.0, even after three years.

Sink or swim

When reading media articles such as this, people may wonder how international students are even allowed into university if their English is not adequate.

It is true that universities have traditionally allowed students to enrol at a level of English where the IELTS test makers state “more English study is needed”. The university’s IELTS levels are then used as a benchmark for other university entry methods, such as ELICOS, and to set scores on comparable English tests (such as TOEFL).

Students who need to develop their English further will either sink or swim. Many students swim and tread water, but we hear a lot about those who sink, and some are driven to cheat. The changes in regulation do not deal with these issues.

Commissioning research on how long it actually takes for students at different levels of English to be ready for university study might be useful. This would provide the ELICOS sector with a realistic idea of how long their courses need to be, and the universities with a better understanding of how much preparation should be expected.

Universities, study agents, and overseas students also need to be made aware of how important it is to have solid English skills for university study and to become competitive in the workforce. This may help all parties understand that extra time spent on English language study may make all the difference for the future.

Moving Away From Data Invisibility at Tribal Colleges and Universities

By Christine A. Nelson

This blog is part of a series highlighting the findings from Pulling Back the Curtain: Enrollment and Outcomes at Minority Serving Institutions.



The invisibility of Native American [1] perspectives—those of Native students, researchers and their communities—continues to plague higher education, despite numerous calls for action from educational advocates across the country. A recent report from ACE, Pulling Back the Curtain: Enrollment and Outcomes at Minority Serving Institutions, confirms the challenges that other scholars have encountered in trying to be inclusive of Native perspectives: namely, a lack of data on Tribal Colleges and Universities (TCUs) and the students they serve.

Disaggregated numbers for TCU enrollment were unavailable for this report, given the low number of TCUs that participate in the National Student Clearinghouse (NSC), from which the researchers pulled the data. According to NSC, 84 percent of Title IV degree-granting institutions participate, whereas only 50 percent of TCUs (17 of 34) have reported student data at any one time.

The missing Native narrative and the lack of data availability are not a negative reflection on NSC or TCUs, but rather a challenge and an opportunity that institutions like TCUs face when establishing a foothold in the higher education landscape. Data availability for TCUs would improve the visibility of Native perspectives and drive higher education practice, policy and research toward improving the system of higher education for Native students and communities.

Invisibility in the Native Context

Invisibility, from a Native American perspective, has been largely framed in terms of student experiences, where Native students are not seen—their stories are either missing or misunderstood within higher education. Bryan McKinley Jones Brayboy uses an invisibility framework to describe how students navigate and persist at predominantly white institutions and how invisibility acts as a paradox. In one sense, invisibility marginalizes Native voices and experiences. In another sense, invisibility allows one to maintain cultural integrity while navigating potentially hostile spaces within higher education.

The book Beyond the Asterisks confronts the dominant narrative that positions Native students as an “asterisked” group in quantitative student data due to statistical insignificance. The authors instead use qualitative methods to demonstrate the multiple perspectives missing from the research literature, writing that the asterisk needs to be retired and Native student experiences need to be heard on a larger platform. Looking at the history and purpose of data through the invisibility framework contextualizes why the Native perspective has been absent and begins the discussion of how data can promote equity and visibility for TCU institutions and students.

History and Purpose of Data

After 500 years of multiple waves of colonization and educational approaches that favored assimilation, dedicated tribal leaders, tribal community members and educational allies began the tribal college movement in the 1960s and 1970s. Provoked by the fact that Native students were not persisting and graduating from mainstream institutions, TCU advocates sought to design institutions that best served individual tribal nations and their students. Today, 35 TCUs enroll nearly 28,000 full- and part-time students annually, and enrollment continues to grow incrementally.[2] Between the 2002 and 2012 academic years, overall TCU enrollment increased 9 percent.[3] TCUs are in the opportune position to influence and determine Native higher education practice, policy and research across the United States.

The notion of tribal self-determination is intricately tied to federal Indian treaty rights between the United States and tribal nations. At TCUs, tribal self-determination gives tribes the right to direct educational initiatives that serve their needs, which includes institutional data usage. In most mainstream higher education settings, the production of data informs policy formation and allocation of resources. TCUs do not object to the importance of demonstrating accountability and progress, but through tribal self-determination, they are in a position to widen the meaning and purpose of research and data by employing Indigenous paradigms that value cultural integrity.

In some ways, the invisibility of TCU data has now emerged as an opportunity to purposefully engage in a dialogue about data usage that values and centers on cultural integrity: These institutions are able to challenge the dominant research and data norms that often frame Native American communities as “less than” and deficient. The power to structure research protocols has historically been in the hands of non-Native communities, and the right of tribes to self-determine education, which includes data capacity building and usage, disrupts this relationship.

Considerations for Building Data Capacity

Addressing the issue of invisibility through data capacity building is not simple for TCUs or for the higher education industry as a whole. Areas to consider when promoting data capacity building include:

Data Use and Participation

As accountability movements continue to take shape within the higher education sector, TCU leaders are recognizing the need to be better stewards and users of data. This move toward promoting the collection and use of data is twofold. On one hand, participation in national data warehouses has pragmatic value in demonstrating the impact of TCUs. On the other, TCUs are in a position where they can (re)define data usage for the betterment of their communities.

Participation in national data warehouses will increase the visibility of TCUs and allow researchers to conduct more high-level analysis of TCU trends. Possibly more important is how data availability allows TCUs to understand and advocate for their contributions to the wider higher education field, and to better serve their students by further understanding their postsecondary pathways.

Cultural Integrity of Data

As participation in non-tribally controlled systems like NSC becomes more commonplace at institutions that serve Native communities, it is important to remain focused on cultural integrity. Currently, existing large data warehouses do not employ measures, frame data collection, or report the types of data that capture the unique contributions TCUs make to their communities (e.g., cultural and language revitalization).

While collaborative efforts with tribally informed organizations, like the American Indian Higher Education Consortium (AIHEC) and the American Indian College Fund (AICF), have begun reporting data indicators relevant to TCUs and their communities, local work has also been happening with the support of tribally informed agencies. For the past three years, AICF, through external grants, has participated in Achieving the Dream’s National Reform Network, a student success initiative to build capacity at the community college level. Two tribal colleges, Diné College (AZ) and Salish Kootenai College (MT), embraced a data-informed institutional culture that blended the pragmatic aspects of data with indicators that valued cultural integrity and tribal self-determination.

Resource Realities and Needs

As data usage continues to develop, we must consider the current resources available at TCUs. It has been well documented that TCUs face fiscal challenges that lead to a shortage of staff and faculty. These institutions frequently require employees to hold multiple administrative roles, and often the employee reporting institutional data has other responsibilities. Additionally, TCUs, unlike other higher education institutions, are required to report institutional data to the Bureau of Indian Affairs (BIA Form 6259); many times, supplemental reporting is also required by the tribal councils that charter the institutions.

Adding more responsibilities to a department or person without a commensurate increase in resources raises the question of whether participating in larger data warehouses is the best method for TCUs to build capacity. It also raises the question of how smaller and developing TCUs should begin the process of tribally self-determined data usage.

Improving Data Usage

A more collaborative and robust network is needed to support TCUs on data usage. Currently, organizations like AIHEC and AICF continue to lead collaborative data initiatives by engaging with TCUs on a local and individualized level. Researchers and policy analysts who work within larger data warehouses need to engage in dialogue to understand TCU data collection norms so they may systemically question how their existing practices ignore or impede TCU data usage. Once actively engaged in this process, outsiders can begin to understand, systemically and culturally, how TCU self-determination, which is linked to federal public law (see P.L. 95-471), is distinct from the dominant standards of data collection in U.S. higher education. Researchers and policy analysts at large-scale national data warehouses need to be open to respecting the tribal sovereignty inherently embodied by TCUs and their data usage. Failing to do so places an undue burden on TCUs to assimilate their values to meet dominant expectations and continues the narrative that TCUs are deficient in their operations.

Next Steps

The path toward TCU data capacity building needs to be informed by the intersection of history and purpose. History informs how data typically serves the dominant narrative of higher education; in today’s context, data collection and sharing need to shift to accommodate tribal values that inform TCU operations and sustainability. This multifaceted approach helps both tribal and non-tribal entities view the process as collaborative and TCU-led as they work toward improving TCU visibility and data usage. This will allow outside researchers to contribute to the TCU narrative by 1) asking how research and data visibility is relevant to individual tribal communities and 2) working collaboratively with TCUs and their institutional researchers[4] to advance the tribal self-determination of data and its contribution to improving institutional visibility.


[1] Native and Native American are used interchangeably and include both American Indian and Alaska Native people within the United States.

[2] Derived from IPEDS 12-month unduplicated head count (AY 2013–14) for all students attending TCUs. Does not include Wind River.

[3] Derived from the IPEDS 12-month unduplicated headcount (AY 2002–03 and AY 2012–13) for all students attending TCUs. Does not include Comanche Nation, Muscogee Nation, and Wind River.

[4] Not all TCUs or tribal nations have institutional researchers or institutional review board protocols, but there is a growing trend for tribal communities to adopt formal requests for research (Hernandez 2004).

Faces of Dreamers: Fatima, Case Western Reserve University

This is one in a series of posts on individual Dreamers, undocumented immigrants brought to the United States as young children, many of whom are under threat of deportation following the Trump administration’s decision last month to rescind the Deferred Action for Childhood Arrivals policy, or DACA.


Last month, Fatima, a first-year Case Western Reserve University (OH) student brought to the United States as a one-year-old from Honduras, traveled to Washington, DC, to tell her story to lawmakers.

Fatima was one of dozens of Dreamers who went to Capitol Hill as part of an advocacy event organized by FWD.us to stress the importance of Congress acting quickly to pass legislation protecting their status and their ability to legally work, attend school, and serve in the military. She told The Observer that DACA has allowed her to flourish academically and personally, and that she hopes to use her voice on behalf of all of those impacted by the DACA rescission.

“I’m trying to raise my voice and to share my story,” Fatima said. “We’re humans and we’re American in every single way except for a paper to prove that…The goal is just to become U.S. citizens and to be Americans. Because we are Americans and we just need to have that on paper.”

To read the full story, click here.

The Positive Economic Impact of Historically Black Colleges and Universities

Title: The Positive Economic Impact of Historically Black Colleges and Universities

Source: UNCF

A recent report from UNCF provides a glimpse into the economic impact Historically Black Colleges and Universities (HBCUs) have on the nation.

HBCUs Make America Strong: The Positive Economic Impact of Historically Black Colleges and Universities offers data on earnings, employment, and the economy for the nation, individual states, and institutions, demonstrating that the economic benefits of HBCUs are substantial.

The 101 public and private HBCUs in the United States enroll nearly 300,000 students. Eighty percent of these students are African American, and 70 percent are from low-income families. According to the report, HBCUs produce a total economic impact of $14.8 billion. Every dollar spent by HBCUs and their students generates $1.44 for the local and regional economies. In addition, HBCUs have added 57,868 on-campus jobs and 76,222 off-campus jobs to the local and regional job market.

HBCU graduates working full time can earn an additional $927,000 over their lifetime because of their credential.

To download the full report, please click here.

Sunday, November 19, 2017

Universities need to rethink policy on student-staff relationships

The Australian Human Rights Commission's report, Change the Course: National Report on Sexual Assault and Sexual Harassment at Australian Universities, was published in August 2017.

In response, Australian universities have taken various actions to address sexual assault and harassment on their campuses. Most are directed at making universities safer places to study and live. Measures include introducing mandatory training for all staff on responding to disclosures, teaching students about consent, and increasing the number of specialist counselling staff.

Framing staff-student relationships

Universities should also review the policies governing staff-student relationships. Across the sector, these relationships are framed as consensual and are couched in unhelpful, ambiguous language. We conducted a review of staff-student relationship policies at Australian universities, along with a number of international policies, and found the following similarities across most institutions.

Staff are generally discouraged from entering into sexual relationships with students. Discouragement aside, universities recognise that these relationships may occur. Many universities express reluctance to interfere in the “personal” lives of staff and students. Most set out some conditions that should apply when the discouraged but inevitable relationships form.

Conditions may include the staff member disclosing the relationship to the university. This may lead to adjustments to that staff member's duties, which are outlined in varying degrees of detail. Where specified, these may include removing the staff member from any assessment of the student's work and barring them from decisions about the award of scholarships or other distinctions. In the case of graduate research candidates, it may involve removing the staff member as senior or main supervisor, although they may still be able to serve on the supervision team.

Many Australian universities then link this policy with their Conflict of Interest policy. This signals that the biggest concern about staff-student sexual relationships is the possibility of conflicts of interest emerging for the staff member. This does little to address the potentially damaging impact of these relationships on students, and on the learning and research environment for other students.

We need better professional standards

The health care sector has much clearer professional standards. For health care practitioners, professional boundaries are recognised as integral to good practitioner–client relationships. Accordingly, professional standards prohibit sexual relationships entirely. This lasts either for the duration of the professional association or for some period (up to two years in some cases) after the professional relationship has ended.

The Medical Board of Australia states:

A doctor should not enter into a sexual relationship with a patient even with the patient’s consent.

For psychologists and counsellors, this prohibition extends to former clients and anyone closely related to the client.

The code of professional conduct set out by the Nursing and Midwifery Board of Australia notes that the vulnerability of clients under their care, and their relative powerlessness, must be recognised and managed. Sexual relationships between these professionals and current or previous patients are deemed inappropriate and unprofessional.

In comparison, universities have a relatively relaxed stance on these types of relationships. The ethical standards applied to other professions are explicit that the power imbalance is one in which free consent cannot be assumed on the part of the client or patient. It is up to the practitioner to ensure professional boundaries are maintained at all times; seeking sexual partners among their clients or patients puts their professional registration and their ability to practise at risk.

What would happen if we applied the same standards to university staff? If it is accepted that the imbalance of power between staff and students compromises the capacity of a student to provide free consent for sexual activity, and sexual activity without free consent is harassment or assault (as defined by law), then the current framing of staff-student “consensual” relationships by Australian universities is inappropriate. It is also inconsistent with the sector’s stated aim to focus on the interests and needs of students.

Universities should consider adopting professional standards like those in the health care professions. After all, universities' stated aim is to prioritise the welfare of students and their entitlement to learn and undertake research in a safe, respectful environment. If we are really to “change the course”, we need to do more than address student sexual conduct. We need to raise the bar for professional and ethical standards for all who work in this sector as well.

Most young Australians can’t identify fake news online

In September 2017, we conducted Australia’s first nationally representative survey focused on young Australians’ news engagement practices.

Our survey of 1,000 young Australians aged eight to 16 indicated that while roughly one third felt they could distinguish fake news from real news, one third felt they could not make this distinction. The other third were uncertain about their ability.

In part, we were motivated by the gravity of recent academic and public claims about the impact of the spread of “fake news” via social media – although we are well aware of arguments about the credibility and accuracy of the term “fake news”. In our study, we classified fake news as news that is deliberately misleading.

What we found

Age plays a role here. As children get older, they feel more confident about telling fake news from real news. 42% of Australian teens aged 13-16 reported being able to tell fake news from real news, compared with 27% of children aged 8-12.



We found young Australians are not inclined to verify the accuracy of news they encounter online. Only 10% said they often tried to work out whether a story presented on the internet is true. A significant number indicated they sometimes tried to verify the truthfulness of news (36%). More than half indicated they either hardly ever tried (30%) or never tried to do this (24%).



We also asked young Australians how much attention they pay to thinking about the origin of news stories, particularly those they access online. More than half indicated they paid at least some attention or a lot of attention to the source of news stories (54%). However, 32% said they paid very little attention and 14% said they paid no attention at all.

To us, the circulation of fake news on social media is troubling, given what we know about how social media platforms create news filter bubbles that reinforce existing worldviews and interests.

Even more concerning, though, is the way many social media platforms allow people with vested interests to push content into feeds after paying to target people based on their age, location or gender, as well as their status changes, search histories and the content they have liked or shared.

There is often no transparency about why people are seeing particular content on their social media feeds or who is financing this content. Furthermore, much online content is made by algorithms and “bots” (automated accounts, rather than real people) that respond to trends in posts and searches in order to deliver more personalised and targeted content and advertising.

Where are young Australians getting their news?

Given these concerns, we used our survey to ask just how much news young Australians get through social media.

With all the hype around young people's mobile and internet use, it might come as a surprise that social media did not emerge as their top news source, nor as their most preferred.

80% of young Australians said they had consumed news from at least one source in the day before the survey was conducted. Their most frequent source was family members (42%), followed by television (39%), teachers (23%), friends (22%), social media (22%), and radio (17%). Print newspapers trailed a distant last (7%).

However, this is not to diminish the significance of young people’s use of social media to consume news. Two-thirds of teens said they often or sometimes accessed news on social media (66%) and more than one third of children stated they did so (33%).

For teens, Facebook was by far the most popular social media site for getting news, with over half (51%) using it for this purpose. For children, YouTube was by far the platform used most for news, with 37% getting news from it.



What should we be doing?

There is no doubt that legal and regulatory changes are needed to address the issue of fake news online.

However, education must also play a critical role. Media education opportunities should be more frequently available in schools to ensure young Australians meaningfully engage with news media.

Media Arts in the Australian Curriculum is one of the world's few official, systematic media literacy policies for children from preschool to year 10, but it is being underused. Our survey suggests only one in five young Australians received lessons in the past year to help them critically analyse news, and only one third had made their own news stories at school.

The curriculum also needs to ensure young people understand the politics, biases and commercial imperatives embedded in technologies, platforms and digital media.

Our survey shows that young people are consuming lots of news online. However, many are not critiquing this news, or they don't know how to. The implications of this are not necessarily self-evident or immediate, but they may be far-reaching, influencing young people's capacity to participate in society as well-informed citizens.

Options on the table as South Africa wrestles with funding higher education

A report into the feasibility of offering free higher education at South Africa’s universities has finally been released. It has been nearly two years in the making, developed by a commission of inquiry that President Jacob Zuma set up in response to nationwide fee protests.

The lengthy report provides an accurate diagnosis of the state of higher education funding, as well as the problems it faces. But its proposed solutions are problematic. Many of its limitations arise from a failure to properly integrate an understanding of public finance and public economics into the analysis and recommendations.

The Commission’s report gets two critical things right – even though neither will please student activists. The first is that planned student numbers are simply too high and should be revised downwards. The second is that the country simply can’t afford free higher education for all students given its other priorities and weak economy.

But its recommendations are poor. Models are proposed that represent, I would argue, a significant step backwards from scenarios developed by the Department of Higher Education and Training two years ago. The department’s scenarios are indirectly supported in another report that’s just been released, by the Davis Tax Committee.

The tax committee endorses a hybrid scheme for higher education funding. This would retain and increase grants for poor students’ university fees. It would use loans to fund the “missing middle” – students from households that earn too much to qualify for government funding but still can’t afford higher education. If South Africa’s concern is really about immediate improvements in equitable access to higher education for poor students, this is the option that should be receiving the most attention.

The Fees Commission report

I have argued previously that one reason for the current state of affairs has been excessive student enrolment, relative to appropriate standards and adequate resources. Yet various policy documents propose rapid increases to enrolment in the coming decades.

The fees commission correctly argues in its report that these projected enrolment numbers are unrealistic. It points out that such high student numbers threaten quality and make adequate funding even more unlikely. It recommends that the numbers be revised downwards.

The commission also does well in recognising that – given the state of South Africa’s economy, public finances and other important government priorities – free higher education for all – or even most students – is simply not feasible or desirable. It rejects both the possibility of fully funded higher education and the demand for university fees to be abolished. But it endorses the abolition of application and registration fees, along with regulation of university fees.

There are three critical issues within the current student funding system.

  1. What household income threshold should be used to determine student eligibility for support from the National Student Financial Aid Scheme (NSFAS) to ensure all students who need partial or full support are covered?

  2. What resources are needed to ensure that all students below the threshold receive adequate funding, up to full cost where necessary?

  3. How should the support provided be structured in terms of grants versus loans, or combinations of these?

The commission errs in how it tries to address these questions.

A worsening of equity

The fees commission's fundamental proposal in response to the demand for free higher education is the adoption of an income-contingent loan (ICL) scheme. Under this scheme, all students who register for university, regardless of family income, are funded by loans up to the full cost of study.

These loans would come from private banks, backed by government guarantees of repayment. In other words, after a specified number of years, either the student or the government would have to start repaying the loan. There are numerous problems with this model.

The ICL would, in some ways, constitute a worsening of equity. Poor students who currently qualify for NSFAS grants would now only get loans.

In the ICL scheme, either students pay or the government does. The current state of the higher education system suggests a significant number of students will not be able to repay such loans. But nowhere does the commission calculate the implications for future government expenditure.
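
The scale of that contingent liability is easy to illustrate. The sketch below uses purely hypothetical figures – the average loan size, number of students funded and non-repayment rate are all invented for illustration – to show the kind of calculation the commission does not provide:

```python
# Purely hypothetical figures, invented for illustration only; the commission
# provides no such estimate and these are not its numbers.
avg_loan_per_student = 100_000      # assumed average ICL debt at graduation (rand)
students_funded = 500_000           # assumed number of students funded through the scheme
non_repayment_rate = 0.40           # assumed share of loans the state guarantee must cover

government_liability = avg_loan_per_student * students_funded * non_repayment_rate
print(f"Implied government liability: R{government_liability / 1e9:.0f} billion")
# -> R20 billion under these assumptions
```

The specific numbers are meaningless; the point is that the government's contingent liability grows directly with the share of graduates who cannot repay, and that share is precisely what the report leaves uncosted.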

A number of other proposals are seriously problematic. One involves extending the loan scheme to students in private higher education institutions. This constitutes a dramatic change in post-apartheid policy, potentially leading to indirect privatisation of the higher education system without proper consultation or sound basis for doing so.

Another is the suggestion that higher education expenditure should be benchmarked as 1% of South Africa’s Gross Domestic Product. This is wrongheaded because it does not take into account the proportion of young people in the country or the state of the basic education system.

The Davis Tax Committee's report is more narrowly focused but, perhaps as a result, endorses arguably the best and most feasible way forward for tertiary funding.

Better scenarios

The current NSFAS threshold is R122,000, which means that students whose households earn less than this in a year qualify for funding by the scheme. There are two problems: first, not even all students below this threshold are getting all the financial support they need. Second, there are students in the “missing middle” who are above the threshold. They cannot fully fund themselves but have no access to support.

In 2015 the department of higher education and training provided rough estimates of the cost of raising the NSFAS threshold and fully funding students below the different, hypothetical thresholds.

It estimated that increasing the NSFAS threshold to R217,000 and covering the full cost of study for all students below that level would require an extra R12.3 billion in 2016/17 for approximately 210,000 students.
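
As a rough sanity check – a back-of-envelope sketch using only the two figures quoted above, not the department's actual costing model – this works out to roughly R58,600 per supported student per year:

```python
# Back-of-envelope calculation using the figures quoted above (2016/17 rands).
# Illustrative only; it is not the department's costing model.
extra_funding_rand = 12.3e9     # estimated additional funding required (R12.3 billion)
students_covered = 210_000      # approximate students below the R217,000 threshold

per_student = extra_funding_rand / students_covered
print(f"Implied extra cost per student: R{per_student:,.0f} per year")  # ~R58,571
```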

The Davis Tax Committee effectively endorses this scenario, proposing a hybrid scheme that retains and increases grants for poor students' university fees but uses income-contingent loans to fund the missing middle. It estimates that an additional R15 billion could be raised annually for higher education through a combination of increasing the rate of income tax for the highest earners by 1.5%, increasing capital gains tax for corporations, and raising the skills levy by 0.5%.

In contrast, the fees commission's proposals for raising funds for the loan scheme, and its other proposals – such as taking R50 billion from a surplus in the unemployment insurance fund for infrastructure investment – arguably violate some fundamental public finance principles and may be illegal.

The tax committee’s report suggests that the department’s scenario is feasible from a public finance perspective. If the government is genuinely concerned with creating maximally equitable access to higher education for poor students, this is the immediate option that should be receiving the most attention. The design and cost of a more modest income-contingent loan scheme for those students who are not covered, even with expanded support, will require detailed technical analysis and further discussion. Some related work has been done under the umbrella of a separate income-contingent loan initiative, the Ikusasa Student Financial Aid Programme, which could be useful. As the commission report notes in rejecting it, however, there are various concerns about the actual financial aid programme proposal that make it an unconvincing option at this stage.

The different all-or-nothing approaches proposed by student activists and the fees commission risk leaving hundreds of thousands of poor and needy students without assistance – even though the resources are available to help them.

Thursday, November 16, 2017

NAPLAN has done little to improve student outcomes


Since it was introduced in the 1800s, standardised testing in Australian schools has attracted controversy and divided opinion. In this series, we examine its pros and cons, including appropriate uses for standardised tests and which students are disadvantaged by them.


In recent years, we have seen a global surge in standardised testing as nations attempt to improve student outcomes. Rich nations, as well as many middle- and low-income nations, have participated in international assessments such as the Programme for International Student Assessment (PISA), and also developed their own national standardised assessments. But can such assessments improve student outcomes?

Information from standardised tests is too limited to improve outcomes

The National Assessment Program – Literacy and Numeracy (NAPLAN) was introduced in Australia in 2008. It is a standardised test administered annually to all Australian students in Years 3, 5, 7 and 9. These tests are supposed to perform two functions: provide information to develop better schooling policies, and provide teachers with information to improve student outcomes.

However, a decade on and many millions of dollars later, student outcomes on NAPLAN have shown little improvement. Australia’s performance on international assessments such as PISA has actually fallen over these years. Standardised testing has not produced a positive effect on student learning outcomes.

Supporters of standardised testing see NAPLAN as necessary to know which schools and school systems are doing well and which ones are not. It is undoubtedly useful to know if certain parts of the country (such as regional or rural areas), or certain student populations (for example, students with an immigrant or low-SES background), are underperforming. Such information is also crucial when it comes to arguing for resource redistribution, as we see in debates about Gonski.

However, there are clear limits to what NAPLAN can tell us. While it helps us understand schooling at the system level, the information gained from NAPLAN about individual students, classrooms and schools is too limited and error-prone to be of use.

For instance, there is a limit to the number of questions NAPLAN can ask to assess a particular student’s skill or understanding. It may determine that a student cannot perform addition using “carrying over” based on their performance on one or two such items on the 40-item test. This means the error margins in these assessments are very high.

Such errors may be neutralised at a system level, when the test is performed at a sufficiently large scale and with a large sample of students, but when used at the level of individual students, classrooms or schools, NAPLAN assessment data is seriously flawed.
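
The statistical point can be made concrete with a simple standard-error sketch. The numbers below are invented purely for illustration – a two-item measure of a single skill and a 10,000-student cohort – and are not actual NAPLAN parameters:

```python
import math

# Invented illustrative numbers, not actual NAPLAN data.
p = 0.5         # assumed chance a given student answers a "carrying over" item correctly
items = 2       # a specific skill may be tested by only one or two items

# Uncertainty in one student's estimated skill, measured on two items
se_student = math.sqrt(p * (1 - p) / items)
print(f"Standard error for a single student: {se_student:.2f}")    # about 0.35 on a 0-1 scale

# Averaging the same noisy measure across a large cohort shrinks the error dramatically
cohort_size = 10_000
se_cohort = se_student / math.sqrt(cohort_size)
print(f"Standard error of the cohort average: {se_cohort:.4f}")    # about 0.0035
```

Under these assumptions, the uncertainty for an individual student swamps the signal, while the cohort-level average is estimated to within a fraction of a percentage point – which is why the data can inform system-level policy but not judgements about individual students, classrooms or schools.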

Assessment versus standardised testing

Assessment is integral to the teaching process and occurs almost constantly in good classrooms. Teachers have a range of assessment techniques, including questioning during the course of a lesson, setting assignments, using data from standardised testing, and developing more formal exams. These different assessment techniques fulfil a variety of different purposes: diagnosing student knowledge, shaping student learning and assessing what has been learned.

Increasingly, teachers are encouraged to individualise their teaching in order to accommodate the needs of individual students. This focus on “inclusion” extends to assessment, and teachers are expected to provide a variety of formats and opportunities for students to demonstrate their learning. Education policy statements, such as the 2008 Melbourne Declaration on Educational Goals for Young Australians, emphasise the valuing of student diversity.

Standardised assessments, on the other hand, assume that particular levels of achievement are expected of certain ages or year levels. Students are then classified as meeting, exceeding or being below these expectations. This flies in the face of the realities that teachers observe daily in their classrooms: students do not present themselves as “standardised” humans.

Geoff Masters, Chief Executive of the Australian Council for Educational Research, claims that in any given classroom, the differences between students can be multiple years:

Some Year 9 students perform at the same level as some Year 5, and possibly some Year 3, students.

By this logic, the notion of providing a standardised NAPLAN test for all Year 3, 5, 7 and 9 students is inappropriate.

Teachers who see their students all year long will always have a deeper knowledge of their students than point-in-time standardised tests can offer. Teachers can make better, more nuanced, more useful and more timely assessments of their students. They may choose to include standardised assessments in the suite of approaches they use, but NAPLAN should not be privileged over teachers' own assessments.

Despite this, enormous amounts of money and time have been spent training teachers to use NAPLAN results to inform their teaching. This not only provides an unnecessary and misleading distraction for already over-burdened teachers, but also undermines their own professional knowledge and judgement.

Stepping up accountability doesn’t necessarily translate to better outcomes

One of the goals of NAPLAN was to enhance accountability. By judging all schools on the same measure, comparing schools with similar populations, and then making these comparisons public, it was expected that all schools would lift their game.

This strategy assumed that schools could improve but were choosing not to, and that the inducement of market logics (such as school choice) would motivate all schools to do better. It also ignored the many out-of-school factors, such as poverty and geography, that affect the ability of teachers and schools to improve student outcomes.

The other logic was that schools that performed worse could learn from schools that were doing better. Besides minimising the importance of local factors to student learning and suggesting there are universal “silver bullets”, setting schools in competition with one another hardly provides incentives for better performing schools to share their knowledge.

Blame alone is not the answer

Accountability is important and standardised testing can inform policies and improve accountability. But to function as an instrument of accountability, these tests should not be high-stakes, high-stress or high-visibility, particularly since they are so error prone at the student, classroom and school levels.

The use of sample-based tests, such as the United States’ National Assessment of Educational Progress (NAEP), may instead provide useful information by state and territory, as well as by categories such as social capital, ethnicity and gender. This information could highlight problematic areas, and trigger closer and more targeted explorations.

To get this type of information, the tests need not be conducted every year, since effects of any reforms are seldom evident in one year. The error margins also make year-on-year comparisons of limited value. Sample-based tests will also remove the pressures placed on schools and students, which have proven so detrimental.

As recent NAPLAN results have shown, “blame and shame” alone does not improve student learning. Indeed, focusing solely on NAPLAN scores distracts from broader efforts to provide teachers, schools and school systems with the support needed to ensure all students are given the best chance to learn and succeed.

To date, NAPLAN has been largely used by politicians and the education system to hold teachers and schools accountable. But accountability can work both ways. If NAPLAN is to be used, we should also use it to hold the education system and politicians accountable for the resources and funding they provide to schools and the local communities they serve. Perhaps then we would see some real and sustained improvements in student outcomes.

Jobs and paid-for schooling can keep Tanzanian girls from early marriages

Sub-Saharan Africa is home to four of the top five countries in early marriage – or child marriage – rates: Niger, Chad, Mali and Central African Republic. Despite decades of campaigning to restrict or forbid early marriage, little has changed for the world’s poorest women. The percentage of these particularly poor women who were in a conjugal union by the age of 18 has remained unchanged for the continent as a whole since 1990 – and has actually risen in East Africa.

Early marriage appears to have absolutely no benefits. It accelerates population growth and decreases women’s participation in the labour force. It also reduces a country’s overall national earnings. Girls who marry before they turn 18 are at greater risk of childbirth-related complications that are the leading cause of death worldwide for girls aged 15 to 19.

But what’s not often reported in the media is that some girls themselves want to marry early. I discovered this when I conducted interviews with 171 people, most of them Muslim women, in two low-income neighbourhoods of Dar es Salaam, Tanzania’s largest city.

The poorest girls and women see themselves as having few possibilities to earn an income for themselves. Even before they marry, girls from poor families must often resort to premarital sexual relations with their boyfriends who provide food and money. For many low-income Tanzanians, it’s also normal to start thinking about marriage at roughly age 15. Established cultural expectations in many ethnic groups suggest that adulthood begins at age 15 or 16.

Yet even those girls and parents who would like to delay marriage often have little choice because of poverty and the fact that women in slum neighbourhoods have fewer opportunities to earn an income than men do. Creating more opportunities for young women and girls to work and earn money is one possible solution to early marriages. Subsidising secondary education to keep poorer girls in school for longer is another.

Choices

One factor that pushes some girls into early marriage is the hidden costs associated with education. Many Tanzanian girls drop out after primary school. Primary education in the country is mandatory by law and is nominally free of charge. But numerous hidden costs exist: additional fees, uniforms, books and transportation.

Only a small percentage of students achieve good enough exam scores to be accepted to low-priced government secondary schools. This forces the rest into private secondary schools, which are usually too expensive for the poorest urban residents. Parents recognise the value of education and want to school their daughters. They just can’t afford to do so.

Sometimes the girls themselves wish to discontinue their studies. They perceive the transactional intimacy provided through marriage as offering a more secure future than an expensive secondary education.

After age 15, girls are expected to be self-sufficient to gain respect in the eyes of others. Marriage is viewed as a more likely way to gain that respect than through years of education with its high costs and uncertain rewards.

The people I interviewed felt that premarital sex was seen as shameful in their neighbourhoods. Relying on a husband or fiancé for money, though, is a respectable means of displaying independence.

When girls drop out of school, cannot find work and don’t have enough starting capital to sell food or other goods in their neighbourhoods, early marriage is often the only culturally approved way to be a productive adult. It can be seen as a sign of “success” for a girl: it means she has a good tabia, or good character.

Employment could help

Cultural traditions are a popular scapegoat for policymakers. But these should not be blamed for what are perceived elsewhere in the world as “backward” practices. Trying to eradicate cultural attitudes when these are grounded in economic and educational realities does little to change people’s behaviour.

Women living in the poorest parts of any city need policies that create employment opportunities. This would offer girls who might otherwise choose early marriage other choices.

Tanzania was a leader in the 1990s in Africa when it came to inclusive policies towards informal and street traders. But a rapidly growing population and competition among traders means many women cannot afford the licenses and permits needed to set up a business in a busy area with many customers. They may also not have sufficient capital or may need to stay close to home to care for family members.

Ultra-low-interest microloan programmes serving the poorest areas of the city could be organised for women who have no option but to earn income from the smallest and least visible vending niches in the city.

Making education more attractive

Another option, or one that could run in parallel with improved access to work opportunities, could centre on education. Tanzania could consider employment-oriented education policies and subsidising secondary education for the poorest students. This would provide motivation for girls and their families to continue girls’ studies. These are issues over which poor families themselves have little control: structural change needs to come from above.

As long as girls and their families see the most viable – and morally acceptable – option for a girl’s economic survival to be early marriage with a male partner whose earning opportunities are greater than hers, the practice of early marriage is unlikely to decline among the urban poor.