Careers Update – February 7, 2019
February 6, 2019
Five Important Things to Keep in Mind when a Student Asks About University Rankings
The media frenzy around the release of the annual university ranking tables is an Australian winter tradition. It shows itself in many forms on a slow news day, from the media beat-up over falling standards in Australian higher education to the self-congratulatory smugfest when the latest cut of the data falls in a certain university’s favour.
Because of this prevalence in the media cycle, it’s only natural that parents and students try to use rankings as a definitive tool for comparing institutions.
It can be hard to disabuse students, or their parents, of the apparent importance of rankings. Here are five thoughts to keep in the back of your mind when you are asked about university rankings.
1. The major rankings don’t focus on teaching quality
Many of the international rankings say that they consider teaching quality in their methodology, but these claims rarely stack up.
For example, let’s consider the Times Higher Education (THE) university rankings.
Within THE’s methodology, 30% of an institution’s score is attributed to ‘Teaching’. But let’s break that category down and investigate what is actually being measured:
Teaching (total 30%)
- Reputation survey – 15%
- Staff-to-student ratio – 4.5%
- Doctorate-to-bachelor ratio – 2.25%
- Doctorates awarded to academic staff – 6%
- Institutional income – 2.25%
Not exactly the metrics you expect from a category titled ‘Teaching’, is it?
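For readers who like to see the arithmetic, here is a quick sketch in Python using only the weights listed above. It simply totals the ‘Teaching’ pillar and shows that the reputation survey alone accounts for half of it, which leads neatly into the next point.

```python
# THE 'Teaching' pillar weights as listed above (percentage points of the overall score).
teaching_weights = {
    "Reputation survey": 15.0,
    "Staff-to-student ratio": 4.5,
    "Doctorate-to-bachelor ratio": 2.25,
    "Doctorates awarded to academic staff": 6.0,
    "Institutional income": 2.25,
}

# The sub-weights add up to the full 30% pillar.
total = sum(teaching_weights.values())
print(f"Teaching pillar total: {total}%")  # 30.0%

# Share of the pillar that comes from the reputation survey alone.
survey_share = teaching_weights["Reputation survey"] / total
print(f"Reputation survey share of 'Teaching': {survey_share:.0%}")  # 50%
```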
2. Some rankings rely on opinion polls to determine their outcomes
Even when student experience and teaching quality are considered by ranking organisations, the resulting scores are often heavily diluted by the subjective opinions of the academic community.
In fairness, an institution’s reputation can play a role in the perceived validity of its publications, and experienced researchers are likely to have reviewed research from other universities.
When it comes to teaching, however, I don’t see how the opinions of the academic community could be informed ones, except about their own institutions. An academic from one university is unlikely to have met many students from other institutions. Now, I might not have a PhD, but I can think of a few more effective ways to measure teaching quality than hearsay.
Bias is also a major issue with these opinion polls. Virtually all survey participants are active users of the previously published rankings and are therefore undoubtedly influenced by them. This makes it harder for rankings to change from year to year: essentially, this year’s result is shaped by what was measured (and published) last year.
3. Research outcomes dominate ranking results
The methodology of most major ranking organisations is dominated by two things: research outcomes and academic opinions of those research outcomes. In every major ranking, at least 50% of the result is determined by these two factors.
For most coursework-bound students, this is unlikely to have much bearing on which institution is best for them.
4. Employability information is available through government sources
Over the past five years, the Federal Government has introduced a raft of measures designed to make it easier for students to compare Australian institutions, as demonstrated by the recent admissions transparency changes that affected the terminology of university admissions.
The most important of these innovations is the Quality Indicators for Learning and Teaching (QILT) website at www.qilt.edu.au.
QILT pulls results from several surveys and data sources to provide comparisons across a huge range of metrics. Students can compare discipline areas, universities, or discipline areas at different universities.
Most importantly, career counsellors can access graduate outcome data by field of study and compare institutions on graduate employment rates, median starting salaries, and the proportion of graduates in further study.
5. Context is king when comparing data
QILT is an amazing tool for gaining insight into different institutions and their program offerings. It can also mislead those who approach its data without context.
Many of the surveys are subjective, asking participants their opinion on resources, support, and the quality of their education. The problem with comparing these results is that every university has a different student body with a unique demographic and psychographic profile.
What a student considers to be a high-quality education will change based on background and interests. An ATAR 99 school-leaver on a Law or Medicine track will naturally have different expectations than, say, a mature-age student at a small regional uni.
Subjective surveys are a great way to get a feel for an institution, but generally are only truly comparable within the demographic context of that institution.
For institution comparison on QILT, you might be better off sticking to hard and fast metrics such as graduate outcomes.