This time last year a UK parliamentary committee of inquiry asked John Hood, vice-chancellor of Oxford University, and Janet Beer, vice-chancellor of Oxford Brookes University: "Is an honours degree in history from Oxford University worth the same as an honours degree in history from Oxford Brookes?"
The Guardian reported that Hood replied: "At Oxford, we apply a consistent standard in awarding degree classifications. We use external examiners and we take their assessments very, very seriously."
Beer said: "It depends what you mean by equivalent and worth," adding that her university knew its honours degree was "of a national standard".
The vice-chancellors' responses were pretty standard for UK universities but unsatisfactory for the members of parliament, who judged that the replies would not have passed a final-year school essay.
The UK MPs' concerns about variable university academic standards are shared by legislators and prospective students in the US and elsewhere in the Anglo world for two reasons. First, governments have cut public funding for universities, increasing their reliance on tuition fees and other private sources of funds.
At the same time, several governments have allowed private for-profit providers to offer higher education, and some have actively encouraged and supported their expansion.
The privatisation of higher education in many countries has increased the financial incentive for institutions to compromise standards to maintain their viability. It has also increased the influence of institutions and their managers over lecturers and their academic decisions, which were previously more strongly influenced by disciplinary norms and the expectations of the 'invisible college'.
The second reason for the public's growing dissatisfaction with universities' internal processes and assurances that appropriate academic standards are being maintained is the broader range of the public directly concerned with higher education.
In earlier periods of elite higher education, the students who attended universities and their parents knew the smaller number of universities by their reputation or knew someone who could advise them on college choice.
But the greatly increased number of institutions offering universal and near mass higher education are not well known and trusted by the much broader range of prospective students, many of whom do not have extensive cultural capital.
This article considers three measures currently being used or developed to establish similar academic standards in higher education: aptitude tests, the European Community's tuning process and the OECD's feasibility study for the international assessment of higher education learning outcomes.
Most US states do not have external final-year school assessment, so selective colleges require applicants to sit an aptitude test, most commonly the SAT or the ACT. The standards of US colleges and universities also vary widely, so many selective universities use aptitude tests to select students for admission to their graduate schools.
Common graduate admission tests are the Graduate Record Examination, Graduate Management Admission Test, Law School Admission Test and the Medical College Admission Test.
These tests are problematic because, as a University World News item reported, they seem to measure innate ability or intelligence rather than scholastic aptitude and are not good at predicting academic success.
Even the apparently discipline-specific law, management, medicine and other graduate admission tests are too general to assess disciplinary skills and knowledge. So students in their final year of school train for their aptitude test and neglect their subject studies.
Yet while these aptitude tests aren't specific enough to assess disciplinary knowledge and skills, they do measure cultural capital. Consequently, they distinguish between students by race, socio-economic status or class, and many distinguish between students by sex. The tests discriminate - but on the wrong grounds.
The tuning project
The European Community is developing its tuning process to build trust in the very different qualifications offered in the 46 countries of the European higher education area so that academic credits may be accumulated, transferred and recognised in the European credit transfer system.
The tuning process identifies for each programme in each subject area its objectives and learning outcomes expressed as knowledge, understanding, skills and abilities. From this, the process identifies general and subject-related competences students should achieve after completing the course.
The tuning project has identified 30 general competences for occupational therapy, for example, such as 'capacity for analysis and synthesis' and 'ethical commitment', and 35 subject-specific competences such as 'explain the relationship between occupational performance, health and well-being' and 'take a proactive role in the development, improvement and promotion of occupational therapy'.
The Reference points for the design and delivery of degree programmes in occupational therapy, published for the tuning project in 2008, runs to 212 pages and would serve as a good handbook for accrediting an occupational therapy programme.
In an earlier but somewhat similar development, the UK Quality Assurance Agency has developed an 'academic infrastructure' to give institutions a shared starting point for setting, describing and assuring the quality and standards of their higher education programmes.
The academic infrastructure comprises a qualifications framework, programme specifications describing the programme's learning outcomes and how these can be achieved and demonstrated, and a code of practice for assuring academic quality and standards in higher education.
The code has 10 parts covering things such as student admission, programme approval, assessment and providing for students with disabilities. The infrastructure also includes 'benchmark statements' for 57 disciplines which specify with reasonable clarity the knowledge and skills that students with a major in the discipline should have.
The European and UK 'reference points' and 'academic infrastructure' would require an extensive process to be implemented for each institution. The UK's process for monitoring compliance with its 'academic infrastructure' is not yet generally trusted and the European area has not yet said how it may ensure that the knowledge and skills it specifies for each programme are being developed to a similar standard by each institution.
The OECD is trying a somewhat different approach in its feasibility study of the assessment of higher education learning outcomes (AHELO), known colloquially as the higher education PISA. PISA, the Programme for International Student Assessment, is a test of 15-year-olds' achievement in reading, mathematics and science conducted every three years.
It has been used extensively in some countries to compare their schools' performance with those of other OECD countries and to stimulate change.
The assessment of higher education learning outcomes is trialling four strands: general skills, the disciplines of economics and engineering, learning contexts and value-added or the marginal gain from higher education.
The general skills study is an international pilot test of the Collegiate Learning Assessment, a US test of students' ability and learning in critical thinking, writing, and synthesising quantitative and qualitative data. Current participants in the general skills strand are Finland, Korea, Mexico and Norway.
The OECD says the discipline study is seeking to "assess competencies that are fundamental and 'above content'" - that is, focusing on the capacity of students to extrapolate from what they have learned and apply their competencies in novel contexts unfamiliar to them, an approach similar to PISA.
The Flemish community of Belgium as well as Italy, Mexico and the Netherlands are participating in the economics strand and Australia, Japan and Sweden are participating in the engineering strand.
The study of learning contexts will gather information from public statistics, previous research, and surveys of students and staff on physical and organisational characteristics (observable characteristics such as enrolment figures or the ratio of male to female students).
It will also collect details on education-related behaviours and practices (student-staff interaction, academic challenge, emphasis on applied work, etc.), psycho-social and cultural attributes (career expectations of students, parental support, social expectations of institutions) and behavioural and attitudinal outcomes (students' persistence and completion of degrees; continuation into graduate programmes or success in finding a job; student satisfaction, improved self-confidence, and self-reported learning gains claimed by students or their teachers).
The fourth study of value-added or the marginal gains by higher education institutions will be a review and analysis of possible methods for capturing marginal learning outcomes that can be attributed to attendance at a higher education institution.
The study will examine potential data sources, methods and psychometric evidence from existing national data with a view to advising on the development of a value-added measurement approach for a fully-fledged AHELO main study.
The OECD is currently developing assessment frameworks and instruments; it plans to test the instruments towards the end of this year and report in July 2011. The OECD says a full-scale AHELO is unlikely before 2016.
Tension between standardised specificity and vacuous generality
These attempts to assure the similarity of higher education academic standards zealously shun testing skills and knowledge specific to a discipline because this would lead to a standardisation and hence uniformity of curriculum and possibly pedagogy.
It is thought appropriate for 15-year-olds to follow common and standardised curricula, at least in the core disciplines of literacy, mathematics and science. But in higher education, apparently, common curricula are anathema even in the natural sciences and in applied empirical and social sciences such as engineering and accounting.
In attempting to assess students' scholastic achievement independent of curriculum, the general tests may distinguish between students' skills in general problem solving or expression but they fail to assess skills and knowledge specific to any discipline.
They therefore end up measuring general aptitude. Yet the public and employers are interested not only in graduates' general aptitude but also in what they learned in their discipline.
I suggest this tension be resolved by identifying a core study in each programme or discipline which would be assessed externally and used as an anchor for all other subjects which would continue to be assessed internally.
A suitable core subject for current Australian law degrees would be Commonwealth constitutional law. Public international law would be a good core subject if one wanted to develop an international orientation among law graduates and establish comparability with law programmes in other countries, both of which are probably desirable.
The core subject would be assessed externally. It would standardise curriculum in the core subject but this would be an acceptable price for assuring the similarity of academic standards in the core subject.
Each university could then use the external assessment in the core subject to moderate or scale their assessments in their other subjects in the programme. This would leave universities free to retain their diversity in most of their subjects.
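The moderation step in this proposal is essentially a statistical scaling exercise. As a purely illustrative sketch (the article does not specify a method, and the function below is hypothetical), one common moderation technique is to rescale a university's internal marks so that their mean and spread match the same cohort's results on the externally assessed core subject:

```python
# Hypothetical sketch of statistical moderation: rescale a cohort's
# internal marks so their mean and standard deviation match that same
# cohort's performance on the externally assessed core subject.
from statistics import mean, stdev

def moderate(internal_marks, external_core_marks):
    """Linearly rescale internal marks to the external anchor's
    mean and spread, preserving the students' internal ranking."""
    m_int, s_int = mean(internal_marks), stdev(internal_marks)
    m_ext, s_ext = mean(external_core_marks), stdev(external_core_marks)
    return [m_ext + (x - m_int) * s_ext / s_int for x in internal_marks]

# A cohort marked generously internally (mean 75) but averaging only 65
# on the external core subject has its internal marks shifted down.
print(moderate([70, 75, 80], [60, 65, 70]))  # [60.0, 65.0, 70.0]
```

In this sketch the external core-subject results act as the anchor: internal marks are adjusted up or down as a group, while the relative ordering of students within each university's own subjects is untouched.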
External assessment is rejected by most in universities. But universities cannot continue to ignore the need for a publicly verifiable assurance that their graduates are learning, and are being marked, at an appropriate standard. The current methods are inadequate and the current developments are unlikely to succeed.
* Gavin Moodie is a higher education policy analyst at RMIT University in Australia