Recently, while reading Australian Cultural Studies: A Reader, I was struck by just how long neoliberalism has been considered a defining feature of our times. Frow and Morris open their introduction with a meditation on a few words uttered by Rupert Murdoch in 1990. In response to a journalist's question about how to 'save the Australian economy', Murdoch replied, 'Oh, you know, change the culture' (vii). Frow and Morris take this sentiment to be indicative of a neo-liberal rhetoric 'now broadly shared' among the governing classes, one that seeks to modify behaviour across social fields so as to realise the imperative that 'fewer workers must produce more for less' (viii).
This paper concerns one manifestation of neoliberal reformism: the pseudomarketization of state-managed higher education through technologies of performance evaluation, specifically those used to measure and fund research quality. In particular, the focus is on the publication-related components of such schemes.
With regard to the British and Australian sectors, it appears we have arrived at a point where existing research quality mechanisms are under review, with new systems virtually certain but still under construction. In the UK, the RAE (Research Assessment Exercise), which determines levels of research capacity funding mainly through peer review of the quality of research publications, is in its last round. In Australia the existing 'quantum' of research activity (the Institutional Grants Scheme, or IGS) allocates research funding to universities in proportion to the numbers of recognised publications they produce, research student numbers and the value of grants they win. It is likewise on borrowed time, having originally been due for replacement this year by the RQF (Research Quality Framework, modelled on the British RAE). However, the new government has postponed the change. It is pursuing instead several concurrent reviews of higher education and the development of a new quality framework, the ERA (Australian Research Council, Excellence), which will put more weight on the ranking of publication outlets by 'quality'.
The results of these processes of review in Australia may define the nature of the academic enterprise for many years to come. It remains to be seen whether they will overturn the legacy of the Dawkins reforms of the 1980s and early 1990s, which created a competitive level playing field as higher education colleges became universities able to compete with the established research-intensive institutions for research funding (through a new set of grant schemes of which the IGS is one component). What is at stake is not just the nature of the academic vocation and the conditions under which academics work, but also the rationale of the modern university, or the differing rationales of different universities: 'diversity of mission', that is, the capacities of institutions to undertake research, teaching and other functions, is an explicit dimension of the recent 'Bradley Review' of Higher Education (Commonwealth of Australia, Review). With this in mind, and in light of cautionary tales derived from the British RAE, my principal aim here is to highlight how research quality frameworks can threaten inclusive, social democratic understandings of higher education. I will also make the contentious claim that in a world where such frameworks are inevitable, the current Australian system is about as good as it gets.
The bulk of the article outlines the evaluation frameworks and provides a discussion of their consequences contextualised by my reading of the political rationalities in play. It is partly derived from my experiences working as a researcher both under the RAE (at Cardiff and Lincoln) and the IGS (at Queensland and Sydney). This necessarily means that some of the evidence presented is impressionistic, but I make no apology for that. In the search for greater productivity and accountability, research evaluation mechanisms discipline workers, and do so in ways that can affect behaviour from the micro level of individual researchers up to the sector-wide patterns of production they foster. Arguing from experience is not to assume all academics will view research evaluation in identical ways, but to bring alternative perspectives to policy issues that are too often dominated by pragmatic and technocratic concerns with efficiency, as though the measures implemented are value-neutral ways of making things better. Rather, the structuring of the research environment involves the operationalising of specific values and priorities, and these should themselves be open to debate when policy details are considered.
Getting things to work
In deeming research evaluation frameworks neoliberal I do not mean that they are essentially all the same, but that in conceptualising collectivities as comprising actors responsible for governing their own performances and maximising associated outcomes, they are part of a broader cultural shift in the management of people and organisations. Governmentality theorists, influenced by Foucault's later work on the 'conduct of conduct' (220-221), have identified contemporary discourses that depict citizens as self-determining agents and encourage them to use their freedoms to act in ways that bolster the privatisation of responsibility. The rollback of the welfare state (the so-called 'bludger' culture of dependence that Frow and Morris saw as one target of neoliberalism) is one outcome. Another, as writers including Rose and du Gay have shown, is the change in human resource management which leads to workers being viewed more like entrepreneurs than 'organisation men' with stable roles and fortunes. The worker becomes someone tasked with realising organisational goals through using their own initiative, tested in this through performance appraisal, and held responsible for results. This is a culture of continuous audit by the organisation and self-management by the subject, based on the supposition that there is always room for improvement. Or, to put this differently, there is a management demand for the 'infinite resourcefulness' of workers in processes of adding value (Costea et al. 250).
Higher education is a distinctive sector in which the traditional model of academic work is that of a vocation, often undertaken more for the inherent joys of discovery and the circulation of ideas than for financial reward. Andrew Ross has observed how this romantic legacy of working with the gift of knowledge underpins a longstanding tendency for academics to gift production to their employers in self-directed unpaid overtime. When you love what you do it can merge with available leisure time. But universities are also becoming corporatised. This involves greater managerial intervention over what academics do with their time. As Rutherford argues, academic activities are increasingly measured for their instrumental value to the knowledge economy and its need for innovation and educated workers. Corporatisation also increasingly opens up academic practice to managerial calculations of the value for money of operations. Ross observes how the tradition of gifted labour is being converted into discounted production as paymasters devise ways to enforce unpaid academic work time via employment conditions. The increased casualisation of academic work since the 1980s is one aspect. It most often reduces the financial reward for teaching while relying on the worker's 'self-managed' cultivation of expert knowledge altogether outside of paid time.
The corporate imperative to enhance value for money is also applied to research, and in some respects it is engendered by government policy. The Dawkins reforms led to a system in which government sponsored research by paying universities for their quantifiable achievements. In recounting the development of cultural studies at the University of Melbourne, Simon During explains that 'By the late 1990s, conditions had changed. New hyper-Benthamite management techniques, new funding models and social objectives, geared toward boosting national economic productivity, governed the Australian university system' (275). He goes on, 'The budget I managed was determined by a formula which allocated money according to performance quantified across a number of variables. Each student, each text written by faculty, each PhD completion, each research dollar won had a money value, so that, at least in theory, it was possible to compute exactly how much each academic earned and to assess whether they were departmental profit centres' (275).
No choice but rationally to choose
This was not always the case. The ideas behind the proliferation of such mechanisms across the public sector had first to be invented. In the 1970s and 1980s, while the ideas of free-market economists such as Milton Friedman led to macroeconomic reforms now associated with globalization, other economists and mathematicians were busy working out how to apply rational-choice models, based on the inherently self-interested figure of homo economicus, to a wide range of fields of human endeavour. The 'rational' choice in this view is the self-interested one, calculable through analysis of the benefits to the self versus the costs of any course of action. Microeconomists such as Gary Becker sought to develop more sophisticated models of cost-benefit analysis that could be applied to understanding people's tendencies to do anything. Activities beyond direct financial transactions in markets could also be modelled in market-like terms, as involving agents' responses to 'price-like' signals regarding whether something is worth doing (see Harford for an account).
This kind of thinking entered into public sector management in the guise of public choice theory, which centres on the use of price signals to incentivise workers. The core belief of this approach is that public officers who are left to their own devices in civil bureaucracies will never perform optimally in the delivery of whichever public good they are supposed to deliver. In her insider's account of the influence of economic rationalism in governmental circles in Canberra, Lindy Edwards cites the public choice view that it is the self-interested behaviour of bureaucrats that leads to such putative 'government failure' (101). It is thought that in conventional bureaucracies, those who run them can too easily fulfil their sectional 'producer interest' in an easy life for themselves, because of the lack of market pressures to actually create an adequate good and exchange it for something else they want. In this view, the key to improving public sector performance is to introduce market forces in order to bring extrinsic motivating factors to bear upon public sector workers: to make high performance the rational choice.
This has led to 'the New Public Administration'. Government is redefined as a buyer of the services it requires from the public sector rather than the centre of operational control and responsibility. This allows for various ways of making the relationships between governments, civil servants and service users (citizens) more like business ones. According to Kaul, some of the key features are that policy formation is transferred away from the public services themselves (including government ministries) to separate executives that are responsible for setting standards, goals and service agreements that effectively act as contracts with the public sector organisations. This distinction makes 'delivery' the main autonomous responsibility of the latter, while the former decides upon the mechanisms through which delivery will be measured (performance targets) and the ways that performance levels of particular agencies will be variously rewarded or punished. These include payment-by-results at organisational levels, and also performance-related pay for staff. Thus government-as-buyer-of-services is, in theory, able to benefit from the forces of competition it unleashes within and between organisations, which scramble to deliver up to the mark.
Sweating your assets, UK style
In discussing New Labour's continuation of Thatcher's legacy, Stuart Hall notes how the new managerialism derived from public choice is apparently neutral, but is really 'the vehicle by means of which neo-liberal ideas actually inform institutional practices' ('New Labour's double shuffle'). It is part of a broader diffusion of neoliberal sensibilities through the population, creating a new habitus whereby citizens at any given site become self-managers in processes structured through market logic. In incentivising performance, and enjoining civil servants to act and feel like entrepreneurs, 'It replaces professional judgement and control by the wholesale importation of micro-management practices of audit, inspection, monitoring, efficiency and value-for-money, despite the fact that neither their public role nor their public interest objectives can be adequately re-framed in this way.'
It should come as little surprise that public choice was operationalised first under Thatcher. Reagan, after all, had no national healthcare or higher education systems to subject to it. Along with the first steps taken towards managing the National Health Service through performance criteria, the UK's Research Assessment Exercise was one of the earliest examples in the 1980s. The RAE promised that research capacity funding would be distributed to institutions on merit, through major peer-review exercises undertaken every few years. As well as reviewing publications in their area for their quality, RAE subject panels considered circumstantial factors such as 'research environment' and 'esteem indicators'. The results were overall quality profiles for research achievements in given subject areas within institutions, with the original scale running from 1 (no effective research culture) to 5* (thoroughgoing international excellence). Funding levels were then determined by the ratings in proportion to the numbers of staff identified as research active and the relative costs of research in each discipline. Each institution's overall research capacity income was an aggregate of its RAE earnings in the subject areas it had submitted for consideration.
Let us first of all concede that the RAE has had a major impact on British research, and there is no doubt that it 'worked' in the sense that it did boost research performance in the sector. Above all, by dint of the differential funding formula applied to the rating scale, it has succeeded in concentrating funds at the top, helping a small group of institutions in South East England to maintain their world-class reputations amid a broader decline in per capita funding to universities.
However, rather than being taken as an unproblematic boosting of the public good, the success of the RAE has to be understood as relative to the terms in which the system was configured. Those terms of success are narrow. They take no account of the many negative, unintended consequences of the behavioural change the exercise causes. The realisation that, by this point, the downsides outweigh the upsides is what has convinced the British government, in the very language it uses in such matters, that the RAE should be discontinued. The costs are greater than the benefits.
There is not space here to elaborate on all of the costs. But from the point of view of someone who lived in thrall to the RAE, and escaped, they include:
- Distortion of research and its transfer. Because certain kinds of output are valorised (single-authored work in prestigious fora, likely to impress an expert reviewer from a specific disciplinary framework when speed-read), researchers modify their behaviour to adapt to perceived demands. This means they may eschew worthwhile kinds of work they are good at in order to conform. Public intellectualism, collaboration, and interdisciplinary, highly specialised and teaching-related research are devalued.
- Neglect of teaching. In-class contact hours are cut and class sizes increased to free up lecturers' time for research, or contact hours are casualised, or dumped on continuing lecturers whose research is deemed unlikely to earn RAE money. The specific drivers of the RAE thus compound already existing threats to teaching quality caused by underfunding.
- The massive amounts of academics' time the RAE takes to administer. Time used for micromanagement and review is itself non-productive and could be spent on research. Likewise professional research administrators working on RAE compliance and process management cost money that could be spent on direct research support.
- Selective support for research within institutions. It becomes aimed at researchers and projects most likely to yield income. This is de facto infringement of academic freedom, as earning power becomes the rationale for the good of research, and it also damages staff morale and discourages risk taking in research design.
- The transfer market in academics between institutions. The movement of personnel who were already working well in situ (and therefore attractive to headhunters) is nice for the academics concerned, but comes with various costs not related to any additional yield: the on-costs of recruitment and relocation, higher salaries required to lure the personnel, investment in facilities and equipment required, the costs to the former institution as research teams/strengths they cultivated disintegrate.
- Ignoring quality that is not produced in certain quantities. The RAE defines a returnable researcher as someone who publishes at least four peer-reviewed works in a seven-year cycle. Regardless of the quality of their work, the reasons for their modest productivity, or their potential, someone who publishes less falls below the cut-off, and is prone to be managed out of their research career. By this logic, John Rawls would have become a teaching grunt had he been working in the UK circa 2000. This is a big equity issue considering many who fall into this category are not sloths or dullards, but early career researchers, academics with caring responsibilities, and those whose employers make significant non-research demands on their time.
- The large amounts of unpaid overtime worked by researchers attempting to achieve their output goals, the associated stress and loss of private time. The RAE exploits the tradition of voluntarily gifted labour in academic work. Its subjects are enjoined to strive constantly for greater quality in publication, while the esteem indicators that are a factor in the overall evaluation provide an incentive for researchers to take on a range of additional activities to show standing in their field, including publishing more than the entry-level requirement. Do or die pressure makes the gifting of 'as much labour as is required' to the employer by the academic a de facto obligation. Workers adapt the patterns of their lives to the dictates of work, internalising the imperative always to be available to pursue work projects as personal ones. In adopting this flexible orientation they become the subjects of work intensification, as in many other areas of the contemporary knowledge economy (Hudson 44).
It is not that universities should not change with society, nor that academics should not be accountable for the public good they deliver. Rather, the problem is in the hijacking of change, accountability and value for money by those who apply narrow economistic models of human behaviour to complex organizational operations, and, to return to Frow and Morris's point, do so with the ultimate aim of getting fewer workers to produce more for less, without much care for the real costs of such performance improvement.
The quid pro quo between universities and the British Government really broke down with the betrayal experienced in the 2001 round. The 2001 exercise saw improvements across the board. In particular, unprecedented numbers of 5 and 5* ratings were awarded. However, the reality behind the mechanism was soon apparent: the RAE is effectively a zero-sum game. Rather than the improved departments being rewarded with funding levels equivalent to those enjoyed in the previous round, the units of resource applied to each rating were reduced. Outcomes rated 5 received a 15 per cent cut in funding and those rated 4 received a 20 per cent cut, while 3a was cut by 70 per cent (Goddard). The annual research capacity pot didn't increase with the sector's performance. Instead it was spread more thinly, causing many institutions to incur losses on research, having invested in anticipation of reward. The exception was at the top end. Only the top 'starred' rank received similar funding in real terms as in the past. Everyone else who had performed well in the terms of the framework had produced more for less.
Unsurprisingly, the new universities, those apparently enfranchised by being able to compete with the established research universities 'on the same terms', were disproportionately affected. One professor of educational research (Griffiths) calculated that in 1996 the new universities shared less than 5 per cent of the available funding. After the 2001 RAE, they should have received 15 per cent, but the new funding formulas accorded them only about 2 per cent. Subsequently funding has become even more concentrated at the top. In 2004, funding to 3a departments was cut entirely. Departments rated 4 now receive the entry level of funding, with 5 receiving 3.18 times, and 5* receiving 4.036 times, the unit of resource applied to 4 (HEFCE). It beggars belief that 3a research cultures are not supported with a single pound of capacity funding when the descriptor for 3a achievement is 'national excellence in two thirds of outputs and international excellence in some'.
This manipulation of the prices accorded to the ranks has proven a way of creating teaching-only departments out of ones that, even by the RAE's own quality ratings, were producing research work of national and international significance; or worse, it has been a way of catalyzing their closure. But this was only to extend the ultimate logic of the RAE: the bifurcation of the sector into research-intensive and teaching-only academics, departments and institutions.
The forced drift of new universities back to teaching (with a few islands of funded research) grabs few headlines, and the controllers of the system do nothing to track such outcomes themselves. However, the closure of more prestigious departments no longer deemed profit centres or compatible with the desired research quality profile of their host institution has garnered considerable attention. In particular, the closure of science departments working in fields of national skills priority has become something of a cause célèbre. For the cultural studies community one of the most perverse outcomes of RAE-related managerialism was the closure of the renowned Centre for Contemporary Cultural Studies at the University of Birmingham. The Centre's great sin was achieving a 3a RAE rating in 2001. While, as Ann Gray notes, it continued to make an important contribution to the field of cultural studies, Birmingham University designated any unit receiving under a 4 rating to be underperforming and thus not worthy of investment. In one fell swoop, everything the Centre had achieved (including much of value that was not 'RAE-like') was swept away. And it is somewhat ironic that the proximate cause of the period of lower research productivity that led to the '3a' was that management had, a few years before the RAE audit, ordered the Centre staff to develop an undergraduate degree for the first time, thus diverting their time away from research for a critical period.
This kind of short-sighted distortion is a sad inevitability when institutions are forced to prioritise while scrambling for scarce resources. Equally pernicious is the dividing of the academic research workforce into haves and have-nots because of the methodology used to determine ratings. Just as a unit may be deemed underperforming relative to its ability to attract the desired price in the RAE, so individuals may be too. This is not a matter of the research-qualified-but-inactive 'dead wood' being excluded. That may be fair so far as their inactivity is attributable to themselves and not the conditions under which they work. Rather, in 2001, more than 32,500 research-active staff (40.4 per cent of the research-active sector total) were excluded from being returned to the RAE by their institutions (Corbyn). This is not because their work has no value. Rather, it is a numbers game. If, for the sake of argument, a given department is aiming for a 5 rating and has 20 researchers deemed up to the mark, but 10 deemed possible 4s, it will bar the 4s from the return. It will make much more money with an overall 5 rating funded at 20 heads than if it is rated 4 but funded at 30 heads. Thus those researchers seen as presenting the risk of bringing down the overall rating into the band below are excluded, even if their work is good by sector standards. In turn, this affects the individual's future entitlement to research time and resources, and their career prospects, and it disproportionately applies to early-career researchers and women.
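The arithmetic of this numbers game can be sketched in a few lines. The rating multipliers below follow the post-2004 HEFCE weights quoted earlier (a 5 attracting 3.18 times, and a 5* 4.036 times, the unit of resource applied to a 4); the unit of resource and the staff numbers are hypothetical illustrations, not actual figures.

```python
# A sketch of the RAE submission 'numbers game' described above.
# The rating multipliers follow the post-2004 HEFCE weights quoted
# earlier (4 = 1.0, 5 = 3.18, 5* = 4.036); the unit of resource and
# the staff numbers are invented for illustration only.

MULTIPLIERS = {"4": 1.0, "5": 3.18, "5*": 4.036}
UNIT_OF_RESOURCE = 10_000  # hypothetical pounds per submitted head per year

def rae_income(rating: str, heads: int) -> float:
    """Annual research capacity income for a submitted unit."""
    return MULTIPLIERS[rating] * UNIT_OF_RESOURCE * heads

# Option 1: submit all 30 staff and risk the weaker work dragging the unit to a 4.
inclusive = rae_income("4", 30)
# Option 2: exclude the 10 'risky' researchers and secure a 5 on 20 heads.
selective = rae_income("5", 20)

print(f"inclusive return: {inclusive:,.0f}")
print(f"selective return: {selective:,.0f}")
```

On these invented figures the selective return earns more than double the inclusive one despite submitting ten fewer heads, which is precisely the incentive behind the exclusions described above.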
So it is that a mechanism designed to squeeze maximum performance out of human resources works in exclusionary ways. It makes the academic world into a short-termist one of narrow performance cut-off points through which workers are judged in virtually all-or-nothing terms in spite of institutional complexities. Resources follow in line, buttressing the different classes of knowledge workers.
Superficially it appears to be a transparent system for distributive justice. All parties appear to get what they deserve. Yet, like any meritocratic regime, the RAE does not capture the simple reality out there. It specifies and controls debatable definitions of the good, and then places debatable values on them and on the things necessarily deemed not so good in its terms, values that then determine the reality. In this it seeks to model performance on market lines. But in capitalist free markets, the 'good' is money itself, exchange value. By definition, it matters not one jot to the recipient of a profit what non-pecuniary costs have been incurred along the way. The same cannot be said of university operations. Furthermore, in free markets price values are determined by the balance of powers of supply and demand, not the fiat of politicians. The arbitrary manipulation of rewards in the RAE makes a mockery of the need for management strategy and planning that the system itself engenders.
In short, the RAE is a quasi-market that revolves around the idea that imitating market instrumentalism leads to an improvement in the delivery of the public good. But it is at least as logical to view it as the importation of market failure into the public sector.
The quantum effect
When I was following the debate about the RQF in Australia from the UK a few years ago, having experienced both the Australian system and the RAE, I wrote to former colleagues in Australia warning them of the psychic costs of the RAE on staff, and of how it is divisive, unfair and detrimental to other academic activities while appearing to be a mechanism that can lift all boats. I might have missed it, but, while there were technical objections, I didn't get the impression that many academics saw it as a neoliberal ruse to squeeze extra out of workers while rewarding only a 'world class' research elite with any significant resources.
The RQF has now been cancelled (though possibly only deferred while the new government takes views), but a new performance management and funding system for research is in the offing to replace the much-maligned 'quantum' (IGS). The IGS is the method of calculating institutional research grants from a mixture of research income won (60%), research student load (30%) and academic publications (10%), as referred to by During above in less than flattering terms. However, although it is also a performance mechanism linked to competitive, selective funding that emerged in the same socio-historical context as the RAE, the quantum is essentially a means of indexing funding to levels of research activity rather than to the quality of research outputs. It is important to stress that the IGS is not a ranking scheme like the RAE, and that it rewards research activity in all its granularity, rather than operating a cut-off logic regarding what outputs can be eligible for evaluation and funding. In an annual round, any unit of activity that has passed a basic threshold of academic peer review (winning any grant, publishing any article) attracts funding. It does not infringe on academics' autonomy to decide the best form and venue for their work, nor does it require a researcher to achieve a certain level of productivity before their work is credited.
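As a rough sketch, the 60/30/10 weighting described above can be read as allocating each component of a funding pool in proportion to an institution's share of the sector total for that component. The pool size and the institutional figures below are invented for illustration; the real scheme's data definitions are more involved.

```python
# A minimal sketch of the IGS 'quantum' weighting described above:
# 60% research income won, 30% research student load, 10% publications.
# Each component of the pool is allocated in proportion to the
# institution's share of the sector total for that component.
# All figures here are invented for illustration only.

WEIGHTS = {"income": 0.60, "students": 0.30, "publications": 0.10}

def igs_grant(pool: float, institution: dict, sector_totals: dict) -> float:
    """The institution's grant as a weighted sum of its sector shares."""
    grant = 0.0
    for component, weight in WEIGHTS.items():
        share = institution[component] / sector_totals[component]
        grant += pool * weight * share
    return grant

sector = {"income": 1000.0, "students": 500.0, "publications": 2000.0}
uni = {"income": 100.0, "students": 50.0, "publications": 100.0}  # 10%, 10%, 5% shares

print(f"grant: {igs_grant(10_000_000, uni, sector):,.0f}")
```

Note that any unit of recognised activity, however modest, adds to the institution's shares: there is no cut-off below which work earns nothing, which is the contrast with the RAE drawn above.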
A critique of the quantum is that it just rewards the cranking out of product, regardless of quality. This is the view that encouraged Canberra to develop the RAE-clone, the RQF. A series of studies based on analysis of Australian research output was undertaken under the auspices of the Research Evaluation and Policy Project at the Australian National University. They contributed to the view that the quantum is detrimental to Australia's research performance so far as quality is concerned. In one such paper, Linda Butler estimated the quality of Australian research publications by ascertaining the impact ratings of the journals in which they were published using Science Citation Index data. She argued that the introduction of the Research Quantum measures in the mid-1990s correlated with a discernible trend in Australian academic publications:
The reaction of Australians to these signals is entirely predictable—their publication output has increased dramatically in the last decade. But as quality is paid scant regard in the measures, there is little incentive to strive for the top journals, and this paper shows that the biggest increase has been in those journals at the lower end of the impact scale (39).
It would appear to me that Butler is drawing a misleading conclusion from her analysis. In fact, when other factors are excluded, it is evident that the quantum did actually lead to an impressive increase in research at the top end of the impact scale, as well as in the middle and at the lower end. It did indeed lift all boats. Yet a 'dramatic' increase in Australia's research productivity at all levels was explained away as failure because the lower end grew faster than the upper.
It seems logical to deduce that the publication element of the quantum incentivised all researchers to publish, but that, just as only a few students gain high distinctions while most receive other commendable grades, so too the majority of researchers placed their extra publications in less prestigious, 'ordinary' journals. And there is only so much room at the inn: how many articles can top journals publish? In essence, Butler discounts the significance of the sector-wide improvements, preferring to adopt a public-choice view that the rank and file of researchers have taken the easy route and published outside of Nature and Science en masse. Another interpretation is that they all found suitable niches for their work in the broad environment of academic publishing.
The change lobby concentrates on the lack of Australian institutions at the very top of international indicators (like the Times Higher Education Supplement's 'World Rankings', or the Shanghai Jiao Tong University's 'Academic Ranking of World Universities'), and seeks ways to get an Australian elite dining at high table. There's nothing wrong with giving more money to Australian teams while they are doing important work, as long as this is not linked to reductions in capacity funding for others (who knows where the next leading edge of innovation will break out?). And there must be simple ways of reviewing current indicators and supplying additional resources to support excellence without detriment to the rest of the research sector. However, there has been little policy discourse focussing on how the quantum could be modified. The accent remains squarely on devising a wholly new system. But in this the advocates of change have been mandated and commissioned by a former government that sought ever-greater returns for little additional investment. The econometric myth is that a whole new system of measurement can usher in a whole new world of excellence, even after the historical productivity gains fostered by the quantum.
A new ERA?
Nonetheless, reform is endless in an era bent on the incessant search for greater value for money. The evidence seems to be pointing towards a successor to the quantum that will put much greater weight on measures and bands of output quality. The Butler position seems to be accepted as the prevailing common sense in research policy circles, and by the new government. Although it is not (yet) formally linked to research funding, the Excellence in Research for Australia Initiative (ERA) currently under development by the Australian Research Council is based around three categories of performance indicators: activity, quality, and applied research and research translation. Exactly how these will be measured and weighted is still under consideration. However, publication indicators are proposed for use only under the 'quality' heading, with the principal approximations of quality being citation levels and rating of outlets (journals, monograph publishers, conferences, etc.), which are to be divided into four tiers: C (bottom 50%), B (next 30%), A (next 15%) and A* (top 5%).
At the time of writing, the journal rankings are out to consultation. Most of the debate concerns which journals should be placed in which tier. This is what the ARC wants to determine with the help of the sector. However, we should at least consider the possibility that the ERA may create effects similar to those of the RAE. On the face of it, the ERA does not resemble the British system. It will probably not deploy much peer review of publications themselves, so would waste less academic time on that process. However, it would still fundamentally valorise certain indicators of quality, and as such may bring its own suite of distortions as institutions and individuals modify practices and set new priorities. These might or might not resemble the distortions associated with the RAE.
For instance, the ERA's current Research Outlet Rankings (Australian Research Council, Consultation) could lead to unintended consequences. Not only is publication venue a questionable proxy for quality (be advised that the article you are now reading, and indeed any article in Australian Humanities Review, is a 'B' according to the current rankings, regardless of what you think of it), but academic publishing is a broad ecology that includes many kinds of publication in which people publish for a range of good reasons. Once the A*, A, B, and C tiers become a central focus in the micromanagement of research within institutions, that whole ecology could be distorted in ways not yet known. The stampede for A* and A outlets will no doubt produce more Australian publications in them, but at what cost? If applied, the publication criteria of the ERA would threaten academic autonomy of publishing, which revolves around professional discretion in presenting ideas to communities of interest. The kind of work valued would be prescribed, once again, by fiat. Everyone would be urged to chase the top journals all the time, restricting their work to what they think those journals want. And they would most likely have to bear the psychic costs of frequent failure because of a false performance standard applied to all.
In particular, what happens to specialisms? The lists of rankings privilege high-profile 'general interest' journals within disciplines. If you happen to work in a field with specialised journals ranked B and C, via which your peer community works through its core issues, what will you do? Accept your designation as a second-class citizen, or give up the work you are dedicated to by vocation in order to strategise about ways of getting into the top tiers? For similar reasons, what about interdisciplinarity? And place-specific work? The problem for law, where many publications are directed towards jurisdictions, has been summarised in David Hamer's recent piece in The Australian. Most of the 'top' journals are inevitably focused on U.S. law, and the rankings devalue the best Australian journals. This may prove a problem across the humanities and social sciences as publications addressing local issues have smaller readerships and citation counts, meaning they are less likely to be deemed 'top' in their fields. This would appear to be a bizarre bias against important Australian-specific research and associated outlets. Or, just as problematic, journals could be deemed A* just because they are the best in Australia. We are entitled to ask what public good is served by this arbitrary valuation of some kinds of work over everything else.
Despite the differences from the RAE, if the outlet tiers are linked to funding in ways similar to the RAE, such ranking could create large disparities between incomes generated by different academic publications. The ERA maintains the fundamental quality ranking approach of the RAE/RQF. The double whammy could come when politicians decide what price to put on outlets. Who knows what the values will be? They will likely vary with performances and funding levels in a zero-sum game. As with the RAE, such unpredictability of reward would be deleterious to the very strategic planning culture the system requires. For the sake of argument, let's imagine the government rewards publications with units of resource roughly equivalent to those of the RAE, e.g.: C=0, B=1, A=3, A*=4. It is not at all infeasible that the disparities could be so great. In fact this would be generous to the lower ranks in comparison with the RAE. The RAE has never rewarded the lowest two ranks of quality on its scale, and since 2004 it has not funded the bottom four on the seven-point scale. If this were translated to the ERA, it could mean that 'C' and 'B' performances are cut out of funding altogether. It is at least wise for the research community to envisage such a scenario.
While it would be naïve to suggest the gentle art of performance management is not yet practised in Australian universities, a research framework that induces a cut-off culture as severe as the RAE would intensify it further. In combination with continued low funding and a zero-sum research pot, it would skew research culture towards exclusivity and away from the relative inclusivity of the IGS publications component, which validates and funds any output that has passed academic peer review (I know many Australian colleagues will baulk at such comparative praise of the quantum, but there you have it). In chasing the much higher incomes that come with 'the best', managers would have a newly narrow, short-term incentive to leave the residuum to rot as they invest in their elite research earners only.
When it is considered that the recent discussion paper of the Bradley review (Commonwealth of Australia, Review) makes it quite clear that the specialisation of institutions and the relationship between teaching and research are back on the agenda, there is a real risk that the new research framework could be configured so as to force many institutions and departments to choose between teaching and research on economic grounds. Low or no levels of funding accorded to research performance deemed average or good could force such 'rational choices'. This is what happens to university research under mechanisms like the RAE, which put a low unit price on many valid research outcomes (including academic publications that are rated good, but not excellent). Such a mechanism does not fairly reward all for their efforts, and it fails to guarantee widely distributed opportunities to maintain production, because it does not provide adequate revenue to invest in future operations. It is an under-the-radar way of returning the sector to research-intensive and teaching-intensive institutions while retaining a patina of neoliberal meritocracy. If the prices applied to B and C are so low that they do not actually cover the real costs of research, institutions that attract mainly those levels of funding will start to lose large amounts in undertaking research, and will have an incentive to give it up.
There is a lot to worry about here from an equity viewpoint. The uneven distribution of capacity to do research between individuals, departments, institutions and types of university is a class issue about who gets the opportunity to work with and benefit from knowledge, and on what grounds and under what conditions they are attributed this entitlement. As Frow notes, to take an example from my discipline, cultural studies in Australia was developed at universities outside the research-intensive Group of Eight ('Australian Cultural Studies: Theory, Story, History'). There is every chance that any successor to the quantum will force virtually all funded research back into that consortium, by design.
Such sequestration of research from teaching would have familiar spatial and demographic dimensions. Among other things, it equates to barring all but the most 'educationally mobile' working class students (those who study in the Group of Eight) from accessing research culture, from experiencing universities as the sites of the production and dissemination of knowledge, and from getting involved in these processes. And those staff most likely to experience the greatest barriers to performing against the required criteria would be those hemmed in by their lifeworlds, not the least able. Domestic and caring responsibilities, and any other legitimate demand on an academic's personal or work time that inhibits their ability to undertake research (including as unpaid overtime), would become even more inequitable career determinants than they are already. Performance differences are amplified by new ways of expressing them. Two academics produce similar amounts (and quality) of outputs, but one is totted up to earn on average $15,000 a year through publications, the other $5,000. Who gets the job or promotion?
Conclusion: resisting cut-off culture
If the issues are to be judged in terms of the sectional interests of the academics expected to deliver, the biggest externalised cost of the RAE was the crazy work practices it normalised by creating such high-stakes rewards. In my experience, in departments striving for RAE achievements, extreme working hours were quite commonly expected of the individuals with responsibility to deliver, and whose careers depended on it. It is our personal selves, 'us', who pay with our finite energies when we take on the subjective orientation of 'infinite resourcefulness' to meet productivity demands. In doing so we subsidise our employers with nothing in return except the right to win the resources to continue to do our jobs, until the next set of hoops comes along. But if you are unlucky, you pay the ultimate price. You go through all this and fail, not because of your inherent abilities, and often in spite of 'improved' achievements. You fail simply because the system defines success in a certain narrow way and you fall outside the range. At the same time, there is no easy way for academics to stand outside. The performance mechanisms are wrapped around things researchers care about and do. Academics are implicated by undertaking the very activities that make up their vocation, and they cannot desist without feeding the very system they resist, as non-compliance is itself converted into performance data. To attempt to do so would be to boycott one's own research career.
Of course, governments that are deeply implicated in the public choice approach do little to evaluate its efficacy overall. However, one review that did question the unintended consequences of one scheme was the so-called 'Roberts Report' (Review of Research Assessment). It catalysed the discontinuation of the RAE by highlighting some of the unintended consequences it had incurred. However, it did not dislodge the behavioural ideology upon which the RAE was based. There is no sign that any alternative scheme will address the inequities it fostered, or many of its costs. Rather, the aim is only to create a more streamlined version to reduce the operational costs that were seen as a drag on its efficiency. The performance incentives, quality rankings and funding methods are not in question. Likewise it would be prudent to assume, until there are clear indications otherwise, that the ERA will reproduce the main drivers of the RAE, in a leaner, more metrical package.
The crunch question is, of course, how should research capacity funding (as opposed to project or infrastructure funding) be distributed?
Selective funding of some kind is now inevitable. And it is equitable insofar as it is distributed fairly to cover the costs of all valid research undertaken, without detriment to other dimensions of university activity, and in ways that sustain opportunities for individuals and institutions to realise the contributions to education they can best make. The problems come when systems encourage some behaviours at the expense of others, and when they are manipulated so that they no longer reasonably represent the value of the goods produced.
If selective funding is inevitable, no doubt so is a performance evaluation mechanism of some kind. However, naïve as it may seem to resort to a familiar mantra, no such framework can be the magic bullet to make up for chronic underfunding. Australia is the only country in the OECD where public funding of universities has fallen as a share of GDP over the last decade. General taxpayer funding is now the lowest of all OECD countries as a share of public university revenue, standing at 42 per cent in 2006 (Universities Australia, 5). Any new system needs to acknowledge that Australian academics currently perform extremely well in research relative to resources. It is hard to see where the slack is to be taken up, with academics already working at full tilt (see National Tertiary Education Union). There would be huge risks entailed in bringing in a new untested system that will introduce wholesale change in an attempt to squeeze a bit more out for the same money. Improvements will not come without funding increases, and if they appear to, it will be a false economy in which very specific improvements generate unacceptable trade-offs in other areas.
It may be that the changes are designed to crack the high-end quality nut alone, though you are unlikely to get a politician to admit right out that the intent of the ERA is to divert money back from the new universities to the old, or to force a division between teaching and research functions in the sector. Of course, although the Group of Eight institutions still win the lion's share of funding by dint of the grants component of the IGS, they are not wealthy enough to achieve the very highest international standards judged by research performance indicators. However, the inclusion of the new universities in selective research funding after Dawkins is not the cause of the Go8's relative poverty, even though it has provided modest but significant means to sustain research in the mainstream. The cause is government's decision not to support all research and teaching activity appropriately.
Given that Australia performs so well across the board in research (as indicated by the high numbers of Australian institutions in the top 100 of the THES rankings, and the publication outputs recognised by Butler), it would be a tragic mistake to introduce a system that boosts the highest end at the expense of the rest. Incremental changes to the IGS could be a platform to build upon what has already been achieved, designing backwards from recognition of the kinds of support that actually make research better. For instance, rather than urging all to scramble constantly for the few available slots in the top journals by making them worth dramatically more than 'normal' academic publications, equal payments for all academic publications could be maintained, but with a single 'A' band covering the top 20% of outlets also introduced. Publications in them could earn a modest supplement over the norm, rewarded by real-terms increases in funding, not a diversion of support away from the rest. This may well be enough to encourage all Australian researchers to place with 'top' journals when they see fit, without distorting their research habits in major ways or creating a divided sector of stars and grunts by operationalising perverse definitions of success and failure.
Insofar as performance management systems are structured along the lines of markets, they should aim to reproduce fair markets, not free ones, or ones rigged by price-fixing that undervalues academic production. Markets are prone to externalise failures to deliver the social good because of their brutal pursuit of the bottom line. Profit is not what public sector institutions are for, nor are they for endless maximisation of values that act as proxies for profit. They deliver complex, holistic services and need realistic support and security to do so well.
Not all selective support systems are the same. Instead of importing a refraction of the British class system dressed up in meritocratic neoliberal window dressing, the new Australian government also has the option to avoid sweating its human assets any further through ever more elaborate performance management. By design, it can maintain a broad inclusive culture of innovation for the long term, even as it provides additional support for the 'most excellent'. It can avoid mechanisms that generate the unintended hidden costs that come when individuals and institutions have to play elaborate high-stakes games to win basic entitlements. Instead it can acknowledge the excellent productivity of Australian academics, maintain and refine a simple and relatively fair performance system, and build on it with funding levels that reflect the real value of academic production.