
EVALUATING THE RESEARCH OUTPUT OF AUSTRALIAN UNIVERSITIES' ECONOMICS DEPARTMENTS*

RICHARD POMFRET and LIANG CHOON WANG
University of Adelaide

This paper presents measures of the research output of Australian economics departments. Our study covers the 640 academic staff at rank Lecturer and above in the 27 Australian universities with economics departments containing eight or more staff in April 2002. We construct publication measures based on journal articles, which can be compared with weighted publication measures and with citation measures. Our aim is to identify the robustness of rankings to the choice of method, as well as to highlight differences in the focus of departments' research output. A striking feature of our measures is that the majority of economists in Australian university departments have done no research that has been published in a fairly long list of refereed journals over the last dozen years. They may publish in other outlets, but in any event their work is rarely cited. Thus, average research output is low because many academic economists in Australia do not view research as part of their job or, at least, suffer no penalty from failing to produce substantive evidence of research activity.

I. Introduction

This paper presents estimates of the output of Australian economics departments in 2002 on the basis of published research. A large literature on the ranking of university economics departments has developed over the last half century.1 Some of this literature continues a longer tradition of rankings based on the perceptions of senior academics or university leaders, but most of the recent literature has aimed at more objective measures of research output.2 In Australia such exercises assumed added importance in the 1990s as the government adopted several vintages of measures of research output, which were used to allocate research-associated funds.
Our study covers the research output of the 640 academic staff at rank Lecturer and above in the 27 Australian universities with economics departments containing eight or more staff in April 2002 (Table I). There is no best way of ascribing cardinal measures to individuals' research output, and the approach taken here is to provide rankings based on several methods. Some methods are inferior to others, but all methods involve biases. In the fundamental divide between counting publications and counting citations, the former approach favours more recent work and the latter favours more established scholars. One goal of the paper is to identify

* We are grateful to Ian McLean, John Siegfried, John Wilson and three anonymous referees for helpful comments. An earlier version of the paper was presented at the 2002 Australian Conference of Economists in Glenelg, South Australia. The dataset for the study and a longer version of the paper are available at: www.economics.adelaide.edu.au/research/rankings
Correspondence: Richard Pomfret, School of Economics, University of Adelaide, South Australia 5005, Australia. Telephone: +61-8-8303-4751; Facsimile: +61-8-8223-1460; Email: richard.pomfret@adelaide.edu.au
1 The ranking by Fusfeld (1956), based on presentations at American Economic Association conferences in 1950-4, is often identified as the forerunner.
2 No measure is, of course, completely objective. As will be clear from the following literature survey, the choice of the objective measure is itself subjective.

2003 EVALUATING THE RESEARCH OUTPUT 419

Table I
Departments included in the sample: staff by level

University        E   D   C   B  Total  Name of department(s)
Adelaide          3   3   7   6   19    School of Economics
ADFA              1   1   5   8   15    Economics & Management
ANU               4   5   6   5   20    School of Economics
Canberra          1   0   5   7   13    Economics & Marketing
Curtin            3   8   5  11   27    Economics & Finance
Deakin            2   0   5   7   14    School of Economics
Edith Cowan       1   1   1  13   16    Finance & Business Economics
Flinders          2   2   4   6   14    School of Economics
Griffith          1   3   3   3   10    School of Economics
LaTrobe           2   4   7   6   19    Economics & Finance
Macquarie         4   1   9   8   22    Economics
Melbourne        11   7  21   -   39    Economics
Monash            6  13  35   -   54    Economics & Econometrics
Murdoch           1   2   5   2   10    Economics
Newcastle         1   4   3   3   11    School of Policy
New England       6   7   7   4   24    Economics
Queensland        4  11  14   9   38    Economics
QUT               2   2   3  14   21    Economics & Finance
RMIT              2   3   8  20   33    Economics & Finance
Sydney            4  10  10   7   31    Econ, Econometr & Econ Hist
Tasmania          1   1   4   2    8    Economics
UNSW              5  13  12   8   38    Economics
UTS               4   6  12  12   34    Finance & Economics
UWA               3   1   8   2   14    Economics
Victoria          1   2  14  26   43    Applied Economics
Western Sydney    4   3  10  18   35    Economics & Finance
Wollongong        2   6   5   5   18    Economics
Total                            640

Source: University websites as of April 2002.
Note: Level E is Professor, D is Associate Professor or Reader, C is Senior Lecturer and B is Lecturer.

how and to what extent the choice of measure affects the ranking of departments. Beyond this, we aim to analyse the research output of Australian-based university economists and to establish proximate explanations for their low output relative to economists in other countries' universities. The next section of this paper discusses the methodological alternatives, drawing on the international literature. The Australian literature is reviewed in Section III. Our data are presented in Section IV, and the results in Section V, where we focus on the sensitivity of departmental rankings to the choice of method and on the distribution of individual research output.
Our main conclusions are that, although rankings are sensitive to method and time period, the groups of high and low research-output departments are fairly stable over both, and that the distribution of research output is extremely skewed. In most departments a few high producers account for much of the research output. The majority of academic economists in Australia do not publish, and their work is rarely cited.

II. Methods

Perception-based rankings of universities have existed for centuries. Recent examples include the National Research Council report on US university departments and the survey of 81 full

420 AUSTRALIAN ECONOMIC PAPERS DECEMBER

professors in Australian universities by Anderson and Blandy (1992). The results of such surveys provide a guide to reputation. Some of these exercises are held in high esteem when the peer reviewers are recognized as valid judges (e.g. the US NRC report or the UK research evaluations), but they are not readily adapted to cross-country comparisons. In some countries, including Australia, perception-based rankings are not held in high esteem because of their subjectivity. The surveyed academics or administrators do not know all departments well, and their responses are likely to reflect prejudgments as much as assessments of current standing. Perception-based rankings may cover the full range of university departments' activities, but these are often conflated so that it is unclear how, say, teaching and research are weighted. The approaches described in the remainder of this section focus specifically on scholarship contained in published research.

The starting point for measuring research output is publications. In days of yore ideas spread by word of mouth, and a lecture, seminar or even informal discussion could have a large impact within the circle of academic economists, but today unpublished research is unlikely to reach a sufficiently wide audience to be influential. The immediate issue is how to define a publication. Practically all publication-based rankings go no further than journals and books, with US studies being more likely to focus only on refereed journals.3 In all cases a key issue is quality. The quality issue can be resolved by counting only articles in the leading journals, but that involves a decision about which are the leading journals and whether they should receive equal weight. One cut-off is to include only core journals.4 The weights can be impact weights (based on citations to articles in each journal) or perception-based.5
The fundamental assumption of this approach is that within-journal quality variation is less than across-journal quality variation. This is plausible if the gap between two journals' rankings is large, but less plausible when the gap is small, and there will be similar-quality journals above and below any cut-off line. Major articles have appeared in field journals rather than core general journals, and any top journal includes some below-par accepted papers.6

3 The focus on publications alone dates back, at least, to Siegfried (1972) and to Lovell (1973), who treated journal publications as representative of all publications. Stigler (1982, 177) provides evidence against this assumption's validity at the time of Lovell's study, and it seems unlikely to be true today. This is a different argument from the claim that significant economic research contributions only appear in refereed journals. In 1991, journal publications accounted for 33 per cent of the research output, measured by items, of economists in Australian universities; unpublished monographs and reports 20 per cent; book chapters 14 per cent; unpublished conference papers 8 per cent; books 6 per cent; published conference papers 6 per cent; published monographs and reports 5 per cent; edited books 2 per cent; and other categories 7 per cent (Hill and Murphy, 1994, 42).
4 Conroy et al. (1995) define the core journals as the American Economic Review, Econometrica, International Economic Review, Journal of Economic Theory, Journal of Political Economy, Quarterly Journal of Economics, Review of Economics and Statistics, and Review of Economic Studies. Inclusion of JET, the IER and REStuds means that this list favours pure theory; JET in particular is a journal which splits economists in their opinions as to its value.
5 Laband and Piette (1994a) rank journals by impact weights, and Masson, Steagall and Fabritius (1997) provide a comparative perception-based ranking.
Any attempt to rank journals by quality faces the problem that journal quality may change over time, a problem which is more severe for studies covering longer periods. Laband and Piette (1994a, Table 2) rank the Review of Economics and Statistics as the fifth most influential journal in 1970 but twenty-ninth in 1990, and the Economic Journal as twelfth in 1970 and twenty-eighth in 1990, implying that publishing in these journals was a greater achievement in 1970 than twenty years later. In the more commercial journal-publishing environment of recent years, a high reputation has sometimes been followed by increases in size (pages per year) or in submission fees, which may lead to reduced average quality. Nevertheless, Coupé (2002, Appendix A4) finds high rank correlation coefficients for a number of alternative periods and ranking criteria, including the Laband-Piette ranking which we use.

6 Laband and Piette (1994b) study citations to articles published in 1984 in 28 leading economics journals, and find large standard deviations, which they explain by editors' active search for good articles (so that some top-quality articles are acquired by second-tier or specialised journals) and by either favouritism or chance (so that some articles place in journals above their merit).
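The two ideas above, an impact weight derived from citations and a rank correlation to test the stability of journal rankings across periods, can be sketched as follows. All numbers are hypothetical illustrations, not figures from Laband-Piette or Coupé:

```python
# Illustrative sketch (hypothetical data): a journal's impact weight as
# citations per article, and a Spearman rank correlation to check how
# stable journal rankings are across two periods.

def impact_weight(citations, articles):
    """Citations received per article published: a simple impact weight."""
    return citations / articles

def spearman(xs, ys):
    """Spearman rank correlation for two equal-length lists with no ties."""
    n = len(xs)
    rank = lambda v: sorted(range(n), key=lambda i: v[i])
    rx, ry = [0] * n, [0] * n
    for r, i in enumerate(rank(xs)):
        rx[i] = r                      # rank of observation i in xs
    for r, i in enumerate(rank(ys)):
        ry[i] = r                      # rank of observation i in ys
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical impact weights for four journals in two periods.
w1970 = [9.0, 7.5, 4.0, 2.0]
w1990 = [8.0, 3.0, 5.0, 1.0]
print(spearman(w1970, w1990))  # close to 1 would mean stable rankings
```

On these illustrative numbers the coefficient is 0.8: the two middle journals swap places, but the ordering is otherwise preserved.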

Some studies weight by page numbers, adjusted to account for the larger page sizes of some journals than of others, in order to separate major contributions from comments or minor model refinements. In practice, length is a crude and faulty weighting procedure. Some of the major contributions to economics have been brief and succinct, including Nobel-worthy work (e.g. by John Nash, or Paul Samuelson's 'The Pure Theory of Public Expenditure' in the Review of Economics and Statistics, 1954). One of the most influential Australian-content papers of the last three decades, the Kemp-Wan proposition, occupied three pages in a field journal.

Finally, a major problem in ranking by publications is how to deal with anything other than refereed journals. Most studies ignore all other categories, and yet significant contributions have appeared in conference volumes and other collections. Even more obviously, books are still significant vehicles for disseminating research output, especially for empirical studies requiring extensive documentation and even for some important sustained theoretical arguments (e.g. Nobel-winning work by Amartya Sen). If books are included, the problem is quality control because, even more than with top journals, the top academic book publishers have great quality variance within their lists.

The impact of a researcher's output can be captured by citation counts. Laband and Piette (1994a, 641) describe citations as the scientific community's version of dollar voting by consumers for goods and services. One difference is that, whereas consumers will not willingly buy a dud product, an author may refer to a poor paper as an example of how not to address an economics problem.
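The size-adjusted page weighting described above can be sketched as follows. The journal names and scaling factors are hypothetical; actual studies derive the factors from measured words per page relative to a reference journal:

```python
# Sketch of size-adjusted page weighting (all numbers hypothetical): raw
# page counts are scaled by a journal-specific factor before being
# credited to an author or department.

PAGE_FACTOR = {               # hypothetical words-per-page ratios
    "AER": 1.00,              # reference journal
    "Economic Record": 0.85,  # smaller pages -> scaled down
    "JPE": 1.10,              # denser pages -> scaled up
}

def adjusted_pages(articles):
    """articles: list of (journal, raw_pages) tuples; returns the
    size-adjusted page total credited to the author."""
    return sum(PAGE_FACTOR[journal] * pages for journal, pages in articles)

print(adjusted_pages([("AER", 10), ("Economic Record", 20), ("JPE", 3)]))
```

The sketch also makes the text's objection concrete: a three-page note in a field journal (like the Kemp-Wan proposition) earns almost nothing under this scheme regardless of its influence.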
There are also different categories of positive citations, and some writers have suggested excluding categories such as self-citations, citations from articles published in minor journals, and citations from other fields, or weighting citations according to whether the cited article is referenced several times or is a single irrelevant reference in an obscure footnote (Diamond, 1986, 207).7 Some survey articles are frequently cited, especially when they attain locus classicus status and a bandwagon effect ensues, but many economists would consider survey articles not to be real research unless they contain some novel results.8 Addressing these issues is, of course, operationally difficult and involves problems of defining minor journals or obscure footnotes or original surveys.9 Nevertheless, citing is non-trivial because there are costs to citing (e.g. locating the reference, and the risk of being accused of misrepresentation) and it carries information.10 The advantages over publication counts are that citations

7 These issues are discussed in the essays in Part IV of Stigler (1982). Stigler illustrates the difficulty of distinguishing between favourable and unfavourable references by quoting John Hicks's review of a book by Don Patinkin: 'The main things I have learned [from the book] are not what the author meant to teach me'. Stigler (1982, 203) comments that 'It is not complimentary to be told that one did not understand his own message; it is complimentary for an economist to be able to teach Sir John anything.' Citation counts have a longer history in the natural sciences, but Stigler's essays, co-authored with Claire Friedland and originally published in the 1970s, and Eagly (1975) were influential early uses in economics. Posner (1999, 4-9) discusses pure and impure motives for citing.
8 Citations of edited volumes may also be misleading if the editors contributed only a small part of the research contained in the book.
Although some credit should go to editors who assemble a collection worth citing, an ideal citation count would give credit to the other contributors too.
9 There may also be a field bias; citation counts and impact weights tend to favour econometrics and finance journals because of their interlinked and frequent citations, whereas economic history articles cite historical sources relatively often and the work of other economic historians relatively infrequently. Blockbuster citees usually involve an econometric method; of the twenty most-cited articles published between 1975 and 2000, ten were in Econometrica, two in JASA and one in OBES (Coupé, 2002, Appendix A10).
10 The seriousness with which citations analysis is taken in the USA is reflected in its use in legal cases. Posner (1999, 19n) lists four cases in which academics have claimed to have been discriminated against by their university, and citations analysis was used in the court proceedings to help determine whether the alleged discrimination was invidious or based on lack of scholarly distinction.
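The filtering choices discussed above (dropping self-citations and citations from a designated set of minor journals) can be sketched as follows. The record format, journal names and author names are our hypothetical illustration:

```python
# Minimal sketch (hypothetical record format) of filtered citation
# counting: exclude self-citations and citations appearing in a
# designated set of minor journals before counting.

MINOR_JOURNALS = {"Obscure Review"}    # illustrative cut-off set

def citation_count(author, citations):
    """citations: list of dicts with 'citing_authors' and 'journal' keys.
    Returns the number of citations surviving both exclusions."""
    count = 0
    for c in citations:
        if author in c["citing_authors"]:   # exclude self-citations
            continue
        if c["journal"] in MINOR_JOURNALS:  # exclude minor journals
            continue
        count += 1
    return count

record = [
    {"citing_authors": ["Smith"], "journal": "AER"},
    {"citing_authors": ["Jones", "Smith"], "journal": "Economic Record"},
    {"citing_authors": ["Jones"], "journal": "Obscure Review"},
]
print(citation_count("Jones", record))  # → 1: a self-citation and a
                                        # minor-journal citation excluded
```

The sketch shows where the operational difficulty noted in the text enters: everything depends on how MINOR_JOURNALS is defined, which is exactly the subjective decision the authors flag.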

acknowledge non-journal publications, and that citing implies that research had some impact, which is interpreted as a measure of quality.11 The construction of citation counts has been facilitated by the existence of the Social Science Citation Index (SSCI) maintained by the Institute for Scientific Information, although reliance on a single dominant source has operational drawbacks.12 In particular, early versions of the SSCI listed citations only by first author, which involved an alphabetic bias unless researchers were willing to carefully follow up citations to non-first authors of co-authored publications. There were also inconsistencies in listing authors by initial or first name, some misspellings, and the problem of distinguishing between citations to identically named authors. The most recent electronic versions list only by initials, but allow for relatively easy cross-checking, and identify all authors, not just the first-named.13 The coverage changes frequently; the version available at our university at the time of the research lists only citations in journals published since 1995, although these include citations to earlier journal articles and to books and other publications.

In choosing between citations and publications as the basis for ranking methods, we should consider both the practical difficulties, which tend to be greater with citations, and the conceptual difficulties, which tend to be greater with publications. There is also a time dimension insofar as citation counts will favour departments stacked with established scholars, while publications capture more recent research.14

Every ranking method has its strengths and weaknesses and sources of bias. In a meta-analysis of five US studies,15 Feinberg (1998) found a clear pattern of the authors' institutions always ranking much higher in their own study than in any other.
While this bias may be driven by self-interest, it may be more insidious in that different departments place different weight on theory versus applied work, core economics versus fields, books versus articles, policy-relevant versus esoteric research, and so forth, and the preferred ranking method may reflect the ethos of a researcher who chooses to work in a particular style of department.16 Whichever explanation one accepts of home-institution bias, the implication is that any single ranking method is unlikely to produce a generally acceptable ranking of departments.

Finally, it should be noted that all ranking systems are sensitive to decisions about whom to include in each department, the time period covered and other such choices. Thursby (2000)

11 One referee took strong exception to the use of citation counts and, especially, to any claim that they measure quality. Citation counts are widely used, and we agree with the rationale provided by Laband and Piette that, with all the caveats, more citations imply greater value. Publication counts, even when weighted by journal quality, are also inherently flawed due to intra-journal quality variance.
12 The SSCI was developed in 1969. Smyth (1999, 122-3), who works with citations in five Australian journals rather than with the SSCI, discusses previous sources of citation counts.
13 The entries are, however, only as good as the original reference source, which might be to Abbot et al. rather than to named co-authors. It is difficult to evaluate the completeness or accuracy of the ISI website, but a check by a well-published but alphabetically disadvantaged Adjunct Professor at Adelaide University found many cases of his work being cited under co-authors' names but not his own.
14 This difference will be diminished as the number of years covered by publications increases, and it is also reduced by the methodology of the current SSCI, which only reports citations in journals published since 1995.
Another time dimension is that journal articles have become longer (Ellison, 2002), so that any page-weighted measure will favour more recent publications.
15 Berger and Scott (1990) from the University of Kentucky, Conroy et al. (1995) from the University of Texas, Scott and Mitias (1996) from Louisiana State University, Tremblay, Tremblay and Lee (1990) from Kansas State University, and Tschirhart (1989) from the University of Wyoming. Dusansky and Vernon (1998) is an update of the Conroy et al. study.
16 Stigler (1982, 209-12) observed a similar home-institution bias in the citation practices of PhDs from five leading US graduate schools. He constructed a parochialism index which showed disproportionate citing of faculty from their own school, especially by Harvard PhDs.
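A parochialism index of the kind Stigler constructed can be sketched as follows. The construction and all numbers here are our own illustration of the idea, not Stigler's exact formula:

```python
# Illustrative sketch of a parochialism index (our construction, not
# Stigler's exact formula): the share of a PhD cohort's citations going
# to its own school's faculty, divided by that school's share of all
# cited faculty. Values above 1 indicate disproportionate self-citing.

def parochialism(own_citations, total_citations, own_faculty, total_faculty):
    """(own-citation share) / (own-faculty share); >1 means the cohort
    cites its own school more often than its size alone would predict."""
    return (own_citations * total_faculty) / (total_citations * own_faculty)

# Hypothetical cohort: 30 of 100 citations go to own faculty, who make
# up only 10% of all cited faculty.
print(parochialism(30, 100, 10, 100))  # → 3.0: cites home school
                                       # three times too often
```

Keeping the arithmetic as one integer product per side before dividing avoids the rounding noise of chaining two floating-point divisions.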

has emphasised the dangers of placing too much weight on minor ranking differences, and he uses the National Research Council measures to develop a test of distance between departments in the USA. Significance tests are, however, conceptually difficult when we are dealing with the population of defined publications or citations, rather than a sample. Despite all their flaws, in practice published rankings continue to be treated seriously.

III. Australian Literature

Economists at Australian universities as a whole perform rather poorly in publishing. Kocher et al. (2001) report that Australian-affiliated authors published a total of 66 articles in the top ten impact-weighted journals in even years from 1980 to 1998, while researchers based in other English-speaking university systems produced 5963 (USA), 379 (UK), 364 (Canada), 185 (Israel), 21 (New Zealand) and 9 (Ireland).17 When adjusted for population or number of universities, Australia lags far behind all of these countries except Ireland, and even behind non-English-speaking countries such as Belgium, Sweden and Switzerland.18 In a large ranking exercise funded by the European Economic Association, only five Australian universities ranked in the top 200 economics departments, whether by publications in economics journals or by citations of articles in economics journals (Coupé, 2002).19

Jonson and Brodie (1980) analysed the publication record of Australian economists on the basis of the bibliographies of the five surveys in Fred Gruen (ed.), Surveys of Australian Economics, vol. 1 (George Allen & Unwin, Sydney, 1978).
The criterion for inclusion is subjective: expert evaluation of whether an item is worthy of inclusion or not, with no weighting of

17 The impact factor is determined by citations and, excluding the Economist, the top ten are the Journal of Economic Literature, Journal of Financial Economics, Brookings Papers on Economic Activity, Journal of Political Economy, Econometrica, Quarterly Journal of Economics, Journal of Law and Economics, American Economic Review, Journal of Monetary Economics, and Review of Economic Studies. The list is open to criticism; for example, the Journal of Financial Economics was the most cited journal in the world over the period 1977-82, but over a fifth of total citations to the JFE were to a single article published in 1976 (Smyth, 1999, 122), and over a different time period the rankings would have been different. Note the significant differences in inclusion and ranking between this list and the Conroy et al. definition of core journals.
18 The reasons for Australian-based economists' relatively poor research performance are unclear. Ireland appears to have suffered from mobility, with some of the most productive researchers having emigrated. Australia also suffered from the emigration of productive researchers such as Max Corden and Stephen Turnovsky to the USA in the 1980s, but emigration alone cannot explain Australia lagging behind Canada and New Zealand. Fox and Milbourne (1999) identify time as a crucial determinant of research productivity in Australian economics departments. Bhattacharya and Smyth (2002), based on a survey of full professors in Australian economics departments, find that output measured by either publications or citations is positively correlated with time devoted to research and negatively correlated with time spent on other duties.
McCormick and Meiners (1998) show that the research output of economics departments is lower in universities which give a greater role to academic staff in university management, suggesting that the heavy administrative and committee burdens in the Australian system may be distracting academics from research. In a non-anglophone setting, Combes and Linnemer (2001) raise similar concerns about the low research productivity of academic economists in France.
19 By 1990-2000 publications ANU ranked 53rd, UNSW 79th, Melbourne 109th, Monash 125th, and Sydney 145th, while measured by citations the rankings were lower, with Sydney dropping out of the top 200 and UWA appearing. For comparison, Canada, with a fifty per cent larger population than Australia, has 15 universities in the top 200 by publications and 14 by citations. Only four Australian-based economists made the top 1000 individual publishers and only two made the top 1000 citees. Coupé's sources, which rely heavily on Econlit, and his method, which only counts citations to and in articles listed in both Econlit and the SSCI, may be criticised, but it is not obvious that any net bias works against Australian-based economists.

424 AUSTRALIAN ECONOMIC PAPERS DECEMBER publications by quality or length. Of the 664 items in the bibliographies, 289.5 were by academics at Australian universities. The references are concentrated; the seven universities which accrue more than ten items apiece (ANU 65, Melbourne 37.5, Monash 37, Sydney 34, Adelaide 25.5, UNSW 22.5 and New England 21.5) account for over four fifths of the Australian university entries. The Jonson-Brodie ranking suffers from serious selection bias. The top seven departments excelled in different fields; Adelaide in monetary economics, Melbourne in wages policy, UNSW in inflation, UNE and Sydney in agricultural policy, and ANU and Monash in protection policy. Failure to count the second and third volumes of the Surveys discriminated against departments with strengths in industrial organisation, income distribution and so forth. Even allowing for field bias there is a suspicion of parochial bias; the monetary policy survey was by Adelaide economists, the wages policy survey by Melbourne economists, and the protection policy survey by an ANU economist. Harris (1988) undertook the first major attempt to quantify research output as the basis for ranking Australian universities economics departments. Harris included staff of lecturer and above in eighteen teaching departments. 20 He does not use citations because With a small number of exceptions, Australian academic economists, and therefore their departments, are quite inadequately represented in the SSCI, mainly because Australian academics tend to publish in journals which are not widely cited in other journals (Harris, 1988, 103 4). Instead he counts publications over the period 1974 83, which are divided into eight categories of journal articles and books determined by (fairly subjectively evaluated) quality, for which the weights range from 35 points for a research book to one point for articles not included in other categories (e.g. 
articles written for secondary school students).21 The results identify three outstanding departments in terms of research output per head (ANU, ADFA and Newcastle), and otherwise "not much variation between departments" (Harris, 1988, 107).22 This conclusion illustrates the distinction between basing conclusions on per capita or total output, because the Australian Defence Force Academy, with fewer than five staff members, ranked sixteenth out of eighteen by total points (Table II). By total points, Harris's rankings were ANU, Newcastle, Queensland, LaTrobe, Sydney, UNSW, Monash, Macquarie, Melbourne, Adelaide and UWA, followed by a substantial point gap before UNE and Flinders.

Harris (1990a) updated his results to cover publications in 1984–8. For this period, "In terms of total points earned, Newcastle, Melbourne, Adelaide and the ANU were the largest producers" and "The leading five departments in terms of publication points per head were Newcastle, ANU, Tasmania, Adelaide and University College (ADFA)" (Harris, 1990a, 251).23 Harris also provided data on the top twelve publishers in 1984–8, which showed one academic to be far ahead of anybody else. In two departments, a single academic accounted for over half of the publication points.

Harris (1990a; 1990b) provides the only published citation-count analysis of Australian-based economists, although his discussion is preceded by a list of caveats and he only covers

20. Unlike most other studies, Harris assigns a publication to the institution with which the author was affiliated at the time the publication was written. Most studies, including ours, assign publications to the institution with which the author is affiliated at the end of the period studied.
21. Category one journals (counting ten points each) include 12 first-rank general journals, plus the Economic Record and Australian Economic Papers, together with 25 first-rank specialist journals, whereas category two (counting six points) includes some 50 second-rank journals, but the criteria for distinguishing between first and second rank are not set out.

22. Macquarie, LaTrobe, UNSW and Sydney were all bunched on 47–48 points per head, eleven points behind Newcastle and four ahead of eighth-place Melbourne.

23. In Harris's table Macquarie is second in the total points ranking, and appears to have been omitted from this sentence by error.
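As a rough illustration of how a Harris-style points scheme translates publication records into totals and per-head scores, here is a minimal sketch. The category weights are those reported in the text (35 for a research book, ten for a category-one article, six for category two, one for the residual category); the department's publication counts, staff size and function names are invented for illustration only.

```python
# Illustrative Harris-style weighted publication count.
# Weights follow the text; all data below are hypothetical.
WEIGHTS = {"research_book": 35, "category_one": 10, "category_two": 6, "other": 1}

def department_points(pub_counts, staff):
    """Return (total points, points per staff member) for one department."""
    total = sum(WEIGHTS[cat] * n for cat, n in pub_counts.items())
    return total, total / staff

# A small hypothetical department: one research book, two category-one
# articles, five category-two articles, ten residual items, five staff.
total, per_head = department_points(
    {"research_book": 1, "category_one": 2, "category_two": 5, "other": 10},
    staff=5,
)
print(total, per_head)  # 95 19.0
```

The example makes the ADFA point concrete: a tiny department can rank highly per head while its total points remain near the bottom of the table.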

2003 EVALUATING THE RESEARCH OUTPUT 425

Table II
Harris rankings for 1974–83 and 1984–8

                Publication points 1974–83      Publication points 1984–8
                Total       Per staff member    Total       Per staff member
Adelaide         841 (10)    37.2 (16)           519 (4)     23.6 (4)
ADFA             287 (16)    62.4 (2)            138 b       23.0 b
ANU             1404 (1)     88.3 (1)            513 (5)     30.1 (2)
Flinders         470 (13)    41.6 (10)           149 b        9.9 b
James Cook       316 (15)    40.5 (11)            60 (14)     7.4 (14)
LaTrobe         1040 (4)     47.1 (5)            463 (9)     17.8 (11)
Macquarie        959 (8)     48.0 (4)            576 (2)     19.2 (9)
Melbourne        910 (9)     42.7 (8)            551 (3)     19.7 (7)
Monash           996 (7)     39.1 (13)           488 (7)     21.1 (5)
Murdoch           71 a       19.2 a               52 c       12.9 c
Newcastle       1221 (2)     58.7 (3)            646 (1)     35.5 (1)
New England      530 (12)    42.4 (9)            276 (12)    20.9 (6)
Queensland      1101 (3)     37.4 (15)           490 (6)     18.0 (10)
Sydney          1003 (5)     46.7 (7)            386 (10)    15.3 (13)
Tasmania         365 (14)    36.1 (17)           272 (13)    25.6 (3)
UNSW            1002 (6)     46.8 (6)            483 (8)     19.5 (8)
UWA              772 (11)    39.4 (12)           286 (11)    17.2 (12)
Wollongong       278 (17)    38.6 (14)           173 d       14.0 d

Source: Harris (1988, 109) and Harris (1990a, 251).
Notes: Numbers in parentheses are rankings. a 1977–83. b 1984–6. c 1984–7. d 1986–8.

Table III
Citation measures for 1986–7

                Proportion of staff cited    Citations per staff member
Adelaide         59                           3.1 (5)
ADFA             50                           1.1 (15)
ANU              82                           6.0 (1)
Flinders         44                           2.3 (7=)
James Cook       12                           0.2 (17)
LaTrobe          47                           2.0 (9=)
Macquarie        48                           3.5 (3)
Melbourne        47                           3.4 (4)
Monash           43                           2.9 (6)
Murdoch           7                           0.1 (18)
Newcastle        30                           2.0 (9=)
New England      59                           1.4 (13)
Queensland       43                           2.3 (7=)
Sydney           54                           1.6 (12)
Tasmania         46                           1.8 (11)
UNSW             41                           1.3 (14)
UWA              33                           3.6 (2)
Wollongong       27                           0.8 (16)

Source: Harris (1990a, 255).
Notes: Numbers in parentheses are rankings.

two years of citations (1986 and 1987). The rankings suggest that ANU economists' research had the most impact, with UWA, Macquarie, Melbourne, Adelaide and Monash closely grouped in second to sixth places (Table III).
Although the small number of citations appears to reinforce Harris's earlier scepticism about the degree to which Australian economists' work achieves international recognition, it also reflects the limitation of covering only two years. His

listing of the twelve leading researchers by citations indicates substantial potential differences between prolixity and impact.24

Harris's work was pioneering, and his weighting system attempts to account for quality and covers a wide range of publications, rather than just journal articles. His weighting system does, however, have a large subjective element. He observes that the composition of points totals varies substantially among the top departments, with some winning points for first-rank journal articles and others gathering points mainly for books. It follows that the rankings are likely to be sensitive to the precise weighting system, and Harris has no clear justification; is a research book worth three and a half articles in first-rank journals and almost six second-rank journal articles, and should the weights be the same for all research books?

Anderson and Blandy (1992) identified 81 professors of economics, econometrics and economic history in Australia, of whom 65 per cent responded to their survey conducted by mail in 1992. The questions mainly concerned economists' ideas, but they also included a section on what the professors thought of Australian economics departments other than their own (which they tended to rate very highly).25 Departments were ranked on six criteria (undergraduate, honours and postgraduate education, research, contribution to public policy, and quality of faculty), as well as overall. ANU, Melbourne, UNSW and Monash led on all criteria, in that order overall, with Adelaide and Sydney following fifth and sixth overall. Apart from these six, only Macquarie, Flinders, LaTrobe and UWA received mention.26

The study by Towe and Wright (1995) has been influential as the only publication-based ranking published in the decade after Harris's work. Their rankings are based on the number of pages published in journals during the 1988–93 period.
They divide the 332 journals listed in the Journal of Economic Literature into four tiers, and for the 71 journals in the top three tiers they weight articles by their AER-equivalent length (Table IV).27 They cover the academic staff of twenty-three economics and five econometrics departments as of 1st March 1994, and find that "economics departmental rankings are similar over a broad range of journal groupings with Melbourne, Monash, Sydney, Tasmania, and ANU consistently ranked in the top third. The econometrics departments that consistently ranked highly under the same journal groupings were Monash and Sydney" (Towe and Wright, 1995, 9).

Sinha and Macri (2002) examine the 1988–2000 research output of academic staff at 27 teaching departments with at least eight staff at the rank of lecturer and above at the beginning of 2001 and in 1994. They include publications in some 400 journals listed in EconLit, weighting journals in two ways (perceptions and citation-based impact measures) and weighting articles by standardised page length.28 The authors suggest that the impact-based weighting is more

24. It also illustrated the importance of self-citations and of blockbuster articles. The economist who ranks first by total citations ranks third if self-citations are excluded. The most cited article in 1986–7, with 55 citations over the two years, would alone have put its author in the top five, whereas the leading author by publication measures garnered only just enough citations to make the top dozen.

25. Hometown bias was especially strong in Melbourne, in economics as in AFL. Clements and Wang (2001, 18) found a similar bias in the citation patterns of Australian PhD students.
26. This ranking is similar to the peer ranking from a less well-documented May/June 1987 survey of university economists reported in Harris (1990b), which ranked ANU first by a large margin, followed by Monash, Melbourne, UNSW and, after a gap, Sydney and Adelaide, and then another gap before Macquarie and Flinders.

27. Fox and Milbourne (1999) use a similar method to calculate the research output of Australian academic economists, although they are concerned with explaining research productivity rather than ranking performance. Harris and Kaine (1994) tackle similar issues using the Harris weighting.

28. The quality weightings are based on Masson, Steagall and Fabritius (1997) and Laband and Piette (1994a) respectively. The LP impact weights decline much more rapidly than the MSF perception weights, so that the former give a greater weight to publications in the top journals while the latter place relatively more weight on lesser journals.
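The idea behind Towe and Wright's "AER-equivalent length" can be illustrated with a small sketch: pages in each journal are rescaled by the ratio of that journal's words per page to the American Economic Review's, so dense journals count for more than their nominal page count. The words-per-page figures and the function name below are invented placeholders, not the actual conversion factors used in their study.

```python
# Sketch of AER-equivalent page counting. The benchmark and the
# journal density figure are hypothetical illustration values.
AER_WORDS_PER_PAGE = 600  # assumed benchmark, not Towe and Wright's figure

def aer_equivalent_pages(pages, journal_words_per_page):
    """Rescale a journal's pages to AER-equivalent pages."""
    return pages * journal_words_per_page / AER_WORDS_PER_PAGE

# A 20-page article in a denser journal (750 words per page)
# counts for more than its nominal length:
print(aer_equivalent_pages(20, 750))  # 25.0
```

The rescaling only standardises length; the tier weighting in Table IV is applied separately by restricting which journals are counted.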

Table IV
Towe and Wright rankings for 1988–93

               Number     Pages published per staff member
               of staff   Top tier only  Tiers 1&2   Tiers 1–3   Tiers 1–4
Adelaide        21         0.59 (9)       0.92 (12)   3.54 (16)   14.74 (15)
ANU             19         1.95 (1)       6.58 (1)    9.88 (5)    22.46 (9)
Bond             6         0.67 (7)       1.34 (10)   5.50 (12)   26.14 (5)
Curtin          19         0.23 (11)      0.39 (16)   2.16 (18)   12.92 (16)
Deakin          18         0              0.59 (13)   0.80 (21)    7.44 (20)
Flinders        14         0.21 (12)      0.21 (17)   4.33 (15)    9.47 (19)
Griffith         8         0              0           6.43 (9)    12.05 (17)
James Cook      10         0              0           0 (23)       0 (23)
LaTrobe         24         0.96 (6)       1.71 (9)    5.91 (10)   24.76 (8)
Macquarie       25         0              0.47 (14)   5.25 (13)   10.71 (18)
Melbourne       25         1.03 (4)       4.08 (2)   14.83 (1)    31.88 (2)
Monash          31         1.25 (2)       3.39 (3)    8.72 (6)    33.16 (1)
Murdoch          9         0              0           1.64 (19)   21.41 (11)
Newcastle       27         0              0           2.82 (17)   15.48 (14)
New England     13         0              0           7.52 (8)    28.71 (3)
Queensland      31         0.48 (10)      1.16 (11)   5.60 (11)   26.18 (4)
RMIT            20         0              0.44 (15)   1.26 (20)    2.74 (21)
Sydney          26         0.64 (8)       2.41 (5)   10.40 (4)    25.40 (7)
Tasmania        12         0              2.09 (6)   10.51 (3)    25.75 (6)
UNSW            30         1.10 (5)       2.47 (4)    7.61 (7)    22.36 (10)
UTS             27         0              0.06 (18)   0.36 (22)    1.04 (22)
UWA             17         1.06 (3)       1.93 (7)   11.32 (2)    20.49 (12)
Wollongong      16         0              1.85 (8)    4.89 (14)   17.48 (13)
Econometrics
ANU              5         1.71 (2)       1.71 (5)    5.17 (5)     8.97 (5)
Monash          11         1.43 (3)       9.23 (2)   12.47 (2)    19.53 (3)
New England      8         0.95 (4)       5.39 (3)    8.64 (3)    22.84 (1)
Sydney           8         3.81 (1)      10.44 (1)   13.99 (1)    22.24 (2)
UNSW             9         0.54 (5)       3.71 (4)    8.53 (4)    18.19 (4)

Source: Towe and Wright (1995).
Notes: Numbers in parentheses are rankings. The top tier consists of 12 journals, tier 2 of 23, and tier 3 of 36, so that tiers 1–3 include 71 journals. Tiers 1–4 include all 332 journals in the printed version of the Journal of Economic Literature.

commonly used, and by that measure ANU, UNSW, Melbourne, Sydney, UWA and Monash lead, with a big gap before next-placed Queensland (Table V). Adjusting for department size, the ranking is ANU, Sydney, UWA, UNSW, Melbourne, Monash and Griffith.
With perception-based weighting of journals the ranking is Melbourne, ANU, Queensland, UNSW, LaTrobe, Monash, UNE, UWA and Sydney, with a gap before next-ranked Adelaide. On a per capita basis, the ranking is Melbourne, UWA, ANU, LaTrobe, Sydney, Queensland, Tasmania, Monash, UNE, UNSW, Western Sydney, Adelaide and Murdoch. The Sinha-Macri rankings suggest a leading group of ANU, Melbourne and UNSW, followed by Sydney, UWA and Monash, but the rankings are affected by the choice of weighting (with Melbourne and Queensland ranking higher with perception weights), and rankings per capita obviously improve the position of smaller departments such as UWA, LaTrobe, Griffith or Tasmania. Sinha and Macri also report results for 1994–2000. The LP rankings produce the same top six departments based on total publications and one change when adjusted for size (Tasmania displacing Monash at number six), with some reordering in positions 2–6. With MSF weights the top six are also fairly stable, although UWA displaces Monash at number six on a total publications basis, and Tasmania displaces Sydney in the top six on a per capita basis (Table V).
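The sensitivity of a departmental ranking to the choice of journal weights (LP versus MSF) can be summarised by a rank correlation between the two orderings. A minimal sketch, using invented ranks for four hypothetical departments rather than Sinha and Macri's data:

```python
# Spearman rank correlation between two rankings of the same
# departments (no ties). All ranks below are hypothetical.
def spearman(rank_a, rank_b):
    """Spearman correlation from two {department: rank} mappings."""
    n = len(rank_a)
    d2 = sum((rank_a[dept] - rank_b[dept]) ** 2 for dept in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

lp  = {"A": 1, "B": 2, "C": 3, "D": 4}   # ranks under impact weights
msf = {"A": 2, "B": 1, "C": 3, "D": 4}   # ranks under perception weights
print(spearman(lp, msf))  # 0.8
```

A coefficient near one, as in this toy case, corresponds to the paper's observation that the top of the ranking is fairly robust to the weighting scheme, even when individual departments swap places.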

Table V
Sinha and Macri rankings for 1988–2000

                  Total                          Per capita
                  LP weights    MSF weights     LP weights   MSF weights
Adelaide            7.51 (13)     445.66 (10)    0.34 (13)    20.26 (12)
ADFA                0.25 (27)      85.42 (26)    0.02 (27)     5.69 (25)
ANU               144.77 (1)      959.78 (2)     6.03 (1)     39.99 (3)
Canberra            0.57 (25)      80.95 (27)    0.06 (24)     8.99 (20)
Curtin              3.92 (18)     342.67 (12)    0.14 (22)    12.24 (18)
Deakin              5.32 (17)     185.04 (18)    0.33 (15)    11.56 (19)
Edith Cowan         0.37 (26)      95.58 (25)    0.02 (26)     5.97 (24)
Flinders            3.03 (19)     111.94 (24)    0.20 (18)     7.46 (21)
Griffith           12.23 (8)      143.06 (22)    1.11 (7)     13.01 (17)
LaTrobe            10.52 (9)      670.88 (5)     0.53 (9)     33.54 (4)
Macquarie           8.20 (12)     368.34 (11)    0.37 (12)    16.74 (14)
Melbourne          77.24 (3)    1,851.24 (1)     2.03 (5)     48.72 (1)
Monash             34.15 (6)      577.36 (6)     1.42 (6)     24.06 (8)
Murdoch             2.87 (20)     230.22 (17)    0.24 (17)    19.18 (13)
Newcastle           1.62 (24)     163.82 (19)    0.16 (21)    16.38 (15)
New England        10.30 (10)     575.24 (7)     0.41 (10)    23.01 (9)
Queensland         13.87 (7)      949.54 (3)     0.41 (11)    27.93 (6)
QUT                 2.67 (21)     156.12 (21)    0.13 (23)     7.43 (22)
RMIT                7.05 (15)     232.15 (16)    0.20 (19)     6.45 (23)
Sydney             58.89 (4)      499.93 (9)     3.46 (2)     29.41 (5)
Tasmania            7.29 (14)     240.69 (15)    0.81 (8)     26.74 (7)
UNSW               85.05 (2)      828.69 (4)     2.24 (4)     21.81 (10)
UTS                10.02 (11)     158.00 (20)    0.29 (16)     4.51 (26)
UWA                38.79 (5)      564.67 (8)     2.77 (3)     40.33 (2)
Victoria            2.39 (22)     139.82 (23)    0.06 (25)     3.68 (27)
Western Sydney      2.04 (23)     254.55 (14)    0.17 (20)    21.21 (11)
Wollongong          6.53 (16)     259.77 (13)    0.34 (13)    13.67 (16)

Source: Sinha and Macri (2002, 142–3).
Notes: LP refers to the impact-based journal weights and MSF to the perceptions-based journal weights. Numbers in parentheses are rankings.

Thus, at least at the top end, these rankings are robust to the method of weighting journals and to the choice of time period.

A feature of the Australian literature, apart from its paucity, is the limited range of methods used.
Leaving aside Anderson and Blandy's perceptions-based study, the other studies focus on weighted publication counts. Harris tried to cover a wide range of publications, but his weighting system was idiosyncratic and the results apply to a quarter of a century ago. Both Towe and Wright and Sinha and Macri restrict their analysis to articles in refereed journals, and they adjust for quality by weighting the journals and by measuring each paper's length.29

It is surprising that only one Australian study has used citations as a basis for ranking individuals' research performance. The sole exception (Harris, 1990) clearly devoted little energy to compiling this measure, which only involved two years' worth of data. Yet Harris's claim that Australian economists are not cited in major journals is simply not true, at least at the professorial level. The list of the 1082 most frequently cited economists in Blaug (1999), based on references in about 200 SSCI-listed economics journals during 1984–96, includes 26 Australian-resident

29. The general approach of Sinha and Macri follows the method of Conroy et al. (1995) and Dusansky and Vernon (1998), which has also been adopted by Coupé (2002). "Pages just don't do it" is the comment by Griliches and Einav (1998, 211) on the pages-weighted approach of Dusansky and Vernon (1998), which ranked Pittsburgh above Chicago in the USA.

economists, although not all are still actively associated with university departments. The seventeen active ones are concentrated in the sandstone universities (Melbourne three; Adelaide, ANU, Monash, Sydney and UWA two each; Bond, Macquarie, Queensland and the AGSM one each), and the youngest was born in 1952.30

IV. Data

Our study covers the research activity of academic staff at the rank of lecturer and above in Australian economics departments with eight or more staff in the first semester of 2002 (Table I).31 Emeritus and adjunct staff are not included, and staff on leave are not excluded. Leave status poses a problem insofar as academics often continue to be listed in a department when they have not resigned but are in another position from which they may never return, and it is difficult to distinguish these situations from sabbatical or other short-term leave from which the academic will soon return.32

For most universities the economics department is well-defined,33 but some decisions have to be made about disciplinary boundaries. Where econometrics or economic history are in separate departments, we amalgamate them with economics. Finance and industrial relations are more difficult because they are sometimes in the economics department, sometimes free-standing departments, and sometimes integrated into other departments; where finance and industrial relations are separate departments, we do not amalgamate them with economics.34 The general rule of thumb was not to attempt to identify individual staff in such departments and count them as economists. Research institutes and staff primarily affiliated with them were not included; for ANU the economics department is that in the teaching faculty, and economists in the research schools are not included.35

30. This is an extreme example of the bias of citation counts in favour of older economists.
Blaug's count covers a long time period (1984–96) followed by a time lag before publication. An economist born after 1952 would normally be hitting their publication stride by the mid or late 1980s and not attracting citations until towards the end of the 1984–96 period.

31. The basic source for staff lists is departmental web pages as they stood in April 2002. Care was taken to remove the names of people who had recently left departments but were still listed on their websites. No adjustment was made for the less than full-time status of people listed as regular academic staff.

32. This matters because some of the academics on quasi-permanent leave are prolific publishers, and because some departments do not list staff on long-term leave.

33. For Newcastle, the School of Policy is treated as the economics department. For Sydney we include the three economics disciplines from the School of Economics and Political Science, but exclude the Politics and Political Economy disciplines and economists in the School of Business. For Melbourne we exclude the Household Research Unit and actuarial studies staff. For Curtin we exclude Property Studies.

34. This is subjective, but when finance and industrial relations departments are distinct they tend to contain institutional experts or others who would not normally be considered economists. The fundamental problem is that there is no generally accepted definition of an economist, so that it is difficult to identify economists outside economics departments (or to exclude non-economists who happen to be employed in economics departments).

35. This excludes some of the long-term most productive publishers in Australia, such as Bob Gregory and Adrian Pagan at ANU or Peter Dixon at Monash. Note that staff in research-only positions within teaching departments, e.g. ARC research fellows such as Steve Dowrick and John Quiggin at ANU, are included.
Harris (1989) assessed the 1974–86 research output of six research centres (the Centre for Economic Policy Research and the economics departments in the Research Schools of Pacific Studies and of Social Sciences (RSPS and RSSS) at the ANU, the Centre for Policy Studies at Monash, the Institute of Applied Economic and Social Research at Melbourne, and the National Institute of Labour Studies at Flinders) using the same weighting system as in Harris (1988). He concluded that although the research centres on average published more per staff member than teaching departments, this was less than proportionate to the extra time available for research, though he also adds that funding of the research schools differs, so that some (e.g. the Centre for Policy Studies) are constrained by the need to raise consulting money, which draws time away from academic research.

While our methods are those which have been applied to other countries' academic economists, there are pitfalls in applying the methods to Australia. Especially in a smaller university system, such as Australia's, a department's ranking can be hugely influenced by a single star. When the median academic produces zero publications in most years, the choice of dates can be critical in including or excluding a good year. Especially with measures such as publications in impact-weighted core journals, a single American Economic Review article can raise a department's ranking by many places.36 Moreover, some Australian economists work on domestic issues for which the appropriate outlets are Australian journals (and book publishers). Should the major Australian journals such as the Economic Record, Australian Economic Papers or the Australian Economic Review receive special weights (and what about policy journals such as Agenda, or field journals such as the Australian Economic History Review, the Australian Journal of Agricultural Economics or the Australian Journal of Agricultural and Resource Economics?), or does this penalise economists working on research projects without specific Australian content who publish in international journals of similar standing to the EcoReco or AEP?

In counting publications, we focus on journal articles, and our rule of thumb was to exclude comments, replies, obituaries and book reviews. For multi-authored articles, each author is ascribed 1/n credit, where n is the number of authors. Our base publication list is all articles published between 1990 and the end of 2001 in the top 88 journals as listed by Laband and Piette (1994a, Table A2, final column).37 To test whether there is an Australian-topic bias, we separately count publications in the six academic journals listed in the previous paragraph (not Agenda).

Our preferred source for publication lists is an individual's curriculum vitae downloaded from the university website. Sometimes these are out of date, and for many staff they are absent, although there is a (favourable) selection bias in that more active publishers are more likely to have their curricula vitae posted. We also used departmental reports when they were available, and in some cases of absent or incomplete vitae we followed up directly with the academic or department. An alternative source for publications was the Econlit database, but as far as possible we kept Econlit as a last resort, mainly using it as a cross-check against data from other sources.38

36. Advocates of the use of citation counts in the USA often argue that "it is important to realize that the existence of noise in data does not invalidate quantitative analysis. Critics of citations analysis often fail to note that if errors in data are randomly distributed with respect to the variable of impact... they are unlikely to invalidate the conclusion of the study, provided that the sample is large" (Posner, 1999, 12–3). In ranking Australian departments, errors, even if randomly distributed, could make a big difference in the relative positions of the few leading departments and an even larger difference in the rankings of the non-leading departments.

37. The 88th-ranked journal was the top-ranked Australian journal, the Economic Record. The list is similar to Towe and Wright's top three tiers of journals, which appears to be their preferred measure of quality journals. The LP ranking is based on citations to articles published 1985–89, and thus the list excludes new journals (e.g.
Health Economics, which began publication in 1991) and may include some whose quality has declined over the last decade. Any ranking is sensitive to method and dates, but Pieters and Baumgartner (2002) found a high correlation between their ranking of forty-two leading economics journals based on citations in 1995–7 and other citation-based rankings of journals, including the Laband-Piette ranking.

38. Econlit is easy to work with but full of pitfalls. Listings under initials and full names are not consistent for the same person, and publications from similarly named people are commingled. Transcription errors, e.g. one of Olan Henry's papers is listed under "Ulan Henry" and one of Bharat Hazari's under "Bharathazari", mean that a search does not capture misfiled entries. It is impossible to assess the number of such errors, but they appear to be many. Finally, the publication selection process is incomplete as soon as one moves beyond publications in the leading journals.
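The counting rules described in the Data section, that comments, replies, obituaries and book reviews are excluded and each of the n authors of an article receives 1/n credit, can be sketched as follows. The article records, author names and function name are hypothetical illustrations, not data from the study.

```python
# Sketch of the paper's counting rules: exclude non-article items and
# split each remaining article's credit equally among its authors.
# All records below are invented for illustration.
EXCLUDED_TYPES = {"comment", "reply", "obituary", "book review"}

def author_credits(articles):
    """Return {author: fractional publication credit} under the 1/n rule."""
    credits = {}
    for art in articles:
        if art["type"] in EXCLUDED_TYPES:
            continue  # comments, replies, etc. earn no credit
        share = 1 / len(art["authors"])
        for name in art["authors"]:
            credits[name] = credits.get(name, 0) + share
    return credits

articles = [
    {"type": "article", "authors": ["Smith", "Jones"]},  # 0.5 credit each
    {"type": "article", "authors": ["Smith"]},           # full credit
    {"type": "comment", "authors": ["Jones"]},           # excluded
]
print(author_credits(articles))  # {'Smith': 1.5, 'Jones': 0.5}
```

Summing these fractional credits over a department's staff, restricted to the 88-journal base list, would then yield the department-level counts the study works with.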