Universities have been criticized for being divorced from the real world and failing to contribute to the society that funds them. The primary culprit blamed is the “publish or perish” mentality that exclusively rewards professors for racking up publications, even if they are worthless or rarely read. This mentality is also blamed for the notion that some professors don’t care about teaching.
One of the many contributors to this perceived state is the accreditation system, which has been accused of diverting professors from teaching by focusing too intensively on research productivity. As a result, accreditation agencies (such as the Association to Advance Collegiate Schools of Business) have shifted to a mission-based accreditation system that allows universities to designate which types of intellectual contributions meet their mission beyond only refereed journal publications.
Despite the new standards, however, universities largely continue to count numbers of publications instead of giving credit for other forms of intellectual contributions. As summarized by W.A. Roberts et al., “For good or bad the emphasis on research remains [despite the new standards] … and teaching efforts give way to increased research efforts.” (“The faculty perspective on the impact of AACSB accreditation,” Academy of Educational Leadership Journal, 2005.)
I would suggest that this is due partly to a lack of understanding of the standards, partly to the difficulty of measuring alternative forms of intellectual contribution, and perhaps partly to simple inertia or laziness.
Business school example
I’ll use an example from the Association to Advance Collegiate Schools of Business (AACSB), which updated its Eligibility Procedures and Accreditation Standards for Business Accreditation in 2012.
The AACSB Standard 1 states: “The mission statement … must articulate the school’s focus relative to the production of intellectual contributions (i.e., discipline based scholarship, contributions to practice, and/or teaching/pedagogy scholarship).” It emphasizes: “The school’s mission determines the appropriate balance of activity among the three types of contribution.”
Standard 2 gives specific examples of allowable contributions beyond traditional discipline-based scholarship (peer-reviewed publications), including preparation of new materials for use in courses, creation of teaching aids, creation and delivery of executive education courses and development of discipline-based practice tools. “Generally, intellectual contributions should meet two tests: exist in public written form, and have been subject to scrutiny by academic peers or practitioners prior to publication,” state the primary guidelines from 2012.
So clearly, university administrators can give credit for far more than just the number of publications. But how?
I propose a system that would meet the guidelines while also recognizing real alternatives to published articles, including online and social media alternatives. I base my proposal on the following:
1) Administrators like to count numbers of publications because it is easy and unambiguous, so the proposed system must also result in a specific number that can stand up to inevitable grievances. The proposed system awards a number for each intellectual contribution as follows: Value = Mission Weighting × Scale Weighting × Quality/Impact Weighting.
2) Mission: The system must reward all three categories of intellectual contribution (discipline-based scholarship, contributions to practice, and teaching/pedagogy scholarship). The weighting factor given for each category will be based entirely on the university’s mission.
Currently, the weighting for discipline-based scholarship is 1 and all others are given a 0. The new system would give all three forms of intellectual contribution some value. For example, all three would get a 1 if all three have equal weight in the mission statement.
3) Scale: All published journal articles have approximately equivalent scale – all are around 5,000 to 7,000 words and exist in written form. So intellectual contributions of this scale would receive a weighting factor of 1. However, a book would receive a higher weighting, whereas a short article would get a lower one.
The guideline that intellectual contributions exist in written form seems like an anachronistic relic of the last millennium. Intellectual contributions will increasingly become multimedia works that may exist in other public archival forms. The proposed system would thus award points for non-traditional works based on the scale of the contribution (e.g., a book gets more points than a chapter, a software program gets more points than an iPhone app, and a movie gets more points than a video clip).
4) Quality/impact: Similarly, the guideline that contributions be reviewed prior to publication seems insular and antiquated. Why should the opinion of two anonymous academics be more important than 100,000 practitioners or a review by the New York Times? The review process will increasingly become public as professors contribute to the world around them and are subjected to scrutiny and comments by those who view their contributions. The issue is not when or by whom the scrutiny is provided, but whether there is some measure of quality and/or impact.
While traditional discipline-based scholarship can be weighted using traditional peer review, citation, and journal tier standards, we can weight the quality/impact of the other two forms of contribution using publicly available metrics, such as number of downloads or number purchased. This means that professors should not just write something, publish it and forget it. They should market their contributions and interact with their customers to ensure their intellectual contributions are impactful.
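To make the arithmetic of the proposal concrete, the three-factor formula above can be sketched in a few lines of code. The category names and the specific weight values below are purely illustrative assumptions (a mission that values all three categories equally, and scale weights anchored to a journal article = 1), not figures from the proposal itself:

```python
# Sketch of the proposed scoring formula:
# Value = mission weighting x scale weighting x quality/impact weighting
# All weights below are hypothetical examples, not prescribed values.

MISSION_WEIGHTS = {  # set by the university's mission statement
    "discipline": 1.0,   # discipline-based scholarship
    "practice": 1.0,     # contributions to practice
    "teaching": 1.0,     # teaching/pedagogy scholarship
}

SCALE_WEIGHTS = {  # relative size of the work; journal article = 1
    "book": 3.0,
    "journal_article": 1.0,
    "short_article": 0.5,
}

def contribution_value(category: str, scale: str, quality: float) -> float:
    """Return the value of one contribution under the proposed system."""
    return MISSION_WEIGHTS[category] * SCALE_WEIGHTS[scale] * quality

# A refereed journal article of average quality (quality weight 1):
print(contribution_value("discipline", "journal_article", 1.0))  # 1.0
# A widely adopted practitioner book (quality weight 2):
print(contribution_value("practice", "book", 2.0))  # 6.0
```

A research-focused school could express its mission simply by raising the discipline weight and lowering the others; the rest of the calculation is unchanged.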
The following table provides a representative sample of how the proposed system would be applied in a variety of contexts. The examples show a university business program whose mission equally values all three categories of intellectual contribution:
This flexible system provides a simple tool to reward professors for all three types of intellectual contributions required by mission-driven accreditation standards. It is easily adaptable to different faculties and could serve as a framework to be used, for example, by universities seeking to implement community-engaged scholarship. Certain research-based universities might prefer to continue to focus exclusively on counting discipline-based journal publications. However, I think many universities should take a broader perspective as proposed here.
Change is difficult. Changing how we hire and promote is even harder. We can continue to do things the way they have been done in the past, and continue to accept widespread criticism, or we can try to find a way to embrace new mission-driven standards, as well as change the overall mindset that counts publications as the only valid form of contribution. This proposal is one such way.
Dr. Gedeon is associate professor of entrepreneurship and strategy at the Ted Rogers School of Management, Ryerson University.
While I agree with some of the goals expressed by Dr. Gedeon, his chosen instrument (how to get there) has a few problems. For example, he asks: “Why should the opinion of two anonymous academics be more important than 100,000 practitioners or a review by the New York Times?” The answer to this SHOULD be because the two anonymous reviewers are experts in the field who can assess both the methodology and the conclusions. The fact that something is “liked” doesn’t make it rigorous.
There is a place for non-peer-reviewed publications, and I would agree that they should contribute to tenure and promotion decisions. But they should not REPLACE the requirement that a probationary faculty member produce something that can satisfy his or her peers.
I appreciate what Steven Gedeon is trying to accomplish here – the rebalancing of types of scholarly contributions that count in tenure and promotion considerations. However, I have two criticisms:
1. The concept of discipline-based scholarship requires reconsideration. In my own fields of scholarship (natural resources and the environment), we significantly value scholarship beyond the disciplines. Various names apply: multidisciplinarity, interdisciplinarity, cross-disciplinarity, transdisciplinarity. I’m ready to add antidisciplinarity and adisciplinarity. A reference to Ernest Boyer’s classification of scholarships would be helpful here – scholarships of discovery, application, and integration (besides the scholarship of teaching). The scholarship of discovery is the closest to disciplinary contribution – the others usually transcend disciplines.
2. There is an ongoing irritating premise that we must commensurate scholarly contributions in quantitative terms. I used to believe this in earnest, but my research odyssey over the last couple of decades has taught me the value of the nominal scale. There is absolutely no need to quantify and commensurate scholarly contributions to be able to discern whether a budding assistant professor is worthy of promotion or tenure. Sure, I count various kinds of contributions when I assess an application, but I do this so that I can get an impression, not a score. I hold firmly that a senior scholar can review an application and come up with a defensible impression of whether the applicant is worthy of promotion or tenure – scores are not required.
Let’s stop counting for the sake of scoring – let’s count important quantitative things so that we can get a balanced impression.
Dr. Gedeon seems to identify the primary problem here: “Administrators like to count numbers of publications because it is easy and unambiguous.”
What he goes on to suggest, however, is just another number, on the assumption that administrators need a basis for decisions in fields of which they are ignorant. If administrators can’t assess assistant professors on the merits of their publications and teaching, then they’re incompetent to make a tenure decision, and should defer to the candidate’s peers.
Trying to account for everything, as the AACSB does, just creates a monster of complexity. More importantly, it allows administrators to start perverting the intellectual careers of the untenured, demanding for instance that they “market their contributions and interact with their customers to ensure their intellectual contributions are impactful.”
I found this an interesting perspective, but I would quibble with some of the premises as well as some of the “solutions” offered.
“Publish or perish” implies that we publish only as a means for career progress. A much stronger motivation to publish is because most academics love doing research; that’s why we took that path. The number of us who continue our research during retirement is testimony to that. If publishing progresses us along the career path, so much the better.
And do you really believe that some/much/most published academic papers are “worthless or rarely read”? Who decides the “worth” and where are the stats to suggest that they are rarely read?
I dare agree with you that “some professors” don’t care about teaching. But “the notion” you refer to suggests that many/most of us don’t, and I couldn’t agree less! Those professors that don’t care about teaching don’t last very long as professors, at least not at my university, and that’s as it should be.
Your new proposal:
I think that there is a lot to recommend in what you suggest here. But you also say that administrators merely count publications. That’s very much an over-simplification, again at least in my experience. But it’s true that we need metrics to reward productivity that are meaningful as well as achievable. The “reputation” of the journal in which a paper is published is used as a proxy for gauging quality. Are there faults with that? Of course there are, but the only alternative (having every paper vetted by an expert in the field to determine quality) is beyond the realm of practicality.
I do like your suggestion about several categories of weightings, spread over “all three categories of intellectual contribution”. I suspect, however, that this won’t eliminate the perceived problems, because any metrics used will still have to be a proxy for an in-depth analysis of each academic category.

And I will argue strongly against one of your implied points: that a longer publication is better than a shorter one. It is true that it takes much more work to produce a typical book than a typical article, but within a category (book, article), the length is irrelevant. Classic example: Watson and Crick’s 1953 paper in Nature occupied only 2 published pages, but it reported, arguably, the greatest discovery in the biological sciences of the 20th century (the double helix nature of the DNA molecule). According to your system, Watson and Crick may not have received much credit for such a short paper!

Probably more to the point in general: I always taught my students that concise writing is more virtuous than the contrary. I would never assign a minimum page length for an essay. On the contrary, I set a strict upper limit, one that challenged my students to edit ferociously! We should not be judging scholarship even loosely on word count.
It is true that there are other types of scholarship than traditional publishing in academic journals: computing science, the plastic arts, music, of course, are the first to come to my mind. But of course these are acknowledged by those evaluating performance in those disciplines.
In brief (ha ha), I don’t see what is so particularly innovative about your new proposal, but of course I could well be missing the point.
The first three comments appeared on my screen, literally, the instant I posted my comment. They all said it better than I did (and using fewer words!).