ABSTRACT: Quantifying the relative performance of individual scholars, groups of scholars, departments, institutions, provinces/states/regions and countries has become an integral part of decision-making over research policy, funding allocations, awarding of grants, faculty hirings, and claims for promotion and tenure. Bibliometric indices (based mainly upon citation counts), such as the h-index and the journal impact factor (JIF), are heavily relied upon in such assessments. There is a growing consensus, and a deep concern, that these indices (more and more often used as a replacement for the informed judgement of peers) are misunderstood and are, therefore, often misinterpreted and misused. Although much has been written about the JIF, some combination of its biases and limitations will be true of any citation-based metric. While it is not our contention that bibliometric indices have no value, they should not be applied as performance metrics without a thorough and insightful understanding of their (few?) strengths and (many?) weaknesses. We hope that the articles in this Theme Section will motivate readers to more seriously study the nature of the metrics that are being used to assess them and to consider what their overuse, and misuse, means to us and to future generations of scholars.
KEY WORDS: Citation counts · Research assessment · Bibliometrics · Journal impact factor · h-index · Inherent value of scholarship
Cite this article as: Browman HI, Stergiou KI (2008) Factors and indices are one thing, deciding who is scholarly, why they are scholarly, and the relative value of their scholarship is something else entirely. Ethics Sci Environ Polit 8:1-3. https://doi.org/10.3354/esep00089