Saturday 25 May 2013

[pakgrid] Stopping the numbers game in academic research

 

This topic comes up for discussion quite often; here is a very good paper on it by David Parnas, a renowned researcher widely credited as a pioneer of software engineering.

http://dl.acm.org/citation.cfm?doid=1297797.1297815

Here are some excerpts from the paper:

/-------------------------------------

The widespread practice of counting publications without reading and judging them is fundamentally flawed for a number of reasons:

It encourages superficial research. Those who publish many hastily written, shallow (and often incorrect) papers will rank higher than those who invest years of careful work studying important problems; that is, counting measures quantity rather than quality or value;

It encourages overly large groups. Academics with large groups, who often spend little time with each student but put their name on all of their students' papers, will rank above those who work intensively with a few students;

It encourages repetition. Researchers who apply the "copy, paste, disguise" paradigm to publish the same ideas in many conferences and journals will score higher than those who write only when they have new ideas or results to report;

It encourages small, insignificant studies. Those who publish "empirical studies" based on brief observations of three or four students will rank higher than those who conduct long-term, carefully controlled experiments; and

It rewards publication of half-baked ideas. Researchers who describe languages and systems but do not actually build and use them will rank higher than those who implement and experiment.

Evaluation by counting the number of published papers corrupts our scientists; they learn to "play the game by the rules." Knowing that only the count matters, they use the following tactics:

Publishing pacts:
"I'll add your name to mine if you put mine on yours." This is highly effective when four to six researchers play as a team. On occasion, I have met "authors" who never read a paper they purportedly wrote;

Clique building:
Researchers form small groups that use special jargon to discuss a narrow topic that is just broad enough to support a conference series and a journal. They then publish papers "from the clique for the clique." Formation of these cliques is bad for scientific progress because it leads to poor communication and duplication, even while boosting the apparent productivity of clique members;

Anything goes:
Researchers publish things they know may be wrong, old, or irrelevant; they know that as long as the paper gets past some set of referees, it counts;

Bespoke research:
Researchers monitor conference and special-issue announcements and "custom tailor" papers (usually from "pre-cut" parts) to fit the call-for-papers;

Minimum publishable increment (MPI):
After completing a substantial study, many researchers divide the results to produce as many publishable papers as possible. Each one contains just enough new information to justify publication but may repeat the overall motivation and background. After all the MPIs are published, the authors can publish the original work as a "major review." Science would advance more quickly with just one publication; and

Organizing workshops and conferences:
Initiating specialized workshops and conferences creates a venue where the organizer's papers are almost certain to be published; the proceedings are often published later as a book with a "foreword" giving the organizer a total of three more publications: conference paper, book chapter, and foreword.

-------------------------------/

