When Authorship Becomes Currency: A Critical Reading of the Authorship Misappropriation Diamond and Its Unfinished Agenda

A commentary on Oyenuga, Apata, Oladele, and Jeresa (2026), Ethics & Behavior, with reference to Ioannidis, Klavans, and Boyack (2018, Nature) on hyperprolific authorship.

A study that names what everyone already knows

Authorship misconduct is one of those problems that academics discuss in corridors but rarely in print. In their recent paper in Ethics & Behavior, Oyenuga, Apata, Oladele, and Jeresa break that silence. Drawing on semi-structured interviews with 18 lecturers across Nigerian universities, they document a phenomenon that participants themselves nickname “You Put My Name, I Put Your Name” — the open trade of authorship credit between colleagues, supervisors, and senior administrators.

The empirical contribution is a five-part typology of unethical authorship:

  • Transactional authorship — credit exchanged for payment of article processing charges (APCs) or direct cash.
  • Promotion-driven padding — last-minute name additions to clear publication quotas before promotion windows close.
  • Patronage / hierarchy authorship — senior figures, supervisors, or heads of department added by default.
  • Reciprocal authorship exchanges — informal “I add yours, you add mine” agreements among peers.
  • Unauthorised inclusion and covert claim — names inserted without consent, or mentors reassigning lead authorship to themselves.

The theoretical contribution is more ambitious. The authors extend Wolfe and Hermanson’s (2004) Fraud Diamond — pressure, opportunity, rationalization, capability — by adding a fifth element: normalization. The resulting Authorship Misappropriation Diamond (AMD) treats authorship misconduct not as isolated moral failure but as a self-reinforcing institutional cycle. Pressure creates urgency, opportunity lowers resistance, rationalization launders intent, capability hands the lever to those already in power, and normalization closes the loop by turning deviance into routine. Once “everyone does it,” the psychological cost of refusing collapses.

This is a useful framework. It moves the conversation beyond the familiar lament that academics need more training, and locates the problem where it actually sits: in the architecture of incentives, hierarchies, and tolerated practices that make misconduct rational.

Why this matters for health management research

Healthcare and health management departments are not bystanders here. They publish prolifically, depend heavily on graduate-student labour, operate under steep promotion thresholds, and increasingly pay APCs that rival a junior lecturer’s monthly salary. Every mechanism the AMD identifies — APC trading, supervisor authorship by default, last-minute promotion padding — maps directly onto the daily reality of health management publishing in Türkiye, India, much of Southeast Asia, the Gulf, and large parts of Latin America. The Nigerian case is not exotic. It is a clearer mirror of pressures that exist, in slightly more polite forms, almost everywhere.

For health policy and patient safety researchers in particular, the stakes are sharper than in most disciplines. When clinical guidelines, hospital management protocols, and quality-improvement frameworks rest on author lists that no longer signal genuine intellectual responsibility, the entire chain of accountability between research and clinical decision becomes unreliable. A guideline cited in a hospital quality manual is only as trustworthy as the authorship behind it.

The hyperprolific connection: what the AMD does not yet see

The AMD describes the local mechanics of misappropriation beautifully. What it does not yet capture is how those mechanics scale at the upper tail of the distribution.

Ioannidis, Klavans, and Boyack’s 2018 Nature analysis identified more than 9,000 authors worldwide who, in at least one calendar year between 2000 and 2016, published more than 72 full papers — the equivalent of one paper every five days, weekends included. After excluding particle physics (where 1,000-author consortia distort the picture) and disambiguation-prone Chinese and Korean records, 265 “hyperprolific” authors remained, concentrated in medicine, cardiology, crystallography, and epidemiology. Most admitted in follow-up surveys that, for many of their papers, they did not meet traditional Vancouver / ICMJE authorship criteria. Ioannidis and colleagues later refined the framework (2024) by introducing “almost hyperprolific” authors (61–72 papers per year) and the broader umbrella of “extremely productive” authors. Subsequent work (Ioannidis, Collins, & Baas, 2024; bioRxiv extension to 2022) shows the phenomenon spreading rapidly outside physics, with the largest fold-increases between 2016 and 2022 in Thailand, Saudi Arabia, Spain, India, Italy, Russia, Pakistan, and South Korea.
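The thresholds themselves are simple arithmetic, and easy to sketch. A minimal Python illustration (the function names and the "band" labels are mine; only the >72 and 61–72 cutoffs come from the papers cited above):

```python
def days_per_paper(papers_per_year: int, days_in_year: int = 365) -> float:
    """Average number of days between successive papers at a given annual rate."""
    return days_in_year / papers_per_year

def productivity_band(papers_per_year: int) -> str:
    """Classify an annual full-paper count against the cutoffs reported by
    Ioannidis and colleagues: >72 hyperprolific, 61-72 almost hyperprolific."""
    if papers_per_year > 72:
        return "hyperprolific"
    if papers_per_year >= 61:
        return "almost hyperprolific"
    return "below threshold"

# 73 papers in a calendar year means a new paper every five days, weekends included
print(days_per_paper(73))      # 5.0
print(productivity_band(73))   # hyperprolific
```

The point of the toy classification is only to make the scale of the claim concrete: the cutoff sits at a paper roughly every five days, sustained for a full year.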

Read alongside Oyenuga and colleagues, the picture sharpens considerably. The AMD’s five practices — especially transactional authorship, patronage authorship, and reciprocal exchange — are precisely the mechanisms by which a single individual can plausibly accumulate 60, 70, 80, or more “authored” papers a year without violating any single rule visibly enough to trigger sanction. The Nigerian interviews describe the micro-physics of the problem: who asks whom, in what tone, citing which favour. The Ioannidis data describe the macro-output of those same mechanics aggregated across thousands of careers, several countries, and two decades. The two literatures explain each other, yet neither engages the other’s level of analysis. That is the most obvious gap in Oyenuga and colleagues’ framework: the AMD is a theory of how individuals fall into misconduct, but it has not yet been linked to the bibliometric signature that mass misconduct leaves in databases such as Scopus and Web of Science.

This omission has a consequence. If the AMD is correct that normalization closes the loop, then the hyperprolific tail is not an anomaly to be excluded as a special case — it is the predictable equilibrium that the AMD itself forecasts. A model that does not account for its own most extreme observable outcome is incomplete.

What the paper does not examine: an agenda for future research

Oyenuga and colleagues acknowledge limitations of sample and geography, but several substantive blind spots deserve a more explicit research agenda.

First, the bibliometric verification problem. AMD is built entirely from interview self-report. No author list, citation network, or co-authorship graph is examined. A natural next step is to triangulate the typology against Scopus or Web of Science records for the same institutions: are reciprocal exchange dyads visible as repeated co-authorship pairs with low intellectual diversity? Does promotion-driven padding cluster temporally around known promotion windows? Hyperprolific and almost-hyperprolific authors in health-related fields are the obvious test cases. A combined qualitative–bibliometric design would convert AMD from a self-report model into a falsifiable one.
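The dyad check proposed above can be operationalised as a simple pair count over co-authorship records. A sketch, assuming author lists have already been extracted and disambiguated from a database like Scopus (the toy records and the `min_shared` cutoff are illustrative assumptions, not taken from either paper):

```python
from collections import Counter
from itertools import combinations

def repeated_dyads(papers, min_shared=3):
    """Count joint appearances of each author pair across a set of papers and
    return the pairs co-appearing at least min_shared times - candidate
    reciprocal-exchange dyads worth manual inspection, not proof of misconduct."""
    pair_counts = Counter()
    for authors in papers:
        # sorted(set(...)) makes each pair order-independent and deduplicated
        for pair in combinations(sorted(set(authors)), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_shared}

# Toy records: four author lists from a hypothetical department
records = [
    ["Adeyemi", "Bello"],
    ["Adeyemi", "Bello", "Chukwu"],
    ["Adeyemi", "Bello"],
    ["Chukwu", "Danladi"],
]
print(repeated_dyads(records))  # {('Adeyemi', 'Bello'): 3}
```

A real implementation would add the qualitative filters the text implies: restricting to pairs with low topical diversity across their joint papers, and aligning the timestamps against promotion windows.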

Second, the journal and publisher side is missing. Every form of misconduct described in the paper requires a journal willing to accept the paper, an editor willing not to ask, and a peer-review process that does not check contribution statements. The AMD treats journals as a passive backdrop, but Scanff and colleagues (2021) and Pinto and colleagues (2023) have shown that publication concentration within single journals is itself a signal of editorial favouritism and authorship inflation. A serious extension of AMD must add a sixth construct or at least a structural moderator: journal permissiveness, operationalised through Gini coefficients of author concentration, editor-author overlap, and APC structure. Predatory and semi-predatory journals are not symmetric to legitimate ones in this respect, and the model should say so.
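The Gini operationalisation mentioned above is straightforward to sketch over per-author paper counts within a single journal. A minimal version (the function and the toy counts are my illustration; neither paper specifies this exact formula):

```python
def gini(paper_counts):
    """Gini coefficient of papers per author within one journal:
    0.0 means output is spread evenly across authors; values approaching 1.0
    mean a handful of authors dominate the journal - one candidate signal of
    editorial favouritism or authorship inflation, not proof on its own."""
    xs = sorted(paper_counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # standard closed form over sorted values
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([3, 3, 3, 3]))    # 0.0 - output spread evenly across four authors
print(gini([0, 0, 1, 11]))   # close to 1 - one author dominates the journal
```

Computed per journal and tracked over time, a rising coefficient would flag exactly the permissive venues the proposed sixth construct is meant to capture.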

Third, the cross-national comparison is absent. The AMD is presented as Nigerian but framed as universal. It needs to be tested against settings where the cash-for-publication incentive is institutionalised at the national level — China’s bonus system, several Gulf countries’ cited-paper rewards, and the Turkish teşvik (academic incentive payment) regime — and against settings where it is not. The hyperprolific literature shows that the geography of extreme productivity has shifted dramatically since 2016, with the steepest increases in Thailand, Saudi Arabia, Spain, India, Italy, Russia, Pakistan, and South Korea (Ioannidis et al., 2024; bioRxiv preprint, 2023). The AMD framework should be applied comparatively to these settings to identify which constructs are universal and which are context-specific.

Fourth, gender and early-career dynamics are under-theorised. The interviews include only one Reader and zero junior researchers below Lecturer I; they include no breakdown of how women, contract academics, and PhD students experience coercive authorship differently. Existing literature (Maggio et al., 2019; Douglas et al., 2024) has shown clear gender asymmetries in authorship climate. AMD as currently specified treats “capability” as a single construct, but it likely operates differently for senior women, junior men, and contract staff. A stratified replication is needed.

Fifth, the relationship between AMD and other forms of research misconduct is unexamined. Authorship misappropriation rarely travels alone. It coexists with citation padding, salami slicing, paper mills, image manipulation, and tortured phrases. The AMD’s normalization mechanism predicts that all of these forms should rise together in environments where any one is tolerated. This is testable against retraction databases such as Retraction Watch and via paper-mill detection tools. The AMD’s strongest theoretical claim — that normalization is the master variable — has not yet been pressure-tested against this co-occurrence pattern.

Sixth, AI-assisted authorship is the missing 2026 variable. The interviews appear to have been conducted before generative-AI co-writing became standard. AMD constructs interact in non-trivial ways with large language model use: opportunity expands (drafts are cheaper), capability redistributes (junior researchers can produce text without senior input), and rationalization shifts (if the AI wrote it, who is the “real” author anyway?). Any future quantitative test of AMD must include AI-use items in the measurement model.

Seventh, the intervention question is open. The paper proposes contribution-based promotion, ethics training, fair APC access, and dispute-resolution frameworks. None of these have been evaluated experimentally. A natural next step is a multi-institution quasi-experimental design — for example, comparing institutions that adopt CRediT taxonomy mandates against matched controls — using AMD constructs as outcome variables. Without this, the policy section of the paper remains a wish list.

Eighth, the patient and clinical-outcome end is invisible. For health management and patient safety research specifically, authorship misconduct is not only an ethics problem but a clinical-evidence problem. If guidelines, systematic reviews, and quality-improvement frameworks rest on padded author lists, who is accountable when implementation fails? The AMD does not yet connect authorship misconduct to downstream evidence quality. This is the most important question for healthcare audiences and the one most worth testing.

A model worth using, and worth pushing further

The AMD is the most useful theoretical advance on authorship ethics in several years, and the addition of normalization is genuinely novel. It deserves to be cited, taught, and adapted in health management curricula across the regions where the same dynamics operate quietly.

But the model is, by the authors’ own admission, an opening move. Read against the Ioannidis hyperprolific literature, it explains the local mechanics of a phenomenon whose macro-signature is already visible in global bibliometric data. Read against the policy debate, it offers diagnoses without yet offering tested interventions. Read against the present-day reality of AI-assisted writing, paper mills, and shifting national incentive systems, it captures a 2024-vintage problem in a world that has already moved on.

The next paper in this line will not just describe the diamond. It will measure it, falsify it where it fails, and connect the lecture-theatre transactions of the Nigerian case to the Scopus tail of authors who publish a paper every five days. Until then, Oyenuga and colleagues have given the field a vocabulary it badly needed and a framework that any serious researcher of academic integrity should now reckon with.

References

Ioannidis, J. P. A., Klavans, R., & Boyack, K. W. (2018). Thousands of scientists publish a paper every five days. Nature, 561(7722), 167–169. https://doi.org/10.1038/d41586-018-06185-8

Ioannidis, J. P. A., Collins, T. A., & Baas, J. (2024). Evolving patterns of extremely productive publishing behavior across science. Royal Society Open Science (and bioRxiv preprint, 2023). https://doi.org/10.1101/2023.11.23.568476

Oyenuga, M. O., Apata, S. B., Oladele, T. O., & Jeresa, S. (2026). You put my name, I put your name: Exploring unethical practices in academic publishing using the Authorship Misappropriation Diamond framework. Ethics & Behavior. https://doi.org/10.1080/10508422.2026.2661701

Wolfe, D. T., & Hermanson, D. R. (2004). The Fraud Diamond: Considering the four elements of fraud. The CPA Journal, 74(12), 38–42.
