The comprehensive article published in the journal Philosophy, Ethics, and Humanities in Medicine and written by Metin Akgün examines the profound and nuanced transformation created by generative artificial intelligence (AI) in the domain of scientific authorship. The central focus of the article is how the boundary between human contribution and machine assistance is becoming increasingly blurred, and how this challenges traditional definitions of scientific authorship, accountability, and intellectual property.
Traditional Authorship and the Challenge Posed by AI
Historically, the act of scientific authorship has represented more than merely writing text; it has served as an expression of intellectual ownership. Authorship has signified command of the ideas and acceptance of responsibility for how those ideas are expressed. This role has remained ethically and symbolically robust despite changes in how texts are produced — until now.
Today, however, researchers are increasingly using AI tools to generate drafts, restructure arguments, or refine language. As a result, although human names appear on manuscripts, much of the linguistic work — and in some cases the structuring of arguments — can emerge through algorithms. This raises the question of how to evaluate the author’s contribution and relationship to the text: what matters is not the ontological identity of the text itself but the nature and extent of the interventions mediated by AI.
The Ship of Theseus Metaphor and the Triple Focus
The article uses the classical thought experiment of the Ship of Theseus to explain this challenge in AI-assisted authorship. This paradox asks whether a ship remains the same if each of its planks is gradually replaced over time. When applied to authorship, the metaphor becomes illuminating in a scenario where the author’s own contributions are gradually replaced by AI-generated content.
However, rather than focusing on abstract questions such as how many “planks” (sentences) have been replaced, the article shifts the discussion to the axes of contribution, control, and attribution. In this context, philosophical discussions on the continuity of identity (Hobbes, Locke, Parfit) are brought together with intellectual property (IP) law. IP frameworks offer a more precise lens on AI-mediated writing because they evaluate authorship and originality in terms of contribution, transformation, and control (for example, non-trivial transformation and derivative authorship).
The central concern of the article is not whether AI should be credited as an author. The core question is how the human contribution, control, and accountability should be tracked and attributed when AI mediates linguistic or argumentative content. To address this, the article proposes a three-dimensional framework that reframes the problem around contribution and control:
Type of Replacement: Whether the changes consist only of superficial edits such as grammar and style, or whether they involve substantive changes such as significant rephrasing, summarization, and even restructuring of arguments.
Agent of Replacement: Whether the changes were produced by human collaborators or by AI systems.
Degree of Authorship Control: Who defined the prompts, who edited the outputs, who integrated the claims, and who accepts final responsibility for the text.
This triple framework grounds the Ship of Theseus analogy in concrete criteria for authorship and allows the discussion to move beyond abstract questions of identity.
Gaps in Ethical Frameworks and the Need for Transparency
Existing ethical frameworks used in scholarly publishing (ICMJE, COPE) address significant contribution and accountability but leave gaps in cases where work is mediated through AI.
ICMJE (International Committee of Medical Journal Editors): ICMJE ties authorship to making a substantial contribution, performing critical revision, granting final approval, and being accountable for all aspects of the work. However, these guidelines assume that the content is produced by a human and do not define whether sending prompts to an AI, editing its output, or integrating its material constitutes a contribution.
COPE (Committee on Publication Ethics): COPE prohibits listing AI tools as authors on the grounds that they cannot assume responsibility. However, COPE does not provide sufficient guidance on how to acknowledge the human labor involved in shaping AI-mediated content.
CRediT taxonomy: CRediT categorizes contributor roles such as supervision and writing, but it does not include descriptors for tasks such as prompt design, model oversight, or machine-guided synthesis.
Policies and Thresholds for Transparency and Accountability
To address these uncertainties, the article proposes that academic communities update their authorship and disclosure frameworks. Disclosure obligations should be tied to the three-dimensional framework described above, and graduated thresholds should be adopted:
De Minimis Level: Superficial grammar or style edits generally do not require disclosure.
Material Contribution Level: Situations involving summarization, significant rephrasing, argument restructuring, or text generation require disclosure.
Substantive Interpretive Input Level: When AI outputs substantially shape or introduce claims and interpretations, enhanced disclosure and explicit justification are mandatory.
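The graduated thresholds above amount to a simple decision rule: the disclosure obligation depends jointly on who produced a change and how deep the change goes. A minimal sketch of that rule follows; the category names and the mapping are illustrative assumptions, not a scheme specified by the article:

```python
# Illustrative sketch of the article's graduated disclosure thresholds.
# The category labels and return values below are assumptions chosen for
# illustration; the article does not prescribe a formal scheme.

SURFACE = {"grammar", "style"}                     # de minimis edits
MATERIAL = {"summarization", "rephrasing",
            "argument_restructuring", "text_generation"}

def disclosure_level(change_type: str, agent: str,
                     shapes_claims: bool = False) -> str:
    """Return the disclosure obligation for one change to a manuscript."""
    if agent != "ai":
        return "none"        # human edits fall outside these AI thresholds
    if shapes_claims:
        return "enhanced"    # substantive interpretive input level
    if change_type in MATERIAL:
        return "required"    # material contribution level
    if change_type in SURFACE:
        return "none"        # de minimis level
    return "required"        # depth unclear: disclose by default

print(disclosure_level("style", "ai"))                           # -> none
print(disclosure_level("summarization", "ai"))                   # -> required
print(disclosure_level("rephrasing", "ai", shapes_claims=True))  # -> enhanced
```

The point of the sketch is only that the thresholds are tractable: a journal checklist could walk authors through the same two or three questions per AI-mediated change.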
To ensure transparency, the article recommends that journals provide clear guidance and adopt measures for implementation. These may include:
“AI methods” note: A note that explains which parts of the text were AI-supported and under whose supervision this was done.
Expanded Contribution Statements: Contribution statements should be broadened to include AI-specific tasks such as prompt design, model oversight, and curation of AI output.
New Roles: Roles such as “prompt architect” or “AI integrator” can offer more precise language to describe these forms of contribution.
To link implementation with analysis, the article advises journals and scholarly communities to pilot CRediT extensions for AI roles and to align ICMJE’s concept of “substantial contribution” with the above material and substantive thresholds.
In conclusion, while generative AI offers benefits such as improving clarity, reducing language barriers, and accelerating the drafting process, its use must be transparent. The normative core of scientific authorship is accountability for thought. Just as the identity of a ship is questioned when its planks are gradually replaced, authorship must be reconsidered when human contributions are progressively substituted with AI-generated content. The analogy's conclusion is that even if some components are machine-produced, the human author remains “responsible for the voyage.”
Reference: Akgün, M. (2025). Author or prompter? Scientific writing, identity, and the Theseus paradox. Philosophy, Ethics, and Humanities in Medicine, 20(29). https://doi.org/10.1186/s13010-025-00195-x
