The operating logic of AI detectors relies largely not on content accuracy but on stylometric patterns. In other words, algorithms classify a text as human- or AI-written by measuring surface indicators such as pronoun density, conjunction use, rhetorical structure, discourse flow, and linguistic variance. For this reason, the conscious and balanced use of ten linguistic features considered semi-informal in academic writing can make an AI-generated text more likely to be perceived as human-written. Below, each feature is presented with a brief explanation and an example.
- First-Person Pronouns AI texts tend to keep pronoun use limited and purpose-driven. In human writing, pronouns make the thinking process visible. Example: Instead of “In this paper, we propose a model,” → “We tested the model under three scenarios and we were surprised by the variance.”
- Sentence-Initial Conjunctions AI is overly dependent on the “however” pattern. Variety makes the text appear more natural. Example: Instead of “However, the results were inconsistent,” → “But the results did not align with the initial hypothesis.”
- Second-Person Pronouns Directly addressing the reader is rare in AI but establishes interaction in human writing. Example: “This interface allows you to visualize patient flows in real time.”
- Listing Expressions Open-ended listing loosens discourse and softens rigid academic tone. Example: “Digital tools such as telemedicine platforms, e-prescription systems, remote monitoring devices, etc., reshape care delivery.”
- Sentence-Final Prepositions AI usually front-loads prepositions; human writing may leave them at the end. Example: “This is a limitation we must carefully work with.”
- Split Infinitives AI repeats certain adverbs; diversification appears more natural. Example: “The framework aims to fully integrate clinical and administrative data.”
- Unattended Demonstrative Pronouns AI prefers noun phrases like “this study,” “this result.” Stand-alone use is closer to human writing. Example: “The error rate dropped by 18%. This suggests a structural improvement.”
- Contractions Limited but strategic use in academic texts adds naturalness. Example: “It’s difficult to generalize these findings across rural hospitals.”
- Direct Questions Rhetorical questions are rare in AI but sharpen argumentation in human writing. Example: “Can this model sustain performance under resource constraints?”
- Exclamations Used sparingly, they create emphasis. Example: “A 40% reduction in adverse events—remarkable!”
From a didactic perspective, these ten features serve two functions. First, they humanize the text discursively. Second, they lower the formalization score measured by detectors. This is because algorithms process indicators such as low pronoun ratios, high conjunction standardization, absence of rhetorical questions, and excessively formal tone as signals in favor of AI authorship. When these indicators are diversified, the text statistically shifts toward the “human variance” band.
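The surface indicators described above can be made concrete with a small sketch. The following Python snippet computes a few of the informality signals the text mentions (first-person pronoun ratio, sentence-initial conjunctions, contractions, direct questions, exclamations). The word lists, function name, and feature set are illustrative assumptions, not taken from the cited study or from any real detector, which would use far richer features and trained models.

```python
import re

# Illustrative word lists (assumptions, not from the cited study).
FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}
CONJUNCTIONS = {"and", "but", "or", "so", "yet"}

def informality_indicators(text: str) -> dict:
    """Count simple surface signals of the kind stylometric detectors measure."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    first_words = [s.split()[0].lower().strip(",") for s in sentences if s.split()]
    return {
        # Share of tokens that are first-person pronouns.
        "first_person_ratio": sum(w in FIRST_PERSON for w in words) / max(len(words), 1),
        # Sentences that open with a coordinating conjunction ("But ...").
        "sentence_initial_conjunctions": sum(w in CONJUNCTIONS for w in first_words),
        # Contracted forms such as "it's", "isn't", "we've".
        "contractions": len(re.findall(r"\b\w+'(?:t|s|re|ve|ll|d)\b", text.lower())),
        "direct_questions": text.count("?"),
        "exclamations": text.count("!"),
    }

sample = "But the results did not align. We tested it again. It's puzzling, isn't it?"
print(informality_indicators(sample))
```

On this sample, the sketch registers one sentence-initial conjunction, two contractions, and one direct question: precisely the diversified indicators the section argues push a text toward the “human variance” band.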
In conclusion, even when the content remains the same, altering the architecture of style significantly affects the confidence scores of detection systems. This demonstrates that current detectors measure surface regularities of language rather than semantic intent or intellectual originality. In other words, what is detected is not the epistemic value of the text but its stylometric trace. And this trace is a manageable variable.
Reference: Zhao, N., & Lei, L. (2026). Informality features in AI-generated academic writing: A corpus-based comparison between human and AI. Journal of English for Academic Purposes, 79, 101629. https://doi.org/10.1016/j.jeap.2026.101629
