A Wikipedia Collective Created a Guide to Identify AI-Generated Text. Now a Plugin Utilizes It to Make Chatbots Seem More Human.

On Saturday, tech entrepreneur Siqi Chen unveiled an open-source plug-in for Anthropic’s Claude Code AI assistant that advises the AI model to avoid writing in a typical AI manner.
Named Humanizer, this straightforward plug-in provides Claude with a list of 24 linguistic and formatting styles identified by Wikipedia editors as indicators of chatbot-generated content. Chen shared the plug-in on GitHub, where it has garnered over 1,600 stars as of Monday.
“It’s really convenient that Wikipedia compiled a thorough list of ‘indicators of AI writing,’” Chen stated on X. “So much so that you can instruct your LLM to … not do that.”
The source material is derived from a guide by WikiProject AI Cleanup, a collective of Wikipedia editors focused on identifying AI-generated articles since late 2023. The initiative was launched by French Wikipedia editor Ilyas Lebleu. Volunteers have flagged over 500 articles for further examination and released a comprehensive list of frequently observed patterns in August 2025.
Chen’s tool is a “skill file” for Claude Code, Anthropic’s terminal-based coding assistant. It consists of a Markdown-formatted file that appends a set of written instructions to the prompt fed to the large language model powering the assistant. Unlike a standard system prompt, a skill is structured in a standardized format that Claude models are specifically tuned to interpret. (Custom skills require a paid Claude subscription with code execution enabled.)
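For readers unfamiliar with the format: a Claude Code skill is a Markdown file with a short YAML frontmatter header, which the assistant loads when the skill seems relevant to the task. The sketch below illustrates the general shape based on Anthropic's published Agent Skills format; the file path, field values, and instruction text are illustrative examples, not copied from Chen's repository.

```markdown
---
name: humanizer
description: Avoid writing patterns that Wikipedia editors flag as AI-generated.
---

When writing prose, avoid the following tells:

- Do not inflate significance ("marking a pivotal moment",
  "stands as a testament to").
- Do not append "-ing" clauses for false analysis
  ("symbolizing the region's commitment to innovation").
- Prefer plain statements of fact over promotional framing.
```

The frontmatter `description` matters: it is what the model reads when deciding whether to invoke the skill at all, while the body supplies the actual instructions once it does.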
However, as with any prompt, language models may not strictly follow a skill file's instructions, which raises the question: does Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's outputs sound less formal and more relaxed, but it may have drawbacks: it won't improve factual accuracy, and it could hurt coding performance.
Specifically, some of Humanizer's guidelines may backfire depending on the task. For instance, one instruction reads: "Have opinions. Don't just report facts—react to them. 'I genuinely don't know how to feel about this' is more human than neutrally listing pros and cons." While sounding imperfect can come across as human, following that advice is unhelpful if you're using Claude to write technical documentation.
Whatever its limitations, there is some irony in the fact that one of the web's most widely cited rule sets for identifying AI-assisted writing is now helping some people evade that very detection.
Spotting the Patterns
So, what are the characteristics of AI writing? The Wikipedia guide is detailed with numerous examples, but we’ll provide just one for brevity.
Certain chatbots tend to embellish subjects with phrases like “marking a pivotal moment” or “stands as a testament to,” according to the guide. They write in a promotional style, describing breathtaking views and towns as “nestled within” picturesque areas. They often append “-ing” phrases to sentences to create an analytical tone: “symbolizing the region’s commitment to innovation.”
To circumvent these tendencies, the Humanizer skill instructs Claude to replace inflated language with straightforward facts, demonstrating this transformation:
Before: “The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain.”
After: “The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics.”
Claude will interpret this and attempt to generate an output that aligns with the context of the conversation or task.
Why AI Writing Detection Fails
Despite the confident list of rules compiled by Wikipedia editors, we've noted before why AI writing detectors tend to be unreliable: there is no characteristic of human writing that consistently distinguishes it from LLM output.
One reason is that although most AI language models tend to use specific types of language, they can also be instructed to avoid these patterns, as demonstrated by the Humanizer skill. (Although at times, avoiding such patterns can be quite challenging, as OpenAI experienced in its long battle with the em dash.)
