Wikipedia: WikiProject AI Cleanup

(en.wikipedia.org)

179 points | by thinkingemote 6 hours ago

18 comments

  • Antibabelic 5 hours ago
    I found the page Wikipedia:Signs of AI Writing[1] very interesting and informative. It goes into a lot more detail than the typical "em-dashes" heuristic.

    [1]: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

    • jcattle 5 hours ago
      An interesting observation from that page:

      "Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated."

      • embedding-shape 4 hours ago
        I think that's a general guideline to identify "propaganda", regardless of the source. I've seen people in person write such statements with their own hands/fingers, and I know many people who speak like that (shockingly, most of them are in management).

        Lots of those points seem to get at the same idea, which seems like a good balance: it's the language itself that is problematic, not how the text came to be, so it makes sense to target the language directly.

        Hopefully those guidelines make all text on Wikipedia better, not just the LLM-produced text, because they seem like generally good guidelines even outside the context of LLMs.

        • Antibabelic 4 hours ago
          Wikipedia already has very detailed guidelines on how text on Wikipedia should look, which address many of these problems.[1] For example, take a look at its advice on "puffery"[2]:

          "Peacock example:

          Bob Dylan is the defining figure of the 1960s counterculture and a brilliant songwriter.

          Just the facts:

          Dylan was included in Time's 100: The Most Important People of the Century, in which he was called "master poet, caustic social critic and intrepid, guiding spirit of the counterculture generation". By the mid-1970s, his songs had been covered by hundreds of other artists."

          [1]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style

          [2]: https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Word...

          • embedding-shape 4 hours ago
            Right, but unless you have a specific page about "This is how to treat AI texts", people will (if they haven't already) bombard you with "This text is so obviously AI-written, do something", and by having a specific page to answer those, you can just link it instead of the general "Here's how text on Wikipedia should be" guidelines. Being more specific sometimes helps people understand better :)
      • bspammer 4 hours ago
        That sounds like Flanderization to me https://en.wikipedia.org/wiki/Flanderization

        From my experience with LLMs that's a great observation.

      • Amorymeltzer 3 hours ago
        I particularly like (what I assume is) the subtle paean to Ted Chiang's "Blurry Jpeg of the Web" in there.

        <https://www.newyorker.com/tech/annals-of-technology/chatgpt-...>

      • robertjwebb 4 hours ago
        The funny thing about this is that this also appears in bad human writing. We would be better off if vague statements like this were eliminated altogether, or replaced with less fantastical but verifiable statements. If this means that nothing of the article is left then we have killed two birds with one stone.
        • nottorp 3 hours ago
          What do you think the LLMs were trained on? 90% of everything is crap, and they trained on everything.
      • mrweasel 3 hours ago
        To me that seems like we're mistaken in mixing fiction and non-fiction in AI training data. The "a revolutionary titan of industry" makes sense if you were reading a novel, where something like 90% of a book is describing the people, locations, objects and circumstances. The author of a novel would want to use exaggeration and more colourful words to underscore a uniquely important person, but "this week in trains" would probably de-emphasize the person and focus on the train-coupler.
        • lacunary 1 hour ago
          fiction is part of our shared language and culture. we communicate by making analogies, and our stories, especially our old ones, provide a rich basis to draw upon. neither a person nor an llm can be a fluent user of human language without spending time learning from both fiction and non-fiction.
      • eurekin 5 hours ago
        That's actually putting into words what I couldn't, but felt similarly. Spectacular quote.
        • jcattle 4 hours ago
          I'm thinking quite a bit about this at the moment in the context of foundational models and their inherent (?) regression to the mean.

          Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).

          These take in vast amounts of satellite data and, with the usual autoencoder architecture, try to build embedding spaces which contain meaningful semantic features.
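
          For anyone unfamiliar, the core idea is roughly this (a toy PyTorch sketch with made-up band counts and patch sizes, nothing like the actual AlphaEarth/Terramind/Clay architectures): compress a patch into a small embedding and train by reconstructing the input.

            import torch
            import torch.nn as nn

            # Toy sketch only: encode a multispectral patch into an embedding,
            # then train the whole thing with a reconstruction loss.
            class TinyGeoAutoencoder(nn.Module):
                def __init__(self, bands=12, embed_dim=128):
                    super().__init__()
                    self.encoder = nn.Sequential(
                        nn.Conv2d(bands, 32, 3, stride=2, padding=1),   # 64 -> 32
                        nn.ReLU(),
                        nn.Conv2d(32, 64, 3, stride=2, padding=1),      # 32 -> 16
                        nn.ReLU(),
                        nn.Flatten(),
                        nn.Linear(64 * 16 * 16, embed_dim),             # the "embedding space"
                    )
                    self.decoder = nn.Sequential(
                        nn.Linear(embed_dim, 64 * 16 * 16),
                        nn.Unflatten(1, (64, 16, 16)),
                        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
                        nn.ReLU(),
                        nn.ConvTranspose2d(32, bands, 4, stride=2, padding=1),
                    )

                def forward(self, x):
                    z = self.encoder(x)              # embedding reused for downstream tasks
                    return self.decoder(z), z

            model = TinyGeoAutoencoder()
            patches = torch.randn(8, 12, 64, 64)     # fake batch of 12-band 64x64 patches
            recon, embeddings = model(patches)
            loss = nn.functional.mse_loss(recon, patches)  # reconstruction objective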

          The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.

          There's also an observation by one of the authors of the Major-TOM model, which also provides satellite input data to train models, that the scaling laws do not seem to hold for geospatial foundation models, in that more data does not seem to result in better models.

          My (completely unsupported) theory on why that is: unlike writing or coding, in satellite data you are often looking for the needle in the haystack. You do not want what has been done thousands of times before and was proven to work. Segmenting out forests and water? Sure, easy. These models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, landslides, destroyed buildings, new airstrips in the Amazon, etc. As I see it, the currently used frameworks do not support that very well.

          But I'd be curious how others see this, who might be more knowledgeable in the area.

      • andrepd 4 hours ago
        Outstanding. Praise Wikipedia; despite any shortcomings, wow, isn't it such a breath of fresh air in the world of 2026?
    • smusamashah 4 hours ago
      This is so detailed, and everyone who is sick of reading generated text should read it.

      I had a bad experience at a shitty airport, went to Google Maps to leave a bad review, and found that its rating was 4.7 from many thousands of people. Knowing that the airport is run by a corrupt government, I started reading those super positive reviews and the other, older reviews by the same accounts. People who could barely manage a few coherent sentences of English are now writing multiple paragraphs about the history and vital importance of that airport in the region.

      Reading the first section, "Undue emphasis on significance", those fake reviews are all I can think of.

    • eddyg 2 hours ago
      It’s also very useful as a writing-skills resource, to help avoid these kinds of issues.

      https://github.com/blader/humanizer

    • harrisoned 3 hours ago
      This is very good, but I'm surprised the term "game-changer" is not mentioned there. From my observations this is used a lot in LLM texts.
      • tasuki 2 hours ago
        Great point! That would be a game-changer!
        • danielbln 1 hour ago
          "This is the smoking gun!"

          _sigh_ Is it though, Claude, is it really?

    • paradite 4 hours ago
      Ironically this is a goldmine for AI labs and AI writer startups to do RL and fine-tuning.
      • zipy124 2 hours ago
        That's not quite how that works though. It could, for example, be that fine-tuning a model to avoid the styles described in the article causes the LLM to stop functioning as well as it otherwise would. It might just be an artefact of the architecture itself that, to be effective, it has to follow these rules. If it were as easy as just providing data and having the LLM 'encode' that as a rule, we would be advancing much quicker than we currently are.
      • kingstnap 3 hours ago
        Seems more like the kind of thing you would use to make prompts.

        I can totally see someone taking that page and throwing it into whatever bot and going "Make up a comprehensive style guide that does the opposite of whatever is mentioned here".

      • einrealist 4 hours ago
        In the case of those big 'foundation models': Fine-tune for whom and how? I doubt it is possible to fine-tune things like this in a way that satisfies all audiences and training set instances. Much of this is probably due to the training set itself containing a lot of propaganda (advertising) or just bad style.
        • paradite 4 hours ago
          I'm pretty sure Mistral is doing fine tuning for their enterprise clients. OpenAI and Anthropic are probably not?

          I'm more thinking about startups for fine-tuning.

  • vintermann 4 hours ago
    There was a paper recently about using LLMs to find contradictions in Wikipedia, i.e. claims on the same page or between pages which appear to be mutually incompatible.

    https://arxiv.org/abs/2509.23233

    I wonder if something more came out of that.

    Either way, I think that generation of article text is the least useful and interesting way to use AI on Wikipedia. It's much better to do things like this paper did.

    • JimDabell 3 hours ago
      That’s super interesting. I had a similar idea about 18 months ago.

      I think the biggest opportunity is building a knowledge graph based on Wikipedia and then checking against the graph when new edits are made. Detect any new assertions in the edit, check for conflicts against the graph, and bring up a warning along with a link to all the pages on Wikipedia that the new edit is contradicting. If the new edit is bad, it shows the editor why with citations, and if the new edit is correcting something that Wikipedia currently gets incorrect, then it shows all the other places that also need to be corrected.

      https://www.reddit.com/r/LocalLLaMA/comments/1eqohpm/if_some...
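
      A rough sketch of the check I have in mind (all the names here, like KnowledgeGraph and Assertion, are made up for illustration; extracting assertions from an edit would be the LLM's job):

        from __future__ import annotations
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Assertion:
            subject: str
            predicate: str
            value: str
            source_page: str

        class KnowledgeGraph:
            def __init__(self):
                self.facts = {}  # (subject, predicate) -> Assertion

            def add(self, a: Assertion) -> None:
                self.facts[(a.subject, a.predicate)] = a

            def conflict_for(self, a: Assertion) -> Assertion | None:
                existing = self.facts.get((a.subject, a.predicate))
                if existing and existing.value != a.value:
                    return existing  # same subject/predicate, different value
                return None

        def review_edit(graph: KnowledgeGraph, new_assertions: list[Assertion]) -> list[str]:
            """Warn about every existing page that the new edit contradicts."""
            warnings = []
            for a in new_assertions:
                clash = graph.conflict_for(a)
                if clash:
                    warnings.append(
                        f"'{a.subject} {a.predicate} {a.value}' contradicts "
                        f"'{clash.value}' asserted on the page '{clash.source_page}'"
                    )
            return warnings

        # Example: the graph says Dylan was born in 1941; a new edit claims 1939.
        graph = KnowledgeGraph()
        graph.add(Assertion("Bob Dylan", "born_in", "1941", "Bob Dylan"))
        print(review_edit(graph, [Assertion("Bob Dylan", "born_in", "1939", "Duluth")]))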

    • Tiberium 3 hours ago
      You can easily do this with normal GPT 5.2 in ChatGPT: just turn on thinking (better if extended) and web search, point the model at a Wikipedia page and tell it to check the claims for errors. I've tried it before and surprisingly it finds errors very often, sometimes small, sometimes medium. The less popular the linked page is, the more likely it is to have errors.

      This works because GPT 5.x actually uses web search properly.

      • sgc 35 minutes ago
        I am sure that could be useful with proper post-request research.

        As a technique though, never ask an LLM just to find errors. Ask it to either find errors or verify that there are none. That way it can more easily answer without hallucinating.
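
        Something like this, i.e. give it a graceful way out (the wording is just illustrative):

          article_text = "..."  # the excerpt you want checked

          # Invites the model to invent errors just to satisfy the request:
          biased_prompt = (
              "Find the factual errors in the following text:\n\n" + article_text
          )

          # Gives the model an explicit 'no errors found' escape hatch:
          neutral_prompt = (
              "Check the following text against its cited sources. Either list "
              "the factual errors you find, or state clearly that you found "
              "none:\n\n" + article_text
          )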

        • 1718627440 32 minutes ago
          > As a technique though, never ask an LLM to find errors.

          What I do is ask it both to explain why there are no errors at all and why there are tons of errors. Then I use my natural intelligence to reason about the different claims.

      • nottorp 3 hours ago
        Have you verified those errors?
      • multjoy 2 hours ago
        It says it finds errors.
  • tonymet 4 minutes ago
    Although Wikipedia has no firm rules (WP:PILLARS), the admins reference the policies (that aren't rules) when reverting content and banning. So here's what I gathered:

    * No new articles from LLM content (WP:NEWLLM)

    * "Most images wholly generated by AI should not be used" (WP:AILLM)

    * “it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs" (WP:AITALK)

    There doesn't seem to be an outright ban on LLM content as long as it's high quality.

    Just an amateur summary for those less familiar with Wikipedia policy. I encourage people to open an account, edit some pages and engage in the community. It’s the single most influential piece of media that’s syndicated into billions of views daily, often without attribution.

  • ChrisMarshallNY 1 hour ago
    > Unfortunately, these models virtually always fail to properly source claims and often introduce errors.

    A quote for the times.

    May be a bit of a sisyphean task, though...

  • maxbaines 6 hours ago
    This is hardly surprising given "New partnerships with tech companies support Wikipedia's sustainability", which relies on human content.

    https://wikimediafoundation.org/news/2026/01/15/wikipedia-ce...

    • jraph 5 hours ago
      I agree with the dig, although it's worth mentioning that this AI Cleanup page's first version was written on the 4th of December 2023.
  • crtasm 3 hours ago
    I enjoyed the recent talk looking at the reasons people add generated content: https://media.ccc.de/v/39c3-ai-generated-content-in-wikipedi...
  • dfajgljsldkjag 1 hour ago
    It is really good that they are taking steps to remove this stuff. You can usually tell right away when something was not written by a human.
  • bluebarbet 1 hour ago
    Contrarian take: Wikipedia could use more AI, as well as less.

    A major flaw of Wikipedia is that much of it is simply poorly written. Repetition and redundancy, ambiguity, illogical ordering of content, rambling sentences, opaque grammar. That should not be surprising. Writing clear prose is a skill that most people do not have, and Wikipedia articles are generally the fruit of collaboration without copy editors.

    AI is perfectly suited to fixing this problem. I recently spent several hours rewriting a somewhat important article. I did not add or subtract information; I simply made it clearer and more concise. I came away convinced that AI could have done as good a job, with supervision, of course, in a fraction of the time. AI-assisted copy-editing is not against Wikipedia rules. Yet as things stand, there are no built-in tools to facilitate it, doubtless because of the ambient suspicion of AI as a technology. We need to take a smarter approach.

  • progbits 5 hours ago
    The Sanderson wiki [1] has a time-travel feature where you read a snapshot from just before the publication of a book, ensuring no spoilers.

    I would like a similar pre-LLM Wikipedia snapshot. Sometimes I would prefer potentially stale or incomplete info rather than have to wade through slop.

    1: https://coppermind.net/wiki/Coppermind:Welcome

    • csande17 4 hours ago
      The easiest way to get this is probably Kiwix. You can download a ~100GB file containing all of English Wikipedia as of a particular date, then browse it locally offline.

      I'm not sure if it's real or not, but the Internet Archive has a listing claiming to be the dump from May 2022: https://archive.org/details/wikipedia_en_all_maxi_2022-05

      • JKCalhoun 48 minutes ago
        There's a torrent at the linked URL. Trying that right now. (I have a couple of Kiwix dumps of Wikipedia offline already.)
      • embedding-shape 4 hours ago
        Alternatively, get them straight from Wikimedia; those are the dumps I'm using. They're trivial to parse concurrently and an easy format to parse too: multistream XML in bz2. The latest dump (text only) is from 2026-01-01 and weighs 24.1 GB. https://dumps.wikimedia.org/enwiki/20260101/ They also have splits together with indexes, so you can grab just the few sections you want if 24 GB is too large.
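
        For reference, this is roughly how the multistream format is used (a sketch; it assumes the usual Wikimedia dump file names for the 20260101 dump, and each bz2 stream holds a block of roughly 100 pages):

          import bz2

          # Assumed file names following the usual dump naming convention.
          DUMP = "enwiki-20260101-pages-articles-multistream.xml.bz2"
          INDEX = "enwiki-20260101-pages-articles-multistream-index.txt.bz2"

          def find_offset(title):
              # Index lines look like "byte_offset:page_id:title".
              with bz2.open(INDEX, "rt", encoding="utf-8") as f:
                  for line in f:
                      offset, _page_id, page_title = line.rstrip("\n").split(":", 2)
                      if page_title == title:
                          return int(offset)
              raise KeyError(title)

          def read_stream(offset, max_bytes=16 * 1024 * 1024):
              # Each offset starts an independent bz2 stream of ~100 <page> blocks,
              # which is what makes concurrent parsing trivial: hand different
              # offsets to different workers. The decompressor stops at the end
              # of the first stream; max_bytes is just a generous upper bound.
              with open(DUMP, "rb") as f:
                  f.seek(offset)
                  return bz2.BZ2Decompressor().decompress(f.read(max_bytes))

          xml_chunk = read_stream(find_offset("Bob Dylan")).decode("utf-8")
          print(xml_chunk[:500])  # raw <page>...</page> XML for that block
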
    • Antibabelic 5 hours ago
      But you can already view the past version of any page on Wikipedia. Go to the page you want to read, click "View history" and select any revision before 2023.
      • progbits 5 hours ago
        I know but it's not as convenient if you have to keep scrolling through revisions.
    • kace91 4 hours ago
      Have you personally encountered slop there? I tend to use Wikipedia rabbit holes as a pastime and haven’t really felt a difference.
  • KolmogorovComp 5 hours ago
    I wish they also spent effort on the reverse: automatic rephrasing of the (many) articles that are obscure, very poorly worded, and/or written with no neutral tone whatsoever.

    And I say that as a general Wikipedia fan.

    • alt227 1 hour ago
      I would hate it so much if all the articles on Wikipedia were suddenly rewritten to have a similar tone and style. Its beauty is its diversity.
    • philipwhiuk 5 hours ago
      WP:BOLD and start your own project to do it.
      • vintermann 4 hours ago
        Or be extra bold, and have an AI bot handle the forum politics associated with being allowed to make nontrivial changes.
        • embedding-shape 4 hours ago
          Great way to get banned :)

          I've made a bunch of nontrivial changes (+- 1000s of characters), and none of them seem to have been reverted. I never asked for permission, I just went ahead and did it. Maybe the topics I care about are so non-controversial that no one has actually seen them?

  • shevy-java 3 hours ago
    It may be that AI made Wikipedia worse (I have no idea), but Wikipedia itself made several changes in the last 5 years which I hate. The "temporary account" annoys me; the strange sidebars that are now the new default also annoy me. Yes, they can be hidden, but why are they shown by default? I never want them, and I don't want to use them either.

    And some discussion pages cannot be modified either. I understand that main articles cannot be changed so easily, but now discussion pages as well? This happened to me on a few pages, in particular for "ongoing events". Well, I don't usually revisit ongoing events at a later time, so I give feedback, or WANT to give feedback, then move on. With that changed policy, I can now skip bothering to give any feedback, so Wikipedia becomes less interesting, since my feedback is about QUALITY: what to improve. And so forth.

    It is really sad how different people can worsen the quality of a project such as Wikipedia. Wikipedia is still good, but it was better, say, 6 years ago.
  • jMyles 4 hours ago
    Signed up to help.

    On PickiPedia (bluegrass wiki - pickipedia.xyz), we've developed a mediawiki extension / middleware that works as an MCP server, and causes all of the contributions from the AI in question to appear as partially grayed out, with a "verify" button. A human can then verify and either confirm the provided source or supply their own.

    It started as a fork of a mediawiki MCP server.

    It works pretty nicely.

    Of course it's only viable in situations where the operator of the LLM is willing to comply / be transparent about that use. So it doesn't address the bulk of the problem on WikiPedia.

    But still might be interesting to some:

    https://github.com/magent-cryptograss/pickipedia-mcp

  • singinishi 1 hour ago
    [dead]
  • feverzsj 4 hours ago
    Didn't they just sell access to all the AI giants?
    • input_sh 4 hours ago
      They sold AI giants enterprise downloads in order for them not to hammer Wikimedia's infrastructure by downloading bulk data the usual way available to everyone else. You really have to twist the truth to turn it into something bad for any of the sides.
    • jcattle 4 hours ago
      You mean convenient read-access?
  • russnes 3 hours ago
    Inb4 wikipedia is lost to the same narrative control as MSM
  • weli 5 hours ago
    I don't see how this is going to work. 'It sounds like AI' is not a good metric whatsoever to remove content.
    • csande17 4 hours ago
      Wikipedia agrees: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...

      That's why they're cataloging specific traits that are common in AI-generated text, and only deleting if it either contains very obvious indicators that could never legitimately appear in a real article ("Absolutely! Here is an article written in the style of Wikipedia:") or violates other policies (like missing or incorrect citations).
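
      The "very obvious indicators" class is the easy part to automate; a trivial sketch (with a made-up, abbreviated phrase list, not the project's actual catalogue):

        import re

        # Made-up, abbreviated list for illustration only.
        OBVIOUS_MARKERS = [
            r"as an ai language model",
            r"as of my last knowledge update",
            r"here is an article written in the style of wikipedia",
            r"i hope this helps",
        ]
        MARKER_RE = re.compile("|".join(OBVIOUS_MARKERS), re.IGNORECASE)

        def obviously_generated(text: str) -> bool:
            """True only for leftover chatbot boilerplate, not for style 'tells'."""
            return MARKER_RE.search(text) is not None

        print(obviously_generated(
            "Absolutely! Here is an article written in the style of Wikipedia: ..."
        ))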

    • embedding-shape 4 hours ago
      If that's your takeaway, you need to read the submission again, because that's not what they're suggesting or doing.
    • ramon156 5 hours ago
      This is about wiping unsourced and fake AI generated content, which can be confirmed by checking if the sources are valid
  • PlatoIsADisease 4 hours ago
    Isn't having a source the only thing that should be required? Why is AI speak bad?

    I'm embarrassed to be associated with US Millennials who are anti-AI.

    No one cares if you tie your legs together and finish a marathon in 12 hours. Just finish it in 3. It's more impressive.

    EDIT:

    I suppose people missed the first sentence:

    >Isn't having a source the only thing that should be required?

    >Isn't having a source the only thing that should be required?

    >Isn't having a source the only thing that should be required?

    • PurpleRamen 3 hours ago
      There is usually no quality control on AI output, because people lack the time and/or competence to do it, which is also why they are using AI in the first place.

      And AI can still make things up, which might be fine in some random internet comment, or in some article about something irrelevant happening somewhere in the world, but not in a knowledge vault like Wikipedia.

      And we are talking about Wikipedia here. They are not just checking for AI; they check everything from everyone and have many, many rules to ensure a certain level of quality. They can't check everything at once and catch all problems immediately, but they are working step by step, over time.

      > I'm a embarrassed to be associated with US Millennials who are anti AI.

      You should be embarrassed for making such a statement.

    • alt227 1 hour ago
      >Isn't having a source the only thing that should be required.

      No, referencing and discussing it properly, whilst retaining the tone and inferred meaning, are equally important. I can cite anything I want as a source, but if I use it incorrectly, or my analysis misses the point of the source, then the reference itself is pointless.

    • IshKebab 3 hours ago
      > Why is AI speak bad?

      It's not inherently bad, but if something was written with AI the chances that it is low effort crap are much much much higher than if someone actually spent time and effort on it.