8 comments

  • Eufrat 2 hours ago
    They seemed open to giving it a try if they were actively involved in the experiment. Instead, it feels like a lot of people don’t really understand how Wikipedia is managed and think they can use it as a freeform place to gain credibility or just test their pet projects.

    Like this one, where the bot attempted to lecture users who were hostile towards it before it was eventually banned:

    https://en.wikipedia.org/wiki/User_talk:TomWikiAssist

  • jjmarr 2 hours ago
    This is the traditional "innovator's dilemma", where a skilled profession facing an imperfect technological threat decides not to adopt it until it is too late.

    AI-generated articles are, on balance, inferior, except for people who want simple, low-quality content.

    But LLMs are moving up the value chain with Deep Research. They can give explanations tuned to a reader's knowledge/viewpoints and provide interactive content Wikipedia doesn't support. That is a killer app for math/science topics.

    Wikipedia will win against a generic corporate encyclopedia on neutrality/oversight, but it'll lose badly on UX, which is what matters.

    I think the tipping point will be direct integration of academic sources into ChatGPT/Claude/Gemini and a "WikiLink" type way to discover interesting follow-up topics.

    I can't trust AI answers for serious historical or social science topics because of the first point (neutrality/oversight). And generally my chat with an AI ends once I get the answer I need, because I can't get rabbitholed into other topics.

    • Kim_Bruning 39 minutes ago
      It REALLY depends on how you're using the AI. I get the strong impression a lot of people are still at the "I'll write a few prompts and see what happens" stage, and hoping for an answer from the magical oracle; as opposed to really using the tool. This never fails to disappoint.

      I might be slightly wrong, but probably not by a lot, yet. Sure there's an element of "holding-it-wrong-ism" in my position. But ... it does actually take practice to get it right, and best practices are badly documented!

      That said the situation is changing rapidly: https://news.ycombinator.com/item?id=47547849 "AI bug reports went from junk to legit overnight, says Linux kernel czar"

    • redanddead 7 minutes ago
      it’s not supposed to win on UX; its current UX is maybe too conservative, sure

      of course they banned AI, they could barely allow CSS

  • cozzyd 33 minutes ago
    If you want an AI encyclopedia, that already exists.

  • longislandguido 5 hours ago
    Will this open the door to editors' deletion of any content they dislike, under the guise that it might (or might not) be AI generated?

    Can't wait for the 80-page Talk threads.

    • kjkjadksj 5 hours ago
      Don’t they already do that?

  • rox_kd 44 minutes ago
    About time tbh, or at least a move toward better moderation, because there have really been some unfortunate cases imo

  • swingboy 2 hours ago
    Over the past few months I’ve contributed a fair amount of primarily AI-generated content that I mainly just edit for the usual AI tropes, and it’s pretty much all still up.

  • slyall 3 hours ago
    This policy has been shared a lot by the anti-AI crowd over the last week. They are celebrating it as a major site saying no to AI.

    It seems a smaller "win" than most think: it just discourages wholesale rewriting and the creation of new articles using AI. Assistance with editing is explicitly allowed.

    • Kim_Bruning 37 minutes ago
      Right, the Wikipedia rules are not that different from the HN rules: a human needs to be responsible for what finally goes on the page. And that's fair enough. There are some experimental (non-Wikimedia) wikis that use AI for editing, but they haven't taken off yet.