Relicensing with AI-Assisted Rewrite

(tuananh.net)

200 points | by tuananh 8 hours ago

42 comments

  • danlitt 1 hour ago
    I am pretty sure this article is predicated on a misunderstanding of what a "clean room" implementation means. It does not mean "as long as you never read the original code, whatever you write is yours". If you had a hermetically sealed code base that just happened to coincide line for line with the codebase for GCC, it would still be a copy. Traditionally, a human-driven clean room implementation would have a vanishingly small probability of matching the original codebase enough to be considered a copy. With LLMs, the probability is much higher (since in truth they are very much not a "clean room" at all).

    The actual meaning of a "clean room implementation" is that it is derived from an API and not from an implementation (I am simplifying slightly). Whether the reimplementation is actually a "new implementation" is a subjective but empirical question that basically hinges on how similar the new codebase is to the old one. If it's too similar, it's a copy.

    What the chardet maintainers have done here is legally very irresponsible. There is no easy way to guarantee that their code is actually MIT and not LGPL without auditing the entire codebase. Any downstream user of the library is at risk of the license switching from underneath them. Ideally, this would burn their reputation as responsible maintainers, and result in someone else taking over the project. In reality, probably it will remain MIT for a couple of years and then suddenly there will be a "supply chain issue" like there was for mimemagic a few years ago.

    • brians 12 minutes ago
      I do not agree with your interpretation of copyright law. Copyright does ban copies, but for something to be a "copy" there has to be information flow from the original to it. Spontaneous generation of the same content is often taken by the courts as a sign that the content is purely functional, derived from the requirements by mathematical laws.

      Patent law is different and doesn't rely on information flow in the same way.

    • femto 20 minutes ago
      > If you had a hermetically sealed code base that just happened to coincide line for line with the codebase for GCC, it would still be a copy.

      That's not what the law says [1]. If two people happen to independently create the same thing, they each have their own copyright.

      If it's highly improbable that two works are independent (e.g. the GCC code base), the first author would probably go to court claiming copying, but their case would still fail if the second author could show that their work was independent, no matter how improbable.

      [1] https://lawhandbook.sa.gov.au/ch11s13.php?lscsa_prod%5Bpage%...

    • zabzonk 30 minutes ago
      > It does not mean "as long as you never read the original code, whatever you write is yours"

      I think there is precedent that says exactly this - for example, the BIOS rewrites for the IBM PC from people like Phoenix. And it would be trivial to instruct an LLM to prefer to use (say, in assembler) register C over register B wherever possible, resulting in different code.

      • bandrami 29 minutes ago
        Different but still derivative
        • zabzonk 17 minutes ago
          Well, I am not exactly a hotshot 8086 programmer (though I do alright), but if I were asked to reproduce the IBM BIOS (which I have seen), I think I would come up with something very similar but not identical - it is really not rocket science code, so the LLM replacing me would have rather few alternatives to choose from.
    • petercooper 22 minutes ago
      > The actual meaning of a "clean room implementation" is that it is derived from an API and not from an implementation

      I know you were simplifying, and not to take away from your well-made broader point, but an API-derived implementation can still result in problems, as in Google v. Oracle [1]. The Supreme Court found in favor of Google (6-2) along "fair use" lines, but the case dodged setting any precedent on the nature of API copyrightability. I don't know whether later cases have set any precedent yet, but it just came to mind.

      [1]: https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....

    • dathinab 1 hour ago
      the author speaks about code which is syntactically completely different but semantically does the same thing

      i.e. a re-implementation

      which can either

      - still be a derived work, i.e. you're just obfuscating a copyright violation

      - be a new work doing the same

      nothing prevents an AI from producing a spec based on an API, API documentation, and API usage/fuzzing, then resetting the AI and using that spec to produce a rewrite
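
      A minimal sketch of that two-phase flow, purely illustrative (the complete() helper stands in for any LLM API call; the point is that only the prose spec crosses between the two contexts):

          def complete(system: str, prompt: str) -> str:
              """Placeholder for an LLM API call; wire up a real provider here."""
              raise NotImplementedError

          def derive_spec(api_docs: str, fuzz_log: str) -> str:
              # "Dirty" phase: sees docs and observed behavior, never the source.
              return complete(
                  system="Write a precise behavioral spec from this material.",
                  prompt=api_docs + "\n\nObserved input/output pairs:\n" + fuzz_log,
              )

          def reimplement(spec: str) -> str:
              # Fresh context: receives nothing but the spec.
              return complete(
                  system="Implement this spec from scratch; you have no other context.",
                  prompt=spec,
              )

          spec = derive_spec(open("api_docs.txt").read(), open("fuzz_log.txt").read())
          candidate = reimplement(spec)  # the would-be clean-room implementation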

      I mean "doing the same" is NOT copyright protection, you need patent law for that. Except even with patent law you need to innovations/concepts not the exact implementation details. Which means that even if there are software patents (theoretically,1) most things done in software wouldn't be patentable (as they are just implementation details, not inventions)

      (1): I say theoretically because there is a very long track record of patents being granted which really should never have been granted. This, combined with the high cost of invalidating patents, has caused a ton of economic damage.

      • jacquesm 1 hour ago
        No, that depends on whether or not the AI work product rests on key contributions to its training set without which it would not be able to do the work, see other comment. In that case it looks like 'a new work doing the same' but it is still a derived work.

        Ted Nelson was years ahead of his time; we really need his Xanadu to keep track of fractional copyright. If we had such a mechanism, and AI authors respected it, then we would be able to say that your work is derived from 3000 other original works and that you added 6 lines of new code.

        • uyzstvqs 28 minutes ago
          No, training and inference are two separate processes. Training data is never redistributed, only obtained and analyzed. What matters is what data is put into context during inference. This is controlled by the user.

          AI/ML is complex, so as a simpler analogy: If I watch The Simpsons, and I create an amusing infographic of how often Homer says "D'oh!" over time, my infographic would be an original work. AI training follows the same principle.

          • jacquesm 19 minutes ago
            > my infographic would be an original work.

            > AI training follows the same principle.

            If you really believe that then we can't have a meaningful conversation about this; that's not even ELI5 territory, that's just disconnected. You should be asking questions, not telling people how it works.

    • pmarreck 50 minutes ago
      > With LLMs, the probability is much higher (since in truth they are very much not a "clean room" at all).

      I beg to differ. Please examine any of my recent codebases on github (same username); I have cleanroom-reimplemented par2 (par2z), bzip2 (bzip2z), rar (rarz), 7zip (z7z), so maybe I am a good test case for this (I haven't announced this anywhere until now, right here, so here we go...)

      https://github.com/pmarreck?tab=repositories&type=source

      I was most particular about the 7zip reimplementation since it is the most likely to be contentious. Here is my repo with the full spec that was created by the "dirty team" and then worked off of by the LLM with zero access to the original source: https://github.com/pmarreck/7z-cleanroom-spec

      Not only are they rewritten in a completely different language, but to my knowledge they are also completely different semantically except where they cannot be to comply with the specification. I invite you and anyone else to compare them to the original source and find overt similarities.

      With all of these, I included two-way interoperation tests with the original tooling to ensure compatibility with the spec.
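
      Roughly, one direction of such a round-trip test looks like this (the 7z invocation is the real CLI; the z7z arguments here are illustrative, not the actual interface):

          import pathlib, subprocess, tempfile

          def roundtrip(payload: bytes) -> bool:
              """Archive with the reference 7z, extract with the reimplementation,
              and check the payload survives byte-for-byte."""
              with tempfile.TemporaryDirectory() as tmp:
                  d = pathlib.Path(tmp)
                  (d / "payload.bin").write_bytes(payload)
                  archive, out = d / "payload.7z", d / "out"
                  out.mkdir()
                  # Reference tool creates the archive (real 7z syntax).
                  subprocess.run(["7z", "a", str(archive), str(d / "payload.bin")], check=True)
                  # Reimplementation extracts it (illustrative z7z syntax).
                  subprocess.run(["z7z", "x", str(archive), "--out", str(out)], check=True)
                  return (out / "payload.bin").read_bytes() == payload

          assert roundtrip(b"hello clean room" * 1000)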

      • airza 42 minutes ago
        By what means did you make sure your LLM was not trained with data from the original source code?
      • ostacke 38 minutes ago
        But that's not really what danlitt said, right? They did not claim that it's impossible for an LLM to generate something different, merely that it's not a clean room implementation, since the LLM, one must assume, is trained on the code it's re-implementing.
  • pornel 13 minutes ago
    Generative AI changed the equation so much that our existing copyright laws are simply out of date.

    Even copyright laws with provisions for machine learning were written when that meant tangential things like ranking algorithms or training of task-specific models that couldn't directly compete with all of their source material.

    For code it also completely changes where the human-provided value is. Copyright protects specific expressions of an idea, but we can auto-generate the expressions now (and the LLM indirection messes up what "derived work" means). Protecting the ideas that guided the generation process is a much harder problem (we have patents for that and it's a mess).

    It's also a strategic problem for GNU. GNU's goal isn't licensing per se, but giving users freedom to control their software. Licensing was just a clever tool that repurposed copyright law to make the freedoms GNU wanted somewhat legally enforceable. Now that it's so easy to launder a codebase's license, it stops being an effective tool.

    GNU's licensing strategy also depended on a scarcity of code (contribute to GCC, because writing a whole compiler from scratch is too hard). That hasn't worked well for a while due to permissive OSS already reducing scarcity, but gen AI is the final nail in the coffin.

  • nairboon 7 hours ago
    That code is still LGPL, it doesn't matter what some release engineer writes in the release notes on Github. All original authors and copyright holders must have explicitly agreed to relicense under a different license, otherwise the code stays LGPL licensed.

    Also, the mentioned SCOTUS decision is concerned with authorship of generative AI products. That's very different from this case. Here we're talking about a tool that transformed source code and somehow magically got rid of copyright through this transformation? Imagine the consequences for the US copyright industry if that were actually possible.

    • pavlov 2 hours ago
      If anything, the SCOTUS decision would seem to imply that generative AI transformations produce no additional creative contribution and therefore the original copyright holder has all rights to any derived AI works.

      (IANAL)

      • dathinab 1 hour ago
        that is a very good formulation of what I have been trying to say

        but also probably not fully right

        as far as I understand they avoid the decision of whether an AI can produce creative work by saying that neither the AI nor its owner/operator can claim copyright ownership (which makes it de-facto public domain)

        this wouldn't change anything wrt. derived work still having the original authors copyright

        but it could change things wrt. parts of the derived work which by themselves are not derived

        • pseudalopex 1 hour ago
          The court avoided a decision of what the operator could have copyrighted because he said he was not the author.
    • dathinab 1 hour ago
      iff it went through the full clean room rewrite just using AI then no, it's de-facto public domain (but also it probably didn't do so)

      iff it is a completely new implementation with completely different internals then it could also not be LGPL, even if produced by a person with in-depth knowledge. Copyright only cares whether you "copied" something, not whether you had "knowledge" or whether it "behaves the same". So as long as it's distinct enough it can still be legally fine. The "full clean room" requirement is about "what is guaranteed to hold up in front of a court", not "what might pass as non-derivative but with legal risk".

  • kshri24 6 hours ago
    > The ownership void: If the code is truly a “new” work created by a machine, it might technically be in the public domain the moment it’s generated, rendering the MIT license moot.

    How would that work? We still have no legal conclusion on whether code generated by AI models trained on all publicly available source (irrespective of license) is legal or not. IANAL, but IMHO it is totally illegal, as no permission was sought from the authors of the source code the models were trained on. So there is no way to just release code created by a machine into the public domain without knowing how the model was inspired to come up with the generated code in the first place. Pretty sure it would be considered in the scope of "reverse engineering", and that is not specific only to humans. You can extend it to machines as well.

    EDIT: I would go so far as to say the most restrictive license the model is trained on should be applied to all model-generated code. And a licensing model with the original authors (all GitHub users who contributed code in some form) should be set up so they are reimbursed by AI companies. In other words, a % of profits must flow back to the community as a whole every time code-related tokens are generated. Even if everyone receives pennies it doesn't matter. That is fair. It should also extend to artists whose art was used for training.

    • dathinab 1 hour ago
      > How would that work?

      AI can't claim ownership, and humans can't either, as they haven't produced it. If it is guaranteed that no one can claim ownership, it is often seen as being in the public domain(1).

      In general it is irrelevant what the copyright of the AI training data is. At least in the US, judges have been relatively clear about that. (Except if the AI reproduces input data close to verbatim. _But in general we aren't speaking about AI being trained on a code base, but about an AI using/rewriting it_.)

      (1): Which isn't the same as no one knowing who has ownership. It also might be owned by no one in the sense that no one can grant you copyright permission (so the opposite of public domain), but also no one can sue (so de-facto public domain).

      • graemep 22 minutes ago
        > humans can't either as they haven't produced it. If there is guaranteed no one which can claim ownership it often seen as being in the public domain.

        Says who? The US ruling the article refers to does not cover this.

        It is different in other countries. Even if US law says it is public domain (which is probably not the case) you had better not distribute it internationally. For example, UK law explicitly says a human is the author of machine generated content: https://news.ycombinator.com/item?id=47260110

      • jacquesm 1 hour ago
        Humans can't claim ownership, but they are still liable for the product of their bot. That's why MS was so quick to indemnify their users: they know full well that it is going to be super hard to prove that there is a key link to some original work.

        The main analogy is this one: you take a massive pile of copyrighted works, cut them up into small sections, and toss the whole thing in a centrifuge; then, when prompted to produce a work, you use a statistical method to pull pieces of those copyrighted works back out. Sometimes you may find that you are pulling pieces out of the centrifuge in the order in which they went in, which after a certain number of tokens becomes a copyright violation.

        This suggests there are some obvious ways in which AI companies can protect themselves from claims of infringement, but as far as I'm aware not a single one has protections in place to ensure that they do not materially reproduce any fraction of the input texts, other than recognizing prompts that explicitly ask them to do so.

        So it won't produce the lyrics of 'Let it be'. But they'll be happy to write you mountains of prose that strongly resembles some of the inputs.

        The fact that they are not doing that tells you all you really need to know: they know that everything that their bots spit out is technically derived from copyrighted works. They also have armies of lawyers and technical arguments to claim the opposite.

        • dathinab 1 hour ago
          > Humans can't claim ownership, but they are still liable for the product of their bot.

          sure,

          but that is completely unrelated to this discussion

          which is about AI using code as input to produce similar code as output

          not about AI being trained on code

          • jacquesm 1 hour ago
            > which is about AI using code as input to produce similar code as output

            > not about AI being trained on code

            The two are very directly connected.

            The LLM would not be able to do what it does without being trained, and it was trained on copyrighted works of others. Giving it a piece of code for a rewrite is a clear case of transformation, no matter what, but now it also rests on a mountain of other copyrighted code.

            So now you're doubly in the wrong: you are willfully using AI to violate copyright. AI does not create original works, period.

            • bluGill 49 minutes ago
              Every programmer is trained on the copyrighted works of others. There are vanishingly few modern programs with available source code in the public domain.

              It isn't clear how/if an LLM is different from the brain, but we have all trained by looking at copyrighted source code at some time.

              • jacquesm 27 minutes ago
                > It isn't clear how/if an LLM is different from the brain

                It's very clear: the one is a box full of electronics, the other is part of the central nervous system of a human being.

                > but we have all trained by looking at copyrighted source code at some time.

                That may be so, but not usually the copyrighted source code that we are trying to reproduce. And that's the bit that matters.

                You can attempt to whitewash it but at its core it is copyright infringement and the creation of derived works.

    • m4rtink 50 minutes ago
      I would be totally fine with all code generated by LLMs being considered to be under GPL v3 unless the model authors can prove without any doubt it was not trained on any GPL v3 code - viral licensing to the max. ;-)
    • kouteiheika 6 hours ago
      > I would go so far as to say the most restrictive license that the model is trained on should be applied to all model generated code.

      That license is called "All Rights Reserved", in which case you wouldn't be able to legally use the output for anything.

      There are research models out there which are trained on only permissively licensed data (i.e. no "All Rights Reserved" data), but they're, colloquially speaking, dumb as bricks when compared to state-of-art.

      But I guess the funniest consequence of the "model outputs are a derivative work of their training data" position would be that it'd essentially wipe out (or at the very least force a revert to a pre-AI-era commit of) every open source project which may have included any AI-generated or AI-assisted code, which currently means pretty much every major open source project out there. And it would also make it impossible to legally train any new models whose training data isn't strictly pre-AI, since otherwise you wouldn't know whether your training data is contaminated or not.

      • progval 5 hours ago
        > There are research models out there which are trained on only permissively licensed data

        Models whose authors tried to train only on permissively licensed data.

        For example, https://huggingface.co/bigcode/starcoder2-15b was meant to be trained on a permissively licensed dataset, but its authors filtered only on repository-level license, not file-level. So when searching for "under the terms of the GNU General Public License" on https://huggingface.co/spaces/bigcode/search-v2 back when it was working, you would find it was trained on many files with a GPL header.

      • kshri24 5 hours ago
        I agree with your assessment. Which is why I was proposing a middle ground where an agreement is set up between the model-training companies and the collective of developers/artists et al., with a license agreement under which they are rewarded for their original work in perpetuity. A tiny % of the profits can be shared, which would be a form of UBI. This is fair not only because companies are using AI-generated output, but because developers themselves are also paying for and using AI-generated output that is trained on other developers' input. I would feel good (in my conscience) that I am not "stealing" someone else's effort and that they are being paid for it.
        • carlob 2 hours ago
          Why settle for some private agreement between creators and AI companies where a tiny percentage is shared? Let's just tax the hell out of AI companies and redistribute.
          • rswail 20 minutes ago
            Because the authors of the original content deserve recompense for their work.

            That's what the whole copyright and patent regimes are designed to achieve.

            It's to encourage the creation of knowledge.

            US Constitution, Article I, section 8:

                To promote the Progress of Science and useful Arts, by
                securing for limited Times to Authors and Inventors the
                exclusive Right to their respective Writings 
                and Discoveries;
            • carlob 7 minutes ago
              Right, it says exclusive rights, which does not translate to "we siphon everything and you get a tiny percentage of our profits"; it means I can choose to say no to all of this. To me the matter of compensation and that of authorship rights are mostly orthogonal.
          • kshri24 2 hours ago
            > let's just tax the hell out of AI companies and redistribute.

            That's not what I favor because you are inserting a middleman, the Government, into the mix. The Government ALWAYS wants to maximize tax collections AND fully utilize its budget. There is no concept of "savings" in any Government anywhere in the World. And Government spending is ALWAYS wasteful. Tenders floated by Government will ALWAYS go to companies that have senators/ministers/prime ministers/presidents/kings etc as shareholders. In other words, the tax money collected will be redistributed again amongst the top 500 companies. There is no trickle down. Which is why agreements need to be between creators and those who are enjoying fruits of the creation. What have Governments ever created except for laws that stifle innovation/progress every single time?

            • carlob 45 minutes ago
              > What have Governments ever created except for laws that stifle innovation/progress every single time?

              https://www.youtube.com/watch?v=Qc7HmhrgTuQ

              In all seriousness, without the government you would have no innovation and progress, because it's the public school system, functioning roads, research grants, and a stable and lawful society that allow you to do any kind of innovation.

              Apart from that, you have responded to a strawman. I said redistribute, not give to the government. I explicitly worded things that way because I don't think we should be having a discussion on policy.

              I think we are moving to an economy where the share of profits taken by capital becomes much larger than the one taken by labor. If that happens, then laborers will have very little discretionary income to fuel consumption, and even capitalists will end up suffering. We can choose to redistribute now, or wait for it to happen naturally; however, that usually happens in a much more violent way, be it hyperinflation, famine, war or revolution.

            • LadyCailin 2 hours ago
              Uh, no? https://en.wikipedia.org/wiki/Government_Pension_Fund_of_Nor...

              Just because you have a failure of imagination for how government should work, doesn’t mean it can’t work. And stifling innovation is exactly what I want, when that innovation is “steal from everyone so we can invent the torment nexus” or whatever’s going on these days.

              • kshri24 1 hour ago
                Pension fund is an example of what exactly? All countries have pension funds. This has nothing to do with Governments wasting money. Please go beyond tiny European countries that have very few verticals and are largely dependent on outside support for protecting their sovereignty. They are not representative of most of the World.

                > As its name suggests, the Government Pension Fund Global is invested in international financial markets, so the risk is independent from the Norwegian economy. The fund is invested in 8,763 companies in 71 countries (as of 2024).

                Basically what I said above. You give your tax dollars to Government and it will invest it into top 500 companies. In the Norway Pension Fund case it is 8,763 companies in 71 countries. None of them are startups/small businesses/creators.

                > And stifling innovation is exactly what I want, when that innovation is “steal from everyone so we can invent the torment nexus” or whatever’s going on these days.

                You are confusing current lack of laws regulating this space with innovation being evil. Innovation is not evil. The technology per se is not evil. Every innovation brings with it a set of challenges which requires us to think of new legislation. This has ALWAYS been the case for thousands of years of human innovation.

        • kouteiheika 3 hours ago
          > Which is why I was proposing a middle ground where an agreement is set up between the model-training companies and the collective of developers/artists et al., with a license agreement under which they are rewarded for their original work in perpetuity. A tiny % of the profits can be shared, which would be a form of UBI. This is fair

          That wouldn't be fair, because these models are not only trained on code. A huge chunk of the training data is just "random" webpages scraped off the Internet. How do you propose those people are compensated in such a scheme? How do you even know who contributed, and how much, and to whom to direct the money?

          I think the only "fair" model would be to essentially require models trained on data that you didn't explicitly license to be released as open weights under a permissive license (possibly with a slight delay to allow you to recoup costs). That is: if you want to gobble up the whole Internet to train your model without asking for permission, then you're free to do so, but you need to release the resulting model so that all of humanity can benefit from it, instead of monopolizing it behind an API paywall like e.g. OpenAI or Anthropic does.

          Those big LLM companies harvest everyone's data en masse without permission, train their models on it, and then not only do they not release jack squat, but they have the gall to put up malicious, explicit roadblocks (hiding CoT traces, banning competitors, etc.) so that no one else can do it to them, and when people try they call it an "attack"[1]. This is what people should be angry about.

          [1] -- https://www.anthropic.com/news/detecting-and-preventing-dist...

          • duskdozer 1 hour ago
            >under a permissive license

            well, assuming all data that is itself not permissively licensed is excluded

      • foota 5 hours ago
        I don't know how far it would get, but I imagine that a FAANG will be able to get the farthest here by virtue of having mountains of corporate data that they have complete ownership over.
        • msdz 3 hours ago
          They'd probably get the farthest, but they won't pursue that because they don't want to end up leaking the original training data. It is possible with regular language/text models to reconstruct massive consecutive parts of the training data [1], so it ought to be possible for their internal code, too.

          [1] https://arxiv.org/abs/2601.02671

    • adrianN 6 hours ago
      We'll have to wait until the technology progresses sufficiently that AI cuts into Disney's profit.
    • shevy-java 2 hours ago
      "We still have no legal conclusion on whether AI model generated code, that is trained on all publicly available source (irrespective of type of license), is legal or not."

      I think it will depend on HOW the AI arrived at the new code.

      If it was using the original source code then it probably is guilty-by-association. But in theory an AI model could also generate a rewrite when fed intermediate data not based on that project.

      • dathinab 1 hour ago
        > "We still have no legal conclusion on whether AI model generated code, that is trained on all publicly available source (irrespective of type of license), is legal or not."

        it depends on the country you are in

        but overall, judges in the US have fairly consistently ruled it legal

        and this is extremely unlikely to change or be effectively interpreted differently

        but where things are more complex is:

        - the model containing training data (instead of generic abstractions based on it), determined by whether or not it can be convinced to produce close-to-verbatim output of the training data in question

        - the model producing close-to-verbatim training data

        the latter seems to be mostly (always?) seen as a copyright violation, with the issue that the person who commits the violation (i.e. uses the produced output) might not know

        the former could mean that not just the output but the model itself can count as a form of database containing copyright-violating content. In which case the model provider has to remove it, which is technically impossible(1)... The pain point with that approach is that it would likely kill public models, while privately kept models will for every case put in a filter, _claim_ to have removed it, and likely get away with it. So while IMHO it should be a violation conceptually, it probably is better if it isn't.

        But also, the case the original article refers to is more about models interacting with/using a code base than about them being trained on it.

        (1): In contrast, content is very much removable from knowledge bases used by LLMs.

      • amelius 2 hours ago
        You should just look at it as a giant computation graph. If some of the inputs in this graph are tainted by copyright, and an output depends on these inputs (changing them can change the output), then the output is tainted too.
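
        A toy version of that taint rule (made-up node names):

            # An output is tainted iff any input it transitively depends on is tainted.
            deps = {
                "weights": ["gpl_code", "mit_code", "web_text"],
                "output": ["weights", "prompt"],
            }
            tainted_inputs = {"gpl_code"}

            def is_tainted(node: str) -> bool:
                if node in tainted_inputs:
                    return True
                return any(is_tainted(dep) for dep in deps.get(node, []))

            print(is_tainted("output"))  # True: depends on weights, which ingested GPL code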
    • d1sxeyes 3 hours ago
      > We still have no legal conclusion on whether AI model generated code, that is trained on all publicly available source (irrespective of type of license), is legal or not.

      That horse has bolted. No one knows where all the AI code is any more, and it would no longer be possible to comply with a ruling that no one can use AI-generated code.

      There may be some mental and legal gymnastics needed to make it possible, but it will be made legal, because it's too late to do anything else now.

      • conartist6 2 hours ago
        I hate that this may be true, but I also don't think the law will fix this for us.

        I think this is down to the community and the culture to draw our red lines on and enforce them. If we value open source, we will find a way to prevent its complete collapse through model-assisted copyright laundering. If not, OSS will be slowly enshittified as control of projects flows to the most profit-motivated entities.

        • d1sxeyes 51 minutes ago
          But what tools do we have to stop this happening? I agree, we can (and should) all refuse to participate in licence laundering, but there will always be folks less principled.

          I don’t know what happens next, honestly.

          • conartist6 26 minutes ago
            I don't either, but I guess we're both about to find out. The only surety is that there will be moves and countermoves. As far as I can tell, the best thing we could do right now is fund software-legal organizations like the EFF, which are likely to be the ones to litigate the test cases. What's hurting us most right now is that we don't know what the law means in this context, so we don't fully understand the scale of what we need to protect against, or what tools we have that the courts will recognize.
    • thedevilslawyer 6 hours ago
      That's impractical enough that you might as well wish for UBI and world peace rather than this.
      • kshri24 6 hours ago
        Why is it impractical? Github already has a sponsor system. Also this can be a form of UBI.
  • bengale 5 minutes ago
    Would it work to have an AI write the spec, and a different AI implement the spec?

    I think there are going to be a lot of these types of scenarios where the old way of doing things just doesn't hold.

  • ekjhgkejhgk 3 minutes ago
    > Any developer could take a GPL-licensed project, feed it into an LLM with the prompt “Rewrite this in a different style,” and release it under MIT

    Does this argument make sense? Even before LLMs, a developer could "rewrite this in a different style" and release it under a different license. Why are LLMs a new element in this argument?

    • s0ss 2 minutes ago
      Because now with an LLM it’s almost trivial to do this? Before it was not.
  • abrookewood 3 hours ago
    This seems relevant: "No right to relicense this project (github.com/chardet)" https://news.ycombinator.com/item?id=47259177
    • shevy-java 3 hours ago
      That's another project though, right? In this case I think it is different because that project just seems stolen. The courts can probably verify this too.

      I think the main question is when a rewrite is a clean rewrite, via AI. If it is a clean rewrite they can choose any licence.

      • littlestymaar 2 hours ago
        No, TFA is about chardet too:

        > chardet, a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla's C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.

  • samrus 6 hours ago
    > The ownership void: If the code is truly a “new” work created by a machine, it might technically be in the public domain the moment it’s generated, rendering the MIT license moot.

    I'm struggling to see where this conclusion came from. To me it sounds like the AI-written work can not be coppywritten, and so it's kind of like copy-pasting the original code. Copy-pasting the original code doesn't make it public domain. AI-gen code can't be copywritten, or entered into the public domain, or used for purposes outside of the original code's license. What's the paradox here?

    • cxr 18 minutes ago
      FYI: the concept is "copyright" not "copywrite". It doesn't turn into "copywritten" as an adjective. The adjective is "copyrighted".
    • Sharlin 4 hours ago
      The point is that even a work written by an AI trained exclusively on liberally licensed or public domain material cannot have copyright (isn’t a "work" in the legal sense) and thus nobody has standing to put it under a license or claim any rights to it.

      If I train a limerick generator on the contents of Project Gutenberg, no matter how creative its outputs, they’re not copyrightable under this interpretation. And it’s by far the most reasonable interpretation of the law as both intended and written. Entities that are not legal persons cannot have copyright, but legal persons also cannot claim copyright of something made by a nonperson, unless they are the "creative force" behind the work.

    • NitpickLawyer 5 hours ago
      > To me it sounds like the AI-written work can not be coppywritten

      I think we haven't even begun to consider all the implications of this, and while people ran with that one case where someone couldn't copyright a generated image, it's not that easy for code. I think there needs to be way more litigation before we can confidently say it's settled.

      If "generated" code is not copyrightable, where do draw the line on what generated means? Do macros count? Does code that generates other code count? Protobuf?

      If it's the tool that generates the code, again, where do we draw the line? Is it just using 3rd-party tools? Would training your own count? Would a "random" code gen where you pick the winners (by whatever means) count? Does brute-forcing the whole space (silly example, but hey, we're in silly space here) count?

      Is it just "AI" adjacent that isn't copyrightable? If so how do you define AI? Does autocomplete count? Intellisense? Smarter intellisense?

      Are we gonna have to have a trial where there's at least one lawyer making silly comparisons between LLMs and power plugs? Or maybe counting abacuses (abaci?)... "But your honour, it's just random numbers / matrix multiplications..."

      • lelanthran 3 hours ago
        All of your questions have seemingly trivial answers. Maybe I am missing something, but...

        > If "generated" code is not copyrightable, where do draw the line on what generated means? Do macros count?

        Does the output of the macro depend on ingesting someone else's code?

        > Does code that generates other code count?

        Does the output of the code depend on ingesting someone else's code?

        > Protobuf?

        Does your protobuf implementation depend on ingesting someone else's code?

        > If it's the tool that generates the code, again where do we draw the line?

        Does the tool depend on ingestion of someone else's code?

        > Is it just using 3rd party tools?

        Does the 3rd party tool depend on ingestion of someone else's code?

        > Would training your own count?

        Does the training ingest someone else's code?

        > Would a "random" code gen and pick the winners (by whatever means) count?

        Does the random codegen depend on ingesting someone else's code?

        > Bruteforce all the space (silly example but hey we're in silly space here) counts?

        Does the bruteforce algo depend on ingesting someone else's code?

        > Is it just "AI" adjacent that isn't copyrightable?

        No, it's the "depends on ingesting someone else's code" that makes it not copyrightable.

        > If so how do you define AI?

        Doesn't matter whether it is AI or not, the question is are you ingesting someone else's code.

        > Does autocomplete count?

        Does the specific autocomplete in question depend on ingesting someone else's code?

        > Intellisense?

        Does the specific Intellisense in question depend on ingesting someone else's code?

        > Smarter intellisense?

        Does the specific Smarter Intellisense in question depend on ingesting someone else's code?

        ...

        Look, I see where you're going with this - reductio ad absurdum and all - but it seems to me that you're trying to muddy the waters by claiming that either all code generation must be allowed or all of it disallowed.

        Let me clear the waters for all the readers - the complaint is not about code generation, it's about ingesting someone else's code, frequently for profit.

        All these questions you are asking seem to me to be irrelevant and designed to shift the focus from the ingestion of other people's work to something that no one is arguing against.

        • NitpickLawyer 3 hours ago
          Interesting.

          > the complaint is not about code generation, it's about ingesting someone else's code, frequently for profit.

          Why do you think that is, and what complaint specifically? I was talking about this:

          > The Copyright Office reviewed the decision in 2022 and determined that the image doesn't include “human authorship,” disqualifying it from copyright protection

          There seems to be zero mention of training there. In fact, if you read the appeals court case [1], they don't mention training either:

          > We affirm the denial of Dr. Thaler’s copyright application. The Creativity Machine cannot be the recognized author of a copyrighted work because the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being. Given that holding, we need not address the Copyright Office’s argument that the Constitution itself requires human authorship of all copyrighted material. Nor do we reach Dr. Thaler’s argument that he is the work’s author by virtue of making and using the Creativity Machine because that argument was waived before the agency.

          I have no idea where you got the idea that this was about training data. Neither the copyright office nor the appeals court even mention this.

          But anyway, since we're here, let's entertain this. So you're saying that training data is the differentiator. OK. So in that case, would training on "your own data" make this ok with you? Would training on "synthetic" data be ok? Would a model that sees no "proprietary" code be ok? Would a hypothetical model trained just on RL with nothing but a compiler and endless compute be ok?

          The courts seem to hint that "human authorship" is still required. I see no end to the "... but what about x", as I stated in my first comment. I was honestly asking those questions, because the crux of the case here rests on "human authorship of the piece to be copyrighted", not on anything prior.

          [1] - https://fingfx.thomsonreuters.com/gfx/legaldocs/egpblokwqpq/...

          • lelanthran 2 hours ago
            > There seems to be 0 mentioning of training there. In fact if you read the appeal's court case [1] they don't mention training either:

            > ...

            > I have no idea where you got the idea that this was about training data. Neither the copyright office nor the appeals court even mention this.

            In both the story and the comments, that's the prevailing complaint. FTFA:

            > Their claim that it is a “complete rewrite” is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.

            I mean, I know it's passé to read the story, but I still do it, so my comments are on the story, not just the title taken out of context.

            > But anyway, since we're here, let's entertain this. So you're saying that training data is the differentiator.

            Well, that's the complaint in the story and in the comment section, so it makes sense to address that and that alone.

            > OK. So in that case, would training on "your own data" make this ok with you?

            Yes.

            > Would training on "synthetic" data be ok?

            If provenance of "synthetic data" does not depend on some upstream ingesting someone else's work, then yes.

            > Would a model that sees no "proprietary" code be ok?

            If the model does not depend on someone else's work, then Yes.

            > Would a hypothetical model trained just on RL with nothing but a compiler and endless compute be ok?

            Yes.

            *Note: Let me clarify that "someone else's work" means the work of someone who has not consented to or licensed their work for ingestion and subsequent reproduction under the terms that AI/LLM training does it. If someone licensed you their work to train a model, then have at it.

            • NitpickLawyer 2 hours ago
              Ah! I think I get where the confusion was. I was quoting something from another comment, and specifically commenting on that.

              > > To me it sounds like the AI-written work can not be coppywritten

              I was only commenting on that.

        • user34283 2 hours ago
          I'm thinking that the relevant question would be whether the part whose copyrightability we want to determine is an intellectual invention of a human mind.

          "Ingesting someone else's code" does not seem very useful here - it's hardly quantifiable, nor is "ingestion" the key question I believe.

    • laksjhdlka 6 hours ago
      They say "if" it's a new work, then it might not be copyrightable, I guess. You suppose that it's still the original work, and hence it's still got that copyright.

      I think they are rhetorically asking if your position is correct.

    • __alexs 2 hours ago
      AI-written work absolutely is copyrightable. There are just some unresolved tensions around where the lines are and how much and what kind of involvement humans need to have in the process.
  • emsign 3 hours ago
    By design you can't know if the LLM doing the rewrite was exposed to the original code base. Unless the AI company is disclosing their training material, which they won't because they don't want to admit breaking the law.
    • shevy-java 2 hours ago
      > By design you can't know if the LLM doing the rewrite was exposed to the original code base.

      I agree, in theory. In practice courts will request that the decision-making process be made public. The "we don't know" excuse won't hold; real people also need to tell the truth in court. LLMs may not lie to the court or use the Chewbacca defense.

      Also, I am pretty certain you CAN have AI models that explain how they arrived at a decision. And they can generate valid code too, so anything can be autogenerated here - in theory.

    • soulofmischief 3 hours ago
      Seeing the source for a project doesn't prevent me from ever creating a similar project, just because I've seen the code. The devil is in the details.
      • shevy-java 2 hours ago
        Agreed, but the courts could conclude that all LLMs that are not open about their decisions have stolen things. So LLMs would auto-lose in court.
        • orthoxerox 2 hours ago
          Or they can conclude otherwise.
    • gostsamo 3 hours ago
      it was exposed when it was shown the thing to rewrite.
      • shevy-java 2 hours ago
        In this context here I think that is a correct statement. But I think you can have LLMs that can generate the same or similar code, without having been exposed to the other code.
    • skeledrew 3 hours ago
      It doesn't even matter if the LLM was exposed during training. A clean-room rewrite can be done by having one LLM create a highly detailed analysis of the target (reverse engineering, if it's in binary form), and providing that analysis to another LLM to base an implementation on.
      • k__ 3 hours ago
        It doesn't matter for the LLM writing the analysis.

        It does matter for the one who implements it.

        Finding an LLM that's good enough to do the rewrite while being able to prove it wasn't exposed to the original GPL code is probably impossible.
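
        The best you can realistically do is an after-the-fact audit for verbatim overlap, e.g. a toy n-gram check like this (which catches copying, not training exposure):

            def ngrams(tokens, n=8):
                return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

            def shared_windows(original: str, generated: str, n=8):
                """Windows of n consecutive tokens appearing in both texts."""
                return ngrams(original.split(), n) & ngrams(generated.split(), n)

            # An empty result is weak evidence of independence; a serious audit
            # would normalize identifiers and formatting before comparing.
            print(sorted(shared_windows(open("original.py").read(),
                                        open("rewrite.py").read())))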

      • xyzsparetimexyz 3 hours ago
        Why does it need 2 LLMs? LLMs aren't people. I'm not even sure that it needs to be done in 2 separate contexts.
        • skeledrew 1 hour ago
          It doesn't have to be 2 LLMs, but nowadays there's LLM auto-memory, which means it could be argued that the same LLM doing both analysis and reimplementation isn't "clean". And the entire purpose behind the "clean" is to avoid that argument.
        • shevy-java 2 hours ago
          Agreed. But even then I don't see the problem. Multiple LLMs could work on the same project.
    • d1sxeyes 3 hours ago
      Is it against the law for an LLM to read LGPL-licensed code?

      That’s a complex question that isn’t solved yet. Clearly, regurgitating verbatim LGPL code in large chunks would be unlawful. What’s much less clear is a) how large do those chunks need to be to trigger LGPL violations? A single line? Two? A function? What if it’s trivial? And b) are all outputs of a system which has received LGPL code as an input necessarily derivative?

      If I learn how to code in Python exclusively from reading LGPL code, and then go away and write something new, it's clear that I haven't committed any violation of copyright under existing law, even if all I'm doing as a human is rearranging tokens I understand from reading LGPL code semantically to achieve a new result.

      It’s a trying time for software and the legal system. I don’t have the answers, but whether you like them or not, these systems are here to stay, and we need to learn how to live with them.

  • stuaxo 1 hour ago
    I don't see how (with current LLMs that have been trained on mixed-license data) you can use the LLM to rewrite to a less restrictive license.

    You could probably use it to output code that is GPL'd though.

  • dathinab 1 hour ago
    IMHO/IMHU AI can't claim authorship and as such can't copyright its work.

    This doesn't prevent any form of automatic copyrighting through production of derivative code or similar. It just prevents anyone from claiming ownership of any parts unique to the derived work.

    Think about it: if a natural disaster changes (e.g. water-damages) a picture you drew, then a) you can't claim ownership of the naturally produced changes, but b) you still have ownership of the original picture contained in the changed/derived work.

    AI shouldn't change that.

    Which brings us to two more aspects:

    1. if you give an AI access to a project's code to rewrite it anew, it _is_ a copyright violation, as it's basically a side-by-side rewrite

    2. but if you go the clean room approach, just powered by AI, then it likely isn't a copyright violation, but it is also now part of the public domain, i.e. not yours

    So yes, doing clean room rewrites has become incredibly cheap.

    But no, just because it's AI doesn't make the copyright go away.

    And let's be realistic: one of the most valuable aspects of many open source projects is that they are openly/collaboratively maintained. You don't get that with clean room rewrites, AI or not.

    • Joel_Mckay 1 hour ago
      LLMs are isomorphic plagiarism machines, and like all ectoparasites they must steal from real people to exist. Note this includes their users. =3
  • mfabbri77 6 hours ago
    This has the potential to kill open source, or at least the most restrictive licenses (GPL, AGPL, ...): if a license no longer protects software from unwanted use, the only possible strategy is to make the development closed source.
    • GaryBluto 39 minutes ago
      If you'd be willing to close source your "libre" open source project because somebody might do something you don't like with it, you never wanted a "libre" project.
      • saagarjha 13 minutes ago
        In this case someone is making a non-libre project with it.
    • _dwt 6 hours ago
      Yes, this is the reason I've completely stopped releasing any open-source projects. I'm discovering that newer models are somewhat capable of reverse-engineering even compiled WebAssembly, etc. too, so I can feel a sort of "dark forest theory" taking hold. Why publish anything - open or closed - to be ripped off at negligible marginal cost?
    • abrookewood 3 hours ago
      It's not just open source, it is literally anything source-available, whether intentional or not.
    • user34283 3 hours ago
      I find the wording "protect from unwanted use" interesting.

      It is my understanding that what a GPL license requires is releasing the source code of modifications.

      So if we assume that a rewrite using AI retains the GPL license, it only means the rewrite needs to be open source under the GPL too.

      It doesn't prevent any unwanted use, or at least that is my understanding. I guess unwanted use in this case could mean not releasing the modifications.

      • mfabbri77 2 hours ago
        If the AI product is recognized as a "derivative work" of a GPL-licensed project, then it must itself be licensed under the GPL. Otherwise, it can be licensed under any other license (including closed-source/proprietary binary licenses). This last option is what threatens to kill open source: an author no longer has control over their project. This might work for permissive licenses, but for GPL/AGPL and similar licenses it's precisely the main reason they exist: to prevent the code from being taken, modified, and treated as closed source (including possible use as part of commercial products or SaaS).
  • buro9 32 minutes ago
    And in a single moment, the value of software patents to companies is fully restored... The software license by itself is not enough to protect software innovation: a non-trivial implementation can now be (reasonably) trivially re-implemented.

    I'm sure most people here would agree patents stifle innovation, but if copyright doesn't work for companies then they will turn to a different tool.

  • Retr0id 7 hours ago
    > In traditional software law, a “clean room” rewrite requires two teams

    Is the "clean room" process meaningfully backed by legal precedent?

    • karlding 7 hours ago
      I am not a lawyer, but from my understanding the legal precedent is NEC v. Intel, which established that clean-room software development is not infringing, even if it performs the same functionality as the original.

      As an aside, this clean room engineering is one of the plot points of Season 1 of the TV show Halt and Catch Fire, where the fictional characters do this with the BIOS image they dumped.

    • Firehawke 7 hours ago
      Sure. The reimplementation of the IBM PC BIOS that gave birth to IBM Compatibles is the canonical example.
    • estimator7292 7 hours ago
      Yes. Compaq's reverse engineering of the IBM PC BIOS set the precedent.
    • devmor 6 hours ago
      It is the reason AMD exists.
  • amelius 2 hours ago
    I think you should interpret it like this:

    You cannot copyright the alphabet, but you can copyright the way letters are put together.

    Now, with AI the abstraction level goes from individual letters to functions, classes, and maybe even entire files.

    You can't copyright those (when written using AI), but you __can__ copyright the way they are put together.

    • josephg 2 hours ago
      > You can't copyright those anymore (when written using AI), but you __can__ copyright the way they are put together.

      Sort of, but not really. Copyright usually applies to a specific work. You can copyright Harry Potter. But you can't copyright the general class of "Wizard boy goes to wizard school". Copyrights generally can't be applied to classes of works. Only one specific work. (Direct copies - eg made with a photocopier - are still considered the same work.)

      Patterns (of all sorts) usually fall under patent law, not copyright law. Patents have some additional requirements - notably including that a patent must be novel and non-obvious. I broadly think software patents are a bad idea. Software is usually obvious. Patents stifle innovation.

      Is an AI "copy" a copy like a photocopier would make? Or is it a novel work? It seems more like the latter to me. An AI copy of a program (via a spec) won't be a copy of the original code. It'll be programmed differently. Thats why "clean room reimplementations" are a thing - because doing that process means you can't just copy the code itself. But what do I know, I'm not a lawyer or a judge. I think we'll have to wait for this stuff to shake out before anyone really knows what the rules will end up being.

      Weird variants of a lot of this stuff have been tested in court. Eg the Google v Oracle case from a few years ago.

      • amelius 1 hour ago
        You have good points regarding how copyright works.

        > Software is usually obvious.

        Hardware and mechanical designs are usually described in CAD programs nowadays, so it comes pretty close to software; it's just that LLMs are not the right tool to "GenAI" them. But I've seen plenty of these kinds of designs, and I know for sure they are often no less obvious than a lot of software. Treating software as "obvious, therefore not patentable" is not accurate, not fair, and probably not going to help the profession in the AI age. But I agree that patents are bad for innovation.

        It is also not fair to claim that an AI-copy is fundamentally different from photocopying.

        I mean, in both cases it is like you are picking the worst case interpretation for the field of software engineering.

        > I think we'll have to wait for this stuff to shake out before anyone really knows what the rules will end up being.

        Yes, but it will help if we think deeply about this stuff ourselves because what law-makers come up with may not be what the profession needs.

        • josephg 13 minutes ago
          > It is also not fair to claim that an AI-copy is fundamentally different from photocopying.

          If you clean-room copy it, I think it is different. Eg, first get one agent to make a complete spec of what the program does. And a list of all the correctness guarantees it meets. Then feed that spec into another AI model to generate a program which meets that spec.

          The second program will not be based on any of the code in the first program. They'll be as different as any two implementations of the same idea are. I don't think the second program should be copyrighted. If it should, why shouldn't one C compiler be able to own a copyright over all C compilers? Why doesn't the first JSON parsing library own JSON parsing? These seem the same to me. I don't see how AI models change anything, other than taking human effort out of the porting process.
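
          Roughly, a minimal sketch of that pipeline in Python (run_llm is a placeholder for whatever model API you use, not a real library call):

              def run_llm(prompt: str) -> str:
                  """Placeholder: wire this up to whatever model/API you use."""
                  raise NotImplementedError

              def clean_room_port(original_source: str) -> str:
                  # Stage 1: a "dirty" agent reads the original and emits only a
                  # behavioral spec -- what the program does, never how.
                  spec = run_llm(
                      "Describe the observable behavior, inputs, outputs and "
                      "correctness guarantees of this program. Do NOT include "
                      "any code, identifiers or implementation details:\n\n"
                      + original_source
                  )
                  # Stage 2: a fresh agent that never sees the original code
                  # writes a new program from the spec alone.
                  return run_llm(
                      "Write a program that satisfies this specification:\n\n"
                      + spec
                  )

          Whether a court would treat stage 2 as a genuine clean room is, of course, exactly the open question.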

  • dessimus 2 hours ago
    Interesting to see how this plays out. Conceivably, if running an LLM over text defeats copyright, it will destroy the book publishing industry, as I could run any ebook through an LLM to make a new text, like the ~95% regurgitated Harry Potter.
    • timschmidt 2 hours ago
      This has already been done via brute force for melodies: https://www.vice.com/en/article/musicians-algorithmically-ge...
      • amelius 2 hours ago
        Did they listen to their own creation?

        If not, maybe it should not constitute a valid case in court.

        Also, I'm wondering if they are not themselves liable considering they have every copyrighted work in there too.

    • kingstnap 2 hours ago
      You could already do that before LLMs?

      Presumably there is already a law around why I can't just go borrow a book from my library, type out some 95% regurgitated variant on my laptop, and then try to publish it somewhere?

      Edit: I looked it up and the thing that stops you from publishing a bootleg "Harold Potter and the Wizards Rock" is this legal framework around "The Abstractions Test".

      • dessimus 31 minutes ago
        I agree, but I'm not the one claiming that an AI-assisted rewrite is sufficient to let one ignore copyright and change the license.
    • amelius 2 hours ago
      If enough people do this, then it may speed up the lawmaking process.
  • Tomte 6 hours ago
    > The original author, a2mark , saw this as a potential GPL violation

    Mark Pilgrim! Now that's a name I haven't read in a long time.

  • anilgulecha 7 hours ago
    This is precedent-setting. In this case the rewrite was in the same language, but if there's a Python GPL project, and its tests (spec) were used to rewrite the specs in Rust, and then an implementation in Rust, can the second project legally be MIT, or any other license?

    If yes, this in a sense allows a path around GPL requirements. Linux's MIT version would be out in the next 1-2 years.

    • yjftsjthsd-h 5 hours ago
      > but if there's a Python GPL project, and its tests (spec) were used to rewrite the specs in Rust, and then an implementation in Rust, can the second project legally be MIT, or any other license?

      Isn't that what https://github.com/uutils/coreutils is? The GNU coreutils spec and test suite, used to produce a Rust MIT implementation. (Granted, by humans, AFAIK.)

    • mlaretallack 6 hours ago
      It's very important to understand "how" it was done. The GPL survives the "compile" step: the result is still GPL. The clean-room process uses two teams, separated by a specification. So you would have to:

      1. Generate a specification of what the system does.

      2. Pass it to another "clean" system.

      3. The second, clean system implements based just on the specification, without any information about the original.

      That 3rd step is the hardest, especially for well-known projects.

      • microtonal 6 hours ago
        So what if a frontier model company trains two models, one including 50% of the world's open-source projects and the second model the other 50% (or ten models with a 90-10 split)?

        Then the model that is familiar with the code can write specs. The model that does not have knowledge of the project can implement them.

        Would that be a proper clean room implementation?

        Seems like a pretty evil, profitable product: "rewrite any code base with an inconvenient license into your proprietary version, legally".

        • anilgulecha 6 hours ago
          LLM training is unnecessary in what we're discussing. Mere LLM use suffices: original code -> specs as facts -> specs to tests -> tests to new code.
      • anilgulecha 6 hours ago
        1. A first claude-code instance outputs the tests as text.

        2. That output is dumped into a file.

        3. A second claude-code instance converts this into tests in the target language, and implements the app that passes those tests.

        Step 3 is no longer hard - look at all the reimplementations, from ccc to the rewrites popping up. They all have a well-defined test suite as a common theme. So much so that the tldraw author raised a (joke) issue to remove tests from the project.

    • nairboon 7 hours ago
      No, GPL still holds even if you transform the source code from one language to another language.
      • anilgulecha 6 hours ago
        That's why I carved it out to just the specs. If they can be read as "facts", then the new code is not derived but arrived at with TDD.

        The thesis I propose is that tests are more akin to facts, or can be stated as facts, and facts are not copyright-able. That's what makes this case interesting.
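
        To make that concrete, here's what I mean by a test stating a fact. A sketch using chardet's documented detect() API (the sample string and test are mine, purely illustrative):

            import chardet

            def test_detects_utf8():
                # A factual claim: "these bytes are detected as UTF-8."
                # Short or mostly-ASCII samples can misdetect, hence the
                # deliberately multibyte-heavy sample.
                sample = ("détection d'encodage : caffè, naïve, übermäßig, "
                          "příliš žluťoučký kůň úpěl ďábelské ódy").encode("utf-8")
                assert chardet.detect(sample)["encoding"].lower() == "utf-8"

        Any implementation that makes such assertions pass satisfies the same facts, whatever its internal expression.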

        • nairboon 6 hours ago
          I assumed that "tests" refers to a program too, which in this example is likely GPL. Thus the GPL would already stick to the AI rewrite of the GPL test code.

          If "tests" should mean a proper specification let's say some IETF RFC of a protocol, then that would be different.

          • anilgulecha 6 hours ago
            Yes, I had not specified that in my original comment. But in the SOTA LLM world the code/text boundary is so blurry as to be non-existent.
  • shevy-java 3 hours ago
    > In traditional software law, a “clean room” rewrite requires two teams

    So, I dislike AI and wish it would disappear, BUT!

    The argument is strange here, because ... how can a2mark ensure that the AI did NOT do a clean-room-conforming rewrite? I think in theory an AI can do precisely this; you just need to make sure that the model used actually does it, and this can be verified, in theory. So I don't fully understand a2mark here.

    Yes, the AI may make use of the original source code, but it could "implement" things on its own. Ultimately this is finite complexity, not infinite complexity. I think a2mark's argument is weak in theory, and I say this as someone who dislikes AI.

    The main question is: can computers do a clean rewrite, in principle? I think the answer is yes. That is not to say that Claude did this here, mind you; I really don't know the particulars. But the underlying principle? I don't see why an AI could not do this. a2mark may need to reconsider the statement here.

    • foltik 20 minutes ago
      Turns out there’s no need to speculate. Someone pointed out on GH [0] that the AI was literally prompted to copy the existing code:

      > *Context:* The registry maps every supported encoding to its metadata. Era assignments MUST match chardet 6.0.0's `chardet/metadata/charsets.py` at https://raw.githubusercontent.com/chardet/chardet/f0676c0d6a...

      > Fetch that file and use it as the authoritative reference for which encodings belong to which era. Do not invent era assignments.

      [0] https://github.com/chardet/chardet/issues/327#issuecomment-4...

    • orthoxerox 2 hours ago
      A clean room is sufficient, but not necessary, to avoid accusations of license violation.

      a2mark has to demonstrate that v7 is "a work containing the v6 or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language", which is different from demanding a clean-room reimplementation.

      Theoretically, the existence of a publicly available commit that is half v6 code and half v7 can be used to show that this part of v7 code has been infected by LGPL and must thus infect the rest of v7, but that's IMO going against the spirit of the [L]GPL.

      • Orygin 46 minutes ago
        Please don't use loaded terms like "infect". The license does not infect; it has provisions and requirements. If you want to interact with it, you either accept them or don't use the project. In this case, the author of v7 is trying to steal the copyrighted work of other authors by relicensing it illegally.
    • dspillett 2 hours ago
      > how can a2mark ensure that AI did NOT do a clean-room conforming rewrite?

      In cases like this it is usually incumbent on the entity claiming the clean-room situation was pure to show their working. For instance, how Compaq clean-room cloned the IBM BIOS chip [1] was well documented (the procedures used, records of comms by the teams involved), where some other manufacturers did face costly legal trouble from IBM.

      So the question is “is the clean-room claim sufficiently backed up to stand legal tests?” [and moral tests, though the AI world generally doesn't care about failing those]

      --------

      [1] the one part of their PCs that was not essentially off-the-shelf, so once it could be reliably legally mimicked this created an open IBM PC clone market

    • titanomachy 2 hours ago
      The foundation model probably includes the original project in its training set, which might be enough for a court to consider it “contaminated”. Training a new foundation model without it is technically possible, but would take months and cost millions of dollars.
    • __alexs 2 hours ago
      I think the problem here is that an AI is not a legal entity. It doesn't matter if you as an individual run an AI that takes the source and dumps out a spec that you then feed into another AI. The legal liability lies with the operator of the AI; the original copyleft license was granted to a person, not to a robot.

      Now if you had 2 entirely distinct humans involved in the process that might work though.

  • pu_pe 6 hours ago
    Licensing issues aside, the chardet rewrite seems to be clearly superior to the original in performance too. It's likely that many open source projects could benefit from a similar approach.
  • zozbot234 6 hours ago
    If you ask an LLM to derive a spec that has no expressive element of the original code (a clean-room human team can carefully verify this), and then ask another instance of the LLM (with fresh context) to write out code from the spec, how is that different from a "clean room" rewrite? The agent that writes the new code only ever sees the spec, and by assumption (the assumption that's made in all clean-room rewrites) the spec is purely factual, with all copyrightable expression having been distilled out.
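
    That verification could even be partly mechanical. A rough sketch of one such screen (plain Python, my own helper, not any established tool): flag long verbatim token runs shared between the original source and the generated spec, since any hit suggests expression leaked into the spec:

        def shared_ngrams(source: str, spec: str, n: int = 8) -> set[str]:
            """Return n-token sequences that appear verbatim in both texts.

            An empty result doesn't prove the spec is expression-free, but
            it's a cheap first screen before the human review.
            """
            def ngrams(text: str) -> set[str]:
                tokens = text.split()
                return {" ".join(tokens[i:i + n])
                        for i in range(len(tokens) - n + 1)}
            return ngrams(source) & ngrams(spec)
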
    • gf000 6 hours ago
      I guess it depends on whether the source is part of the training data or not (if it's open source, it likely is).

      A lawyer could easily argue that the model itself stores a representation of the original, and thus it can never do a "fresh context".

      And to be perfectly honest, LLMs can quote a lot of text verbatim.

    • miroljub 6 hours ago
      The new agent that writes the code probably has at least parts of the original code in its training data.

      We can't speak about a clean-room implementation from an LLM, since LLMs are technically capable only of spitting their training data back out in different ways, not of any original creation.

      • dizhn 4 hours ago
        The conclusion of this would be that you can never license AI generated code since you can't get a release from the original authors.

        Of course in practice it would work exactly in the opposite fashion and AI generated code would be immune even if it copied code verbatim.

        • jesterswilde 3 hours ago
          I don't see what's wrong with that personally. If I pirated someone's software, and then sold it as my own and got caught, just because I sold a bunch of it doesn't mean those people who bought it now are in the clear. They are still using bootleg software in their business.
      • nubg 5 hours ago
        Only in the case of open source code
    • k__ 3 hours ago
      How do you prove the training data didn't contain the code?

      I'd assume an LLM trained on the original would also be contaminated.

  • gbuk2013 3 hours ago
    To my mind, if you feed code into an AI model then the output is clearly a derivative work, with all the licensing implications. This seems objectively reasonable?
    • quotemstr 2 hours ago
      Nobody in this discussion knows what the words "derivative" and "work" mean individually, much less together
  • andrewstuart 37 minutes ago
    AI rewrites are great.

    But if it's making the original author unhappy, then why do it?

  • DrammBA 7 hours ago
    I like the idea of AI-generated ~code~ anything being public domain. Public data in, public domain out.
    • lejalv 7 hours ago
      This could be read as a reformulation of the old adage - "what's mine is mine, and what is yours, is mine too".

      So, you can pilfer the commons ("public") but not stuff unavailable in source form.

      If we expand your thought experiment to other forms of expression, say videos on YT or Netflix, then yes.

    • kshri24 7 hours ago
      I don't think you can classify "public data in" as public domain. Public data could also be under commercial licenses which forbid using it in any way other than what the license states. Just because the source is open for viewing does not necessarily mean it is open-source licensed.

      That's the core issue here. All models are trained on ALL source code that is publicly available, irrespective of how it was licensed. It is illegal, but every company training LLMs is doing it anyway.

      • fschuett 1 hour ago
        > It is illegal

        Only (?) in America. In the EU, scraping is legal by default unless explicitly opted out of with machine-readable instructions like robots.txt. That covers "training input". For training output, the rule is: "if the output is unrecognizable from the input, the license of the input does not matter" (otherwise, any project X could sue project Y for copyright infringement even if the projects only barely resemble each other). The cases where companies actually got sued were those where the output was a direct copy or repetition of the input, even if an LLM was involved.

        There is, however, a larger philosophical divide between the US and the EU based on history and religion. The US philosophy is highly individualistic, capitalistic, and considers "first-order principles." Copyright is a "property right": "I own this string of bits, you used them, therefore you owe me" (principle of absolute ownership).

        Continental philosophy is more social and considers "second-order / causal effects." Copyright is a "personality right" that exists within a social ecosystem. The focus is on the effect of the action rather than a singular principle like "intellectual property." If the new code provides a secondary benefit to society and doesn't "hurt" the original creator's unique intellectual stamp, the law is inclined to view it as a new work.

        In terms of legal sociology, America and Britain are more "individual-property-atomistic" thanks to their Protestant heritage, focusing on the rights of the individual (sola me, and my property, and God). Meanwhile, Europe was, at least to a large part, Catholic (esp. France), which focuses more on works, results, and effects on society to determine morality. While the states are officially secular, the heritage of this echoes in different definitions of what is considered "legal" or "moral", depending on which side of the ocean you are on.

      • thedevilslawyer 6 hours ago
        Copyright is not a blacklist but an allowlist of things set aside for the holder. Everything else is fair game. LLM ingestion comes under fair use, so no worries. If someone can get their hands on it, nothing in law stops it from being ingested for training.

        We can debate whether this law is moral. Like the GP, I too agree that public data in -> public domain out is what's right for society. Copyright as an artificial concept has gone on for long enough.

        • kshri24 6 hours ago
          > LLM ingestion comes under fair use

          I don't think so. It is nowhere near "limited use". The entirety of the source code is ingested for training the model. In other words, it meets the bar of the "heart of the work" being used for training. There are other factors as well, such as not harming the owner's ability to profit from the original work.

          • thedevilslawyer 6 hours ago
            https://www.skadden.com/insights/publications/2025/07/fair-u...

            Both Meta and Anthropic were vindicated for their use. Only Anthropic was penalized, for not buying the works upfront.

            • shakna 3 hours ago
              Alsup absolutely did not vindicate Anthropic as "fair use".

              > Instead, it was a fair use because all Anthropic did was replace the print copies it had purchased for its central library with more convenient space-saving and searchable digital copies for its central library — without adding new copies, creating new works, or redistributing existing copies. [0]

              It was only fair use, where they already had a license to the information at hand.

              [0] https://storage.courtlistener.com/recap/gov.uscourts.cand.43...

            • kshri24 6 hours ago
              This hasn't gone to the Supreme Court yet. And this is just the USA; courts in the rest of the world will also have to take a call. It is not as simple as you make it out to be. Developers are spread across the world, with the majority living outside the USA. Jurisdiction matters in these things.
              • thedevilslawyer 5 hours ago
                Copyright's ambit has been pretty much defined and run by the US for over a century.

                You're holding out for grace on this from the wrong venue. The right avenue would be lobbying for new laws to regulate LLM use, not trying to find shelter in an archaic and increasingly irrelevant bit of legalese.

                • kshri24 4 hours ago
                  I don't disagree. However, your assertion that copyright was initially defined by the US is not a fact: England came up with it, and it was adopted by the Commonwealth, which the US was also a part of until its independence. And it does not mean the jurisdiction is the US: even if the US Supreme Court rules one way or the other, it doesn't matter much, as the rest of the world has its own definitions and legalese that need to be scrutinized and modernized.
        • gf000 6 hours ago
          There are hardly any rulings/laws about the topic, and it quite obviously changes the picture of licenses.
    • benob 6 hours ago
      What about doing that with movies and music?
      • zodmaner 6 hours ago
        The results would be the same: AI generated music and movies will be public domain.
        • nkmnz 4 hours ago
          So you’d lose all rights to pictures of yourself if they were generated by AI? Would this be true even for nudes?
          • pseudalopex 41 minutes ago
            Copyright and privacy rights are different.
  • foota 7 hours ago
    I think the more interesting question here would be if someone could fine-tune an open-weight model to remove knowledge of a particular library (not sure how you'd do that, but maybe it's possible?) and then try to get it to produce a clean-room implementation.
    • benob 6 hours ago
      I don't think this would qualify as a clean room (the library was involved in learning to generate programs as a whole). However, it should be possible to remove the library from the OLMo training data and retrain it from scratch.

      But what about training without having seen any human-written program? Could a model learn from randomly generated programs?

      • foota 5 hours ago
        > I don't think this would qualify as a clean room (the library was involved in learning to generate programs as a whole)

        Hm... this is really one for the lawyers, but IMO you could likely argue successfully that the marginal contribution of any particular library to general coding knowledge is close to nil.

        The hard part here imo would be convincingly arguing that you can wipe out knowledge of the library from the training set, whether through fine tuning or trying to exclude it from the dataset.

        > But what about training without having seen any human-written program? Could a model learn from randomly generated programs?

        I think the answer at this point is definitely no, but maybe someday. It's a more interesting question for art, since art is more subjective. If we eventually get to a point where a machine can teach itself art from nothing... first of all, how? But second, it would be interesting to see the reaction from people opposed to AI art on the basis that it trains on artists' work.

        Honestly, given all I've seen models do, I wouldn't be too surprised if you could somehow distill a (very bad) image generation model off of just an LLM. In a sense this is the end goal of the pelican riding a bicycle (somewhat tongue in cheek): if the LLM can learn to draw anything with SVGs without ever getting visual input, that would be very interesting :)

  • dspillett 2 hours ago
    > Accepting AI-rewriting as relicensing could spell the end of Copyleft

    The more restrictive licences perhaps, though only if the rewriter convinces everyone that they can properly maintain the result. For ancient projects that aren't actively maintained anyway (because they are essentially done at this point) this might make little difference, but for active projects any new features and fixes might result in either manual reimplementation in the rewritten version or the clean-room process being repeated completely for the whole project.

    > chardet 7.0 is a ground-up, MIT-licensed rewrite of chardet. Same package name, same public API —

    (from the github description)

    The “same name” part to me feels somewhat disingenuous. It isn't the same thing so it should have a different name to avoid confusion, even if that name is something very similar to the original like chardet-ng or chardet-ai.

    • conartist6 2 hours ago
      Who cares if it can be maintained? The system now penalizes the original creator for creating it and gives thieves the ability to conduct legal theft at a gargantuan scale, the only limit being how creative the abuser is in making money.

      With the incentives set up like that, the era of open software cooperation would be ended rapidly.

  • skeledrew 3 hours ago
    Looks like copyright just died.
    • cedws 3 hours ago
      *for ordinary people. If you use AI to steal from rich and powerful people, expect the law to come down on you like a tonne of bricks. If you steal from authors, artists, and developers, no worries.
  • jacquesm 1 hour ago
    If you don't understand what a 'derived work' is, then you should probably not be doing this kind of thing without a massive disclaimer and/or having your lawyer do a review.

    The output of an LLM is not a 'new' work for copyright purposes; if it were, it would be copyrightable, and it is not. The term of art is 'original work', not 'new'.

    The bigger issue will be using tools such as these and then humans passing off the results as their own because they believe that their contribution to the process whitewashes the AI contributions to the point that they rise to the status of original works. "The AI only did little bits" is not a very strong defense though.

    If you really want to own the work-product, simply don't use AI during the creation. You can use it for reviews, but even then you simply do not copy-and-paste from the AI window into the text you are creating (whether it's code or ordinary prose makes no real difference).

    I've seen a copyright case hinge on 10 lines of unique code that were enough of a fingerprint to clinch the 'derived work' assessment. Prize quote by the defendant: "We stole it, but not from them".

    There is a very blurry line somewhere in the contents of any large LLM: would a model be able to spit out the code that it did if it did not have access to similar samples, and to what degree does that output rely on one or more key examples without which it would not be able to solve the problem you've tasked it with?

    The lower boundary would be the most minimal training set required to do the job; you would then analyze which key bits of the inputs, if dropped from the training set, cause the output to become non-functional.

    The upper boundary would be where completely unrelated works and general information, rather than other parties' copyrighted works, would be sufficient to do the creation.

    The easiest way to loophole this is to copyright the prompt, not the work product of the AI; after all, you should at least be able to write the prompt. Then others can re-create it too, but that's usually not the case with these AI products: they're made to be exact copies of something that already exists, and the prompt will usually reflect that.

    That's why I'm a big fan of mandatory disclosure of whether or not AI was used in the production of some piece of text: for one, it helps establish whether or not you should trust it, who is responsible for it, and whether the person publishing it has the right to claim authorship.

    Using AI as a 'copyright laundromat' is not going to end up well.

  • blamestross 4 hours ago
    Intellectual property laundering is the core and primary value of LLMs. Everything else is "bonus".
  • tgma 2 hours ago
    Isn't the AFC (abstraction-filtration-comparison) test applicable here?
  • b65e8bee43c2ed0 1 hour ago
    at this point, every corporation in the world has AI slop in their software. any attempt to outlaw it would attract enough funding from the oligarchs for the opposition to dethrone any party. no attempts will be made in the next three years, obviously, and then it will be even later than it is now.

    and while particularly diehard believers in democracy may insist that if they kvetch hard enough they can get things they don't like regulated out of existence, they pointedly ignore the elephant in the room. they could succeed beyond their wildest dreams - get the West to implement a moratorium on AI, dismantle every FAGMAN, Mossad every researcher, send Yudkowskyjugend death squads to knock down doors to seize fully semiautomatic assault GPUs, and none of it will make any fucking difference, because China doesn't give a fuck.

  • gspr 4 hours ago
    > If “AI-rewriting” is accepted as a valid way to change licenses, it represents the end of Copyleft. Any developer could take a GPL-licensed project, feed it into an LLM with the prompt “Rewrite this in a different style,” and release it under MIT. The legal and ethical lines are still being drawn, and the chardet v7.0.0 case is one of the first real-world tests.

    This isn't even limited to "the end of copyleft"; it's the end of all copyright! At least copyright protecting the little guy. If you have deep enough pockets to create LLMs, you can in this potential future use them to wash away anyone's copyright for any work. Why would the GPL be the only target? If it works for the GPL, it surely also works for your photographs, poetry – or hell even proprietary software?

  • RcouF1uZ4gsC 2 hours ago
    > The copyright vacuum: If AI-generated code cannot be copyrighted (as the courts suggest), then the maintainers may not even have the legal standing to license v7.0.0 under MIT or any license.

    I believe this is a misunderstanding of the ruling. The code can't be copyrighted by an LLM. However, the code could be copyrighted by the person running the LLM.

  • duskdozer 2 hours ago
    This is such scummy behavior.
  • oytis 57 minutes ago
    Is it just me, or has HN recently started picking up social-media dynamics, with contributions reacting/responding to each other?
    • altairprime 56 minutes ago
      It’s always happened occasionally. Sometimes you’ll also see informative supporting links pop up in the feed, though those generally get minimal traction.
  • verdverm 7 hours ago
    Interesting questions raised by the recent SCOTUS refusal to hear appeals related to AI and copyrightability, and how that may affect licensing in open source.

    Hoping the HN community can bring more color to this, there are some members who know about these subjects.

  • est 6 hours ago
    Uh, patricide?

    The key leap from GPT-3 to GPT-3.5 (aka ChatGPT) was code-davinci-002, which was trained on GitHub source code after the OpenAI-Microsoft partnership.

    Open-source code contributed much to LLMs' amazing CoT consistency. Without the open-source movement, LLMs would have been developed much later.

  • himata4113 6 hours ago
    I mean, in my opinion GPL-licensed code should just infect models, forcing them to follow the license.

    You can do this a lot by saying things like: complete the code "<snippet from gpl licensed code>".

    And if the models are then GPL-licensed, the problem of relicensing is gone, since the code produced by these models should in theory also be GPL-licensed.

    Unfortunately, there is a dumb clause that computer generated code cannot be copyrighted or licensed to begin with.

    • kshri24 6 hours ago
      > Unfortunately, there is a dumb clause that computer generated code cannot be copyrighted or licensed to begin with.

      Can you point to the clause? I have never seen it in any GPL license.

      • himata4113 5 hours ago
        It's the general copyright protection 'law': fair use and all that. It varies by country, though.
  • spwa4 5 hours ago
    Can we do the same with Universal Music? Because that's easy and already possible. Or Microsoft Windows? Because we all know the answer: if it works, essentially any government will immediately call it illegal.

    Because if this isn't allowed, that makes all of the AI models themselves illegal. They are very much the product of using others' copyrighted stuff and rewriting it.

    But of course this will be allowed because copyright was never meant to protect anyone small. And that it's in direct contradiction with what applies to large companies? Courts won't care.

    • gspr 4 hours ago
      The dark future possibility here is that the big guy is allowed to launder the intellectual property of the little guy, but not vice versa.
      • vetrom 3 hours ago
        That dark future is now; look at case law as applied to the AI operators vs the 'little guys'.
        • spwa4 3 hours ago
          Even big copyright firms. Disney especially is known for rehashing existing material and then not allowing anyone else to do the same with their stuff. Disney does not have a lot of original stories.
  • Cantinflas 3 hours ago
    "If “AI-rewriting” is accepted as a valid way to change licenses, it represents the end of Copyleft. "

    Software in the AI era is not that important.

    Copyleft has already won: you can have new code in 40 seconds for $0.70 worth of tokens.

    • p0w3n3d 3 hours ago
      Just take the code and let the AI rewrite it. But... the AI was trained on all the open-source code available. A lot of it was GPL, I think... So...
    • WesolyKubeczek 2 hours ago
      Let’s then abolish all copyright on all software; whatever could go wrong?