A new AI‑powered software tool, Malus.sh, has ignited intense debate within the open‑source community by claiming it can use artificial intelligence to recreate existing software projects from scratch, effectively “liberating” them from their original copyright license obligations. The project, which is being developed by a limited liability company and already lists paying customers, has drawn sharp criticism from developers who fear that such tools could enable organisations to replicate sophisticated software functions without the attribution or copyleft requirements typical of open‑source licensing.
The emergence of Malus.sh highlights a growing concern that generative AI is increasingly capable of bypassing traditional copyright protections. While the tool’s website employs provocative language, describing the process as a way to create “clean room” clones of open‑source libraries, the implications for the broader software‑services economy are significant.
The ‘Clean Room’ Automation Shift
Malus.sh relies on an automated application of the “clean room” design method, a long‑established reverse‑engineering technique in which two separate teams recreate a piece of software’s functionality without violating copyright: one team studies the original and writes a functional specification, while a second “clean” team, which never sees the original source code, reimplements the software from that specification alone. By using AI to automate the role of the clean team, Malus.sh can theoretically generate code that is legally distinct from the original source material.
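As an illustrative sketch only (Malus.sh’s actual pipeline is not public, and the encoding‑detection example here is a toy), the clean‑room split can be pictured as a behavioural specification recorded from the original program’s outputs, against which an independently written reimplementation is then tested:

```python
# Hypothetical illustration of the clean-room workflow.
# Step 1 (specification team): record the ORIGINAL implementation's
# observable behaviour as input -> output pairs, without copying its code.
behavioural_spec = [
    (b"hello world", "ascii"),
    (b"\xe4\xbd\xa0\xe5\xa5\xbd", "utf-8"),  # UTF-8 encoded Chinese text
]

# Step 2 (clean team -- the role an AI would automate): write a fresh
# implementation from the specification alone, never seeing the original.
def detect_encoding(data: bytes) -> str:
    """Toy encoding detector written only from the behavioural spec."""
    try:
        data.decode("ascii")
        return "ascii"
    except UnicodeDecodeError:
        pass
    try:
        data.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        return "unknown"

# Step 3: the recorded behaviour doubles as an acceptance test,
# verifying the rewrite matches the original without sharing its code.
for raw, expected in behavioural_spec:
    assert detect_encoding(raw) == expected
```

The legal theory behind the technique is that copyright protects a program’s expression (its source code), not its observable behaviour, so a reimplementation derived purely from recorded behaviour is arguably non‑infringing.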
Industry observers note that this process, which once required months of manual effort, can now be executed by AI in a matter of days. This shift has placed immense pressure on existing legal frameworks that govern software licensing, attribution, and intellectual property. The project’s ability to strip away copyleft obligations—such as those imposed by the GNU General Public License (GPL)—has left many in the open‑source community questioning whether copyright can still reliably protect their work.
Growing Alarm Over Open‑Source Integrity
The debate gained traction recently after a new version of the popular Python library “chardet” was released as a “ground‑up, MIT‑licensed rewrite,” built entirely with Anthropic’s Claude Code. Developers who had spent years contributing to the original library expressed alarm that such a rewrite could replace an established project without providing any credit to the original authors.
For many in the software industry, the issue goes beyond simple attribution. If AI can recreate complex, costly enterprise tools by simply “observing” their functional outputs, the business model for many software‑as‑a‑service (SaaS) providers could be rendered obsolete. Critics argue that this commoditisation of software functionality, powered by AI, could discourage future investment in open‑source development, as the economic incentive to build and maintain high‑quality libraries is undermined.
Corporate Risk and Legal Ambiguity
The rapid evolution of these tools has also raised red flags for corporate legal departments. Companies that rely on open‑source software now face the risk of unknowingly integrating “cloned” code into their own products, potentially exposing them to downstream legal liabilities or reputational issues if that code is later found to infringe on the original project’s intellectual property.
While some developers believe the trend of AI‑led code rewriting is irreversible, the industry is increasingly calling for clearer legal guidance on what constitutes a “legally distinct” clone in the age of generative AI. As Malus.sh continues to attract users, the friction between AI‑enabled innovation and traditional copyright law is expected to become a central challenge for software companies and the broader open‑source movement throughout the year.
