Opinion 6:56 PM. April 11, 2025. Write it down. That's the precise moment the tech-bro-niverse imploded due to the gravitational force of irony at its core. That was the moment Jack Dorsey posted "Delete all IP law" on X. A little later, Elon Musk added his approval with "I agree."
Is this a considered intellectual stance, born of a closely argued radical reassessment of the legal, economic, and cultural framework of modern times, or a petulant outburst from an entitled billionaire? We report, you decide. Fortunately for the world beyond Jack's egosphere, deleting IP law remains outside presidential powers, for now at least. It's also a good thing for Jack and Elon themselves, whose entire empires are built on IP law.
Although there's much that would bear reform, the system of patents, copyright, and trademarks underpins all innovation and commerce, nationally and globally. Who would invest in hardware companies if R&D had a payback time of however many weeks it took to be ripped off? Who would invest in software if any employee could post the source code publicly? Would Linux have evolved without the GPL?
Without IP law, the structure of the 21st century would degenerate into warlord-managed fiefdoms where the organizations with the most power to intimidate would take what they wanted and deny others. Jack and Elon may fondly imagine they'd be among those warlords.
Rather than demonstrating the risible braggadocio of teenage boys on their first beer binge, our dynamic duo should look to the inevitable consequences of acting as if IP law had indeed been deleted. The technology they wish to give free rein to feed on the work of human brains is demonstrably flawed. Not only can nobody stop it hallucinating even on unsullied training data – careful with that coding supply chain, Eugene – it can also be provoked into a full-blown bad trip by data that looks fine to humans but contains carefully engineered digital LSD: adversarial noise.
Just like Dr Hofmann's infamous original elixir, adversarial noise is effective in very small amounts: tiny tweaks to data that look or sound the same to humans but are perceived very differently by certain kinds of generative AI. Slipped into training data, it can unhinge the resultant model; added to data an unsullied model is trying to analyse, it can embed hidden commands or perceptions that corrupt the AI's output.
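To make the trick concrete, here's a minimal sketch of the oldest move in this family, the fast gradient sign method: nudge every element of the input by a barely perceptible amount in exactly the direction that most confuses the model. The model, the epsilon budget, and the input here are illustrative stand-ins, not any particular attack from the wild.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.003):
    """Add a tiny, human-imperceptible perturbation to x that pushes
    a differentiable classifier away from the correct label (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # how wrong the model is now
    loss.backward()                          # gradient of loss w.r.t. the input
    # Step each element by at most epsilon in the loss-increasing direction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()    # keep values in a valid range
```

At an epsilon this small, a human sees or hears nothing amiss; the model, which has no notion of "amiss", follows the gradient straight off a cliff.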
All this is possible because no matter how the hipsters gussy it up, AI isn't intelligent and works completely differently to our own wetware. Once you set out to deceive it, it goes off the rails. The CIA tested LSD as a mind-control drug last century, but it didn't work out. Adversarial noise does.
Take a look at the work of musician and IP activist Benn Jordan. As he puts it in his latest highly entertaining and informative video, he was trying to work with lawmakers to compel large AI companies to document their training data and have a licensing structure to pay creatives for the use of their work. As he says, the magic mute button for anyone pushing a new paid-for generative AI product is "What data did you use to train your base model?" Either they don't know, or admitting that they just scraped everything makes them liable for copyright claims. Big, big copyright claims. So let's get it out into the open.
All that came to an end on January 20, when the question became "how do we take this into our own hands?" One answer is adversarial noise. Researchers at the University of Tennessee, Knoxville, had already created HarmonyCloak, a technology that adds noise which sounds fine to human ears but completely breaks an AI's ability to recognize harmony and rhythm. The results are comically horrific. Adding this to his own Poisonify system, which makes AI misidentify instruments, Jordan brews up a potion that makes an acid casualty of artificial intelligence. Degenerative AI.
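Neither tool's internals are reproduced here, but the general shape of such an attack is easy to sketch. What follows is an illustration under loud assumptions: a differentiable surrogate music-tagging model stands in for whatever the real systems target, and a crude amplitude clamp stands in for the careful psychoacoustic shaping a real tool would need to keep the noise genuinely inaudible.

```python
import torch
import torch.nn.functional as F

def poison_audio(surrogate_model, waveform, true_tags,
                 budget=0.002, steps=100, lr=1e-3):
    """Iteratively craft near-inaudible noise that makes a surrogate
    tagging model misread the clip's instruments and harmony."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Negative loss: the optimiser *maximises* the surrogate's error.
        loss = -F.binary_cross_entropy_with_logits(
            surrogate_model(waveform + delta), true_tags)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the noise tiny
    return (waveform + delta).detach()
```

If noise crafted against one surrogate transfers to the models that scrapers actually train, the poisoned track rides along into the training set and does its work there.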
These are early days, but this stuff works. Put out protected music and it will poison any AI that feeds on it. That immediately protects musicians and visual artists alike, since the technique works on the diffusion models common to both audio and graphical content. Enough of it, and business models break down as well. And that's before you get to the potential for pranking and attacking voice recognition systems.
This may not appear to affect LLMs directly, since noise added to plain text is much harder to disguise from ordinary users.
The lesson's the same, Jack and Elon. If you don't demonstrably regularize your training data, your product is vulnerable, your business even more so. IP law provides the framework within which you can do that, protecting you from attack and formalizing your intellectual supply chain.
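What might demonstrably regularizing your training data look like in practice? At minimum, a provenance gate like this hypothetical sketch; the field names, licence strings, and whitelist are illustrative assumptions rather than anyone's actual schema, but the principle holds: nothing without a documented licence reaches the training set.

```python
from dataclasses import dataclass

# Hypothetical whitelist; a real pipeline would carry full licence terms.
ACCEPTED_LICENCES = {"CC0-1.0", "CC-BY-4.0", "licensed-by-contract"}

@dataclass
class TrainingItem:
    content: bytes
    source_url: str
    licence: str | None  # None means provenance unknown

def admissible(item: TrainingItem) -> bool:
    # Unknown provenance is a liability, not a freebie.
    return item.licence in ACCEPTED_LICENCES

def build_corpus(items: list[TrainingItem]) -> list[TrainingItem]:
    kept = [i for i in items if admissible(i)]
    # The auditable answer to "What data did you use to train your base model?"
    print(f"Kept {len(kept)} of {len(items)} items with documented licences.")
    return kept
```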
Because that supply chain is the one on which you utterly depend. You dolts. ®