AI-Generated Content Is Just Another Term For Stealing, For Now
AI is on the verge of changing the universe and there’s no denying it. It’s going to happen fast, faster than even the most optimistic voices expected, and in ways no one can predict. The world is going to look very, very different a year or so from now, and I’d be lying if I said I know its shape. What I do know in the here and now is that AI-generated content isn’t so much generated as stolen.
If you’ve played around with any of the popular AI bots like ChatGPT, then you’ve probably asked it to write something. What it spits out definitely seems like something a human could have written. That’s because, in most cases, it is. Much of what ChatGPT and its counterparts do involves scanning existing content from various sources and then cribbing it together in a different order with a few different words to make it seem like it’s delivering something new.
But it isn’t delivering something new. It’s delivering something stolen from human authors. Reworded, yes, and stitched together from multiple authors into one article, but stolen nonetheless. A complicated theft is still theft.
You don’t have to take my word for it. I asked ChatGPT whether its responses are stolen content and here’s what it said…
“I generate responses based on the patterns and relationships learned from the vast amount of text data I was trained on. It is possible that my generated responses may contain similarities to existing content. This is because my training data consists of a large and diverse range of texts from the internet, which may include previously published works.”
Even it seems to admit that what it’s doing is wrong, saying…
“In such cases, it is important to give credit to the original authors and not present my generated responses as original content.”
Of course, ChatGPT does not do that. It never credits original authors or sources and always presents its generated content as original work. Even that response was most likely cribbed from some other old grouch writing an article on the internet about why he thinks AI-generated content is theft. Maybe next time, ChatGPT will steal from this article instead.
In addition to being stolen (lawsuits are already in progress), most AI content is also full of hot air. Most of what you’ll get in an AI-generated article is vague nothing based on vapor. For example, here’s a paragraph I had ChatGPT write about the new series Star Trek: Strange New Worlds…
“Visually, the show is stunning, with detailed sets, impressive special effects, and breathtaking alien landscapes. The attention to detail and the love for the source material is evident in every frame. The soundtrack is also noteworthy, with a beautiful score that complements the action on screen.”
The above reads like a marketing pamphlet because, in essence, that’s exactly what it is. ChatGPT has never seen Star Trek: Strange New Worlds. It doesn’t even really understand what television is. It did, however, ingest a bunch of marketing materials that used these same words and phrases, so it grabbed them and combined them into a paragraph that made it seem like it was telling you something about Strange New Worlds while actually telling you nothing at all.
That’s not to say what AI is doing isn’t impressive. It absolutely is. It’s also, to some extent, a trick. AI doesn’t have intelligence, despite the I in its acronym.
Heedless of those possible issues, various online publishers have rushed to use AI-generated content as a way to stop paying writers and save money. Given the average attention span of most internet users, it’s unlikely anyone who visits their sites will notice for a while. AI is good enough to pass for the real thing, and maybe it’ll work out fine for them.
Eventually, you have to expect the big media outlets will start suing. AI is stealing content and information without permission, and it’s hard to imagine that will be allowed to continue.
All of this has happened before. It’s no different from the content scrapers of the early 2000s. AI is just better at stealing and at hiding the fact that it’s doing so. Even those rudimentary content scrapers, which copied and pasted content wholesale and dropped it onto their own sites, got away with it for a while. There was a brief period when sites stealing content outranked the original sources on Google.
It didn’t last. Consumers and search engines figured out what those early scrapers were doing and buried that scam at the bottom of internet visibility. The same is likely to happen with AI-generated content. When it happens, much like the rise of AI itself, it will happen fast.
The AI revolution is here, but it’s not likely to turn out the way anyone thinks it will. I believe AI’s niche won’t be as a replacement for writers, but as a supplement to them. A world where AI is a helper, not a replacer, will be a better one for humans to live in.
About The Author
Josh Tyler is an entrepreneur and the CEO of Walk Big, a fast-growing media company. He’s been writing, working, and investing in the online content business full-time since starting his first media website, Cinema Blend, way back in the year 2000.