AI and Creators: Guess Who Doesn't Win This Battle


Photo by Unseen Studio on Unsplash.

The problems with AI tools employed in business contexts are many and varied, but they boil down to variations on three points:

  1. They lie;

  2. They aren’t very good; and

  3. They’re ethically sketchy.

The sketchy ethics take many forms. There’s evidence that AI tools favor the English language, reinforce gender and ethnic stereotypes, ape misinformation, and – oh, yeah – violate copyright protections.

Maybe copyright violations seem picayune compared to the spread of misinformation and the perpetuation of ruling-class hegemony, but it depends on where you sit.

If you have a lot of intellectual property, if you’re Getty Images or The New York Times, AI’s use of your content to create new content is a BF deal. You’ve invested a lot in that content with the idea of monetizing it, and you don’t like it one bit that AI is scraping it and using it for free.

And if you’re on the other side, if you’re Microsoft or OpenAI, you want more people using more of your tools unimpeded. Ride a painted pony, let the spinning wheel spin. Hell yeah.

With everyone’s eyes lighting up with dollar signs like Scrooge McDuck at the thought of a brave AI-powered new world, it’s no surprise that publishers are talking to AI-tool owners about a working arrangement.

Talk about A-listers: The publishers include News Corp, Axel Springer, The New York Times, and The Guardian. The tool-wielders include OpenAI, Google, Microsoft, and Adobe.

No room for folks who eat with their fingers at that table. No sirree.

The guest list brings up an interesting question, and the main subject of this blog, which is: Where’s mine?

Why I deserve some

I’ve been a professional writer for 47 years. I sold my first story to Trains magazine (still going, bless its heart) in 1976, before I was out of high school. Since then I’ve written for outlets as big as The New York Times and as small as a church bulletin. I currently blog semi-anonymously for a number of clients.

Between my own stuff and the pieces outlets have published on the internet, credited or uncredited, there’s a lot of me out there for AI to scrape.

Maybe that’s why, unlike many of my contemporaries, I can ask Google Bard to write a biography of me and it can comply – wrongly, and hilariously so, but it has enough information to take a shot. (Host of the "Hot Food Cool Tunes" podcast? News to me!)

That’s also why I can ask an AI tool to write a blog in one of the highly specialized areas where I produce a lot of content and it can come back with something that sounds like a high-school freshman trying to imitate me.

The one-liners are flat, the enthusiasm is misplaced, and the ear is solid tin, but I can tell it’s using Kit Kiefer’s writings to try to sound like Kit Kiefer, because that’s what’s out there for it to scrape.

What’s the problem?

So is this a problem or just really weird fan fiction? Depends on where you stand.

My primitive understanding of the law is that there’s nothing inherently illegal about reading everything J. K. Rowling ever wrote or listening to every Queen song and attempting to write something in that vein based on that knowledge, assuming a couple of things:

  1. You’re not trying to pass it off as something written by Rowling or Queen; and

  2. You’re not trying to make money off of it.

The source works are protected by copyright, so I can’t lift whole passages or melodies. But I'm totally allowed to be inspired by them.

Copyright attaches at creation, so the content I create for an organization enjoys protection without anyone having to file anything. That means if someone decided to sell my article about knee surgery under their own name, there’d be grounds to sue for damages.

However, there’s nothing in copyright law that outlines who gets the money from someone else making money off of a ChatGPT-generated piece of content built off of my work.

The real creators

Now, my stuff’s admittedly small potatoes. The real problems start when you consider real creators, and prompts that ask AI to consult only their works in creating new content.

For instance …

  • How would Ray Davies get compensated when AI writes a “new” Kinks lyric?

  • How would Jack Davis’ estate get paid when DALL·E 2 creates a piece of art that apes Davis’ work for MAD magazine?

  • When the cadences from It are artificially recombined in the service of selling lemon-lime soda, how would the money get back to Stephen King?

While it's swell that Miss Bromo and Mr. Seltzer are getting together to decide the fate of content on the internet, it sure seems like a lot of these questions aren’t being addressed, which means the real creators are going to be left on the outside looking in.

Again.

Remedies

This is the part where I suggest answers, and I have to admit I'm a little tapped out.

I'm pretty sure OpenAI is not going to enter into an agreement with me and the tens of thousands of other mes out there, and even if it did, what would we get paid? Spotify wages.

It also seems to me that any creator looking to get compensation would have to be able to prove that AI stole their stuff, and I’m with Potter Stewart on this one: I know it when I see it, but not everyone’s going to see what I see.

Besides, does the world really need a bureaucracy that decides the validity of AI plagiarism claims? I’d rather eat my own hair than go through that process.

Alternatively, the creators of AI tools could contribute to a fund, but who’s being funded? Spare me the spectacle of watching Google and The New York Times swap spit over AI.

We’re not helped by the vague agreements that many creators and publishers have when it comes to internet publishing. If a reader of corporate content clicks on a “Buy Now” button, how often does the creator of that content get paid for the conversion? About as often as someone spots an ivory-billed woodpecker.

Ultimately, the discussions surrounding AI tools and the content they use to fuel their production process should spur us to re-examine the creation, monetization, sharing, use, and re-use of content on the internet.

The current model is reminiscent of the work-for-hire schemes that Marvel used to publish some of the world’s best-known intellectual property while paying the actual creators peanuts. Only in this case, the peanuts are virtual.

A better way forward would be for these discussions to produce a standard internet publishing contract, one that covers profit-sharing from AI-generated, creator-derived content for both the creator and the entity they create for.

It’s tempting to paint the current discussions between the Googles and NYTs over AI-generated content as a rich-get-richer thing, because that’s what it is.

And if that’s not what it is, at least we know what it isn’t. It’s not a poor-get-richer thing.