The First Commandment of AI Usage

Ars Technica recently released its newsroom AI policy, and it’s interesting to see the conclusions they’ve come to. This is a problem a lot of people have struggled with, but for journalists it’s especially tricky — not only was AI created using their (often stolen) work, it could put them out of business. It probably feels kind of gross to use it.

It feels that way for a lot of people. It feels that way for me, and as I’ve tried to build a personal moral framework, the question of whether it’s “ok” to use has really stumped me. Really, it’s been more a question of “we have to use this, so how can we use it in a moral way?” It’s a powerful tool, but a tool nonetheless. It doesn’t have intrinsic morality, any more than Excel does. But the production of it, and how it’s frequently used today, is pretty awful.

Anyway, you can wrap yourself up in knots trying to minimize cognitive dissonance around AI usage, but for myself, I’ve decided on the first commandment of AI usage.

Thou shalt not use AI in a way that destroys or damages the things that made AI possible.

AI relies on the written word, written by human beings both over centuries (in books) and more recently (all the news on the web). If you are using AI to consume your news, the money that used to go to news organizations is instead flowing to the AI companies (and they’re not passing it on — they’re pretty far in the hole as it is). Extrapolate this out and it becomes obvious that news will disappear because there’s no money in being the unpaid source for everyone’s AIs.

So don’t use it for that.

Likewise, visual AI models rely on gobs and gobs of ingested images — images created by people over hundreds (in some cases thousands) of years. Don’t have AI generate images that could’ve been created by a human. Don’t patronize organizations that use AI instead of skilled artisans. If you fall into the trap of “well, it’s cheaper to do it this way,” then humans will be undercut for almost everything, and we will lose visual artists — the same artists whose work was required to create AI.

So don’t use it for that.

Eventually AI will be able to write (decent) books and potentially create entire films. If that happens and becomes popular, human writers and filmmakers will obviously be crowded out by sheer volume (something that is already happening in music, where almost half of the submissions to one streaming service are AI-generated). Those people won’t be able to make a living, and the career of “writer” (which was required to produce the work AI needed to exist) will cease to be.

So don’t use it for that.

You get the point. Don’t use AI in a way that destroys or damages the things that made AI possible.

That includes personal things. For example, don’t use AI in a way that destroys or damages your ability to concentrate and think deeply — something AI is already doing. Deep thinking was required to make AI, which is kind of a miraculous achievement. AI could also entirely eliminate our children’s ability to think deeply, if we let it.

So don’t use it for that.

I know this model puts a lot of pressure on the individual to make decisions for themselves, to recognize the downstream effects of their actions, and to realize when they are personally being impacted. It’s a lot to ask.

Unfortunately, the government has abdicated its responsibility to protect the citizenry. So it’s up to us.
