Patronize People

Perplexity.ai is an “AI” (large language model, in reality) company that is trying to be an “answer engine” rather than a search engine. No longer would you need to visit other websites to find the answer to a question. You ask the question, and Perplexity finds the answer and presents it to you in a handy, summarized form.

In order to do this, Perplexity is plagiarizing vast swaths of the internet, even going around paywalls to create summaries of articles that you’d normally need to pay for.

What’s the end result of this business model? Let’s look at how it would work if Perplexity were as successful as they hope to be.

  1. Perplexity steals content from other websites that pay people to research/report/create said content and presents it to users.
  2. The revenue that would have gone to those other websites instead goes to Perplexity.
  3. Perplexity makes a lot of money. The websites that created the content see their revenue crater.
  4. Those websites can no longer pay to hire people to research/report/create and they go out of business.
  5. Eventually Perplexity drives all companies researching/reporting/creating out of business except for two types of organizations:
    • Those that have an agenda and are using their “reporting” as propaganda, so they’re OK subsidizing it
    • Those who are doing their reporting on a volunteer basis and thus have fewer resources

If Perplexity is successful, the end result will be the decimation of journalism as a profession. Ironically, though, people will have grown accustomed to coming to Perplexity for their information and will trust what the AI puts in front of them. Without realizing it, they will be served propaganda functionally disguised as legitimate reporting, laundered through Perplexity’s increasingly inaccurate and even harmful algorithms.

How can I feel so confident that this is the direction things will go? Because Perplexity is making many of the same promises Facebook and Google did when they inserted themselves between consumers and news organizations. Those arrangements didn’t go well either.

What is plagiarism?

If you take someone else’s work, and copy it word for word, and try to pass it off as your own, that’s plagiarism.

But what if you get out a thesaurus and change every … 10th word? Is that plagiarism? What if you change every 5th? Or 3rd? How many words would you need to change for it to no longer be plagiarism?

What if you created a program that automatically changed almost every word? So “The quick brown fox jumps over the lazy dog” is changed to “The agile amber canidae leaps across the recumbent canine.” Is that plagiarism?
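
For illustration, here’s a toy sketch in Python of the word-swapping program described above. The synonym table is invented purely for this example, and real LLMs are vastly more sophisticated, but the basic trick — swapping surface forms while keeping the underlying structure — is the same.

```python
# Toy "plagiarism machine": swap each word for a canned synonym while
# keeping the sentence structure intact. The synonym table below is
# made up for this illustration.
SYNONYMS = {
    "quick": "agile",
    "brown": "amber",
    "fox": "canidae",
    "jumps": "leaps",
    "over": "across",
    "lazy": "recumbent",
    "dog": "canine",
}

def obfuscate(sentence: str) -> str:
    # Replace every word we have a "synonym" for; leave the rest alone.
    return " ".join(SYNONYMS.get(word.lower(), word) for word in sentence.split())

print(obfuscate("The quick brown fox jumps over the lazy dog"))
# -> The agile amber canidae leaps across the recumbent canine
```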

Many LLMs are basically this — plagiarism machines that obfuscate the plagiarism much better than any other plagiarist. They don’t create. They can’t create. By definition they can only copy.

As someone who writes, it bothers me a lot that my thoughts might get sucked up into a database and spat out without any of the context I typically give them. But honestly, the plagiarism question is academic at this point.

They have and will continue to plagiarize, and they have and will continue to pay as little as possible to get away with it: blatantly stealing from those without the resources to demand some form of remuneration, and paying those with the means to demand it only the smallest amount required.

But ignore that! And let’s get back to the question from the beginning. What if they’re successful? What is the end result?

Ladders are important

George R.R. Martin wrote a blog post about “mini-rooms.” Typically a show has a writers room full of writers of all levels. Martin talks about his experience joining a show as a junior writer, submitting his first script, and going through the entire process of getting it filmed. He points out that it was pivotal for his development: “There is no film school in the world that could have taught me as much about television production as I learned on TWILIGHT ZONE during that season and a half.”

Because shows are shorter now and production schedules are tighter, streamers often assemble what’s called a “mini-room.” A mini-room is a couple of senior writers and maybe one or two junior writers. They work on scripts, but when the show actually begins production the junior writers are dismissed: they don’t stick around for adjustments or table reads or filming. They don’t see how it plays out, and they don’t improve their craft.

As Martin puts it: “Mini-rooms are abominations … If the Story Editors of 2023 are not allowed to get any production experience, where do the studios think the Showrunners of 2033 are going to come from?”

If you remove the first rungs of a ladder, how are people supposed to climb it? Even the couple of years when mini-rooms were allowed have led to a decrease in show quality. As Constance Grady points out on Vox:

The new WGA contract essentially killed off mini rooms, but for the next few years, we’ll be living in the creative ecosystem they birthed. That’s a world where upcoming talent had limited opportunities to learn the craft of their medium, and it has started to show.

I recently sat through Hulu’s Death and Other Details, an expensive-looking murder mystery starring Mandy Patinkin and a host of big names that fits right into Poniewozik’s rubric of mid TV. It was riddled with the kind of basic errors that even bad TV shows used to know how to avoid, mistakes that feel like not knowing a period is supposed to go at the end of the sentence. The act breaks all fell in the wrong place so that they killed tension instead of heightening it. Murder suspects would learn crucial information offscreen instead of onscreen, where the audience could see their reaction and evaluate how suspicious they were. Mysteries have a formula, and the people who made them used to know that. Now, that kind of basic knowledge is a lot less widespread than it used to be.

Making good television is a skill, and so is making alluringly addictive television. The industry hasn’t been set up to nurture either ability for a while.

Without those crucial entry-level positions, the overall quality suffers, and the future promises only more suffering.

This isn’t just true for TV. Look at the world of programming, where a developer might start in QA or as an entry-level programmer. They’ll write simple code that will be reviewed with a fine-tooth comb by a more senior programmer, who will not only correct their code but teach them to avoid those mistakes in the future. It takes time, but eventually that entry-level coder will gain the experience needed to write solid code on their own (though it should still always be reviewed, I mean, come on).

But LLMs promise to replace that entry-level coder. In the interest of saving costs, they’ll “empower” more senior coders to simply ask the LLM for the code they want; those senior coders can then review it and integrate it into their code. The senior coder is more productive, the LLM company has made money, and the coder’s company has actually SAVED money because they didn’t have to hire that entry-level coder. Everyone wins!

For now.

But what happens when that coder leaves? Or retires? Who will take their place? The bottom rungs have been removed and now no one can follow them up the ladder.

The same holds true for every industry. In cybersecurity, we need entry-level pentesters and analysts in order to get senior pentesters and analysts, but entry-level job ads are outnumbered five to one by postings seeking an experienced cybersecurity professional. Where are those experienced cybersecurity professionals going to come from?

The solution is a dead end

Companies selling LLMs might say “Why, you have LLMs now! You’ll NEVER need another entry level person! And eventually our LLM will be good enough to write that advanced code. You won’t need to replace your senior engineers because by then WE can be your senior engineers!”

The social and economic implications of this are apocalyptic. But we’re going to ignore them for now and focus on the real problem here.

If code only comes from LLMs, who is writing the code they need to be trained on? Who is creating new techniques and theories? Much like the problem with journalism, the end result is an LLM starved of content, forced to use lower- and lower-quality inputs because they’re the only things available.

But by then it would be too late. There wouldn’t be good coders — none of them could climb the ladder with the bottom rungs removed. Where are the senior engineers of 2034? They lost their jobs to LLMs in 2024. They’re all working as “prompt engineers” now.

The call for radical action

Here is the point in my blog when I make a call for radical action. And here it is!

Don’t use perplexity.ai. They’re crooks and, if they are successful, they will decimate journalism and potentially much of online publishing of any kind.

But also, look at the end result of what you’re doing before you do it.

We have heard all of these forecasts that AI will change the world. It’ll free mankind. It’ll destroy mankind. It’ll make everything better. It’ll make everything worse.

What’s crazy is that most of that stuff is coming from the companies making AI. Even the seemingly bad press like AGI potentially wiping out humanity is coming from inside OpenAI! It is in their interest to convince the world that what they’re doing is inevitable and inescapable. There are literally, LITERALLY, tens or hundreds of billions of dollars on the line for them.

But don’t listen to them! AI is just another tool and we shouldn’t let the people selling the tool tell us that it’s the only tool we’ll ever need and it’ll replace every other tool. If someone said that about a wrench or a hammer we would be skeptical. Why aren’t we skeptical about “digital god??”

So I’m not saying don’t use GenAI; I’m saying extrapolate out how your use will affect the world you live in. I’m also saying that you should examine the companies that are selling you AI to see which ones are stealing and which ones are doing their best to, you know, not steal. And then maybe use the ones that aren’t stealing.

Most importantly, as the title says, Patronize People. If journalism is important to you, pay people who do good journalism. If music is important to you, support people who make good music. If art is important to you, support artists who create their own art.

AI is, at this point, just a tool. People are people. Use AI if you’d like, but not at the expense of people.

