At Large: Letter From An Editor

Sweet Dreams

Did you see the video of President Trump singing the Eurythmics’ 1980s hit, “Sweet Dreams”? He’s really pretty good, to be honest. Except honesty has nothing to do with it. The video — all of it, including the imitation of Trump’s voice — was created by a Google artificial intelligence program, an algorithm trained on Trump’s voice and speech patterns and tasked with creating this bizarre cover song.

The video was only online for a couple of days, but it’s just another example of what we’re all going to be facing in the coming years: the fact that most human creative endeavors can be replicated by artificial intelligence, including novels, screenplays, television scripts, videos of politicians or celebrities (or any of us), pornography, political propaganda, advertising jingles, emails, phone calls, “documentaries,” and even the news. It’s going to be a huge influence on our lives, and it has an enormous potential for creating mischief via disinformation and the manipulation of “reality.”

That’s why seven companies — Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI — met with President Biden last Friday to announce a voluntary commitment to standards in the areas of safety and security. The companies agreed to:

  • Security-test their AI products, and share information about their products with the government and other organizations attempting to manage the risks of AI.
  • Implement watermarks or other means of identifying AI-generated content.
  • Deploy AI tools to tackle society’s challenges, including curing disease and combating climate change.
  • Conduct research on the risks of bias and invasion of privacy from the spread of AI.

Again, these were voluntary agreements, and it bears noting that these seven companies are fierce competitors and unlikely to share anything that costs them a competitive edge. The regulation of artificial intelligence will soon require more than a loose, voluntary agreement to uphold ethical standards.

The U.S. isn’t alone in trying to regulate the burgeoning AI industry. Governments around the globe — friendly, and not so friendly — are doing the same. Learning the secrets of AI is the new global arms race. Using AI disinformation to control or influence human behavior is a potential weapon with terrifying prospects.

It’s also a tool that corporations are already using. I got an email this week urging me to buy an AI program that would generate promotional emails for my company. All I had to do was give the program the details about what I wanted to promote, and the AI algorithm would do the rest, cranking out “lively and engaging” emails sure to win over my customers. I don’t have a company, but if I did, the thinly veiled implication was that this program could eliminate a salary.

It’s part of what’s driving the strike by screen actors and writers against the major film and television studios: The next episode of your favorite TV show could be “written” by an AI program, thereby eliminating a salary. Will the public care — or even know — if, say, the latest episode of Law & Order was generated by AI? Will Zuckerberg figure out how to use AI to coerce you into giving Meta even more of your personal information? (Does it even Meta at this point? Sorry.) You can be sure we’ll find out the answer to those questions fairly soon.

And we’ve barely begun to see how AI can be used in the dirty business of politics. Florida Governor Ron DeSantis’ campaign used an AI-generated voice of Donald Trump in an ad that ran in Iowa last week. Trump himself never spoke the words used in the ad, but if you weren’t aware of that, you might be inclined to believe he did. Which is, of course, the point: to fool us, to make the fake seem real. It’s coming. It’s here. Stay woke, y’all.

Sweet dreams are made of this
Who am I to disagree
I travel the world and the seven seas
Everybody’s looking for something