
The ELVIS Act

There was a sizable Memphis contingent attending a press conference in Nashville last month, and not just because it concerned new bipartisan legislation known as the ELVIS Act. That’s not about naming another street after The King, but rather a recognition of how the distinctive, instantly recognizable voices of recording artists need new protections in the brave new world of artificial intelligence (AI). Officially speaking, it’s the Ensuring Likeness Voice and Image Security Act, which Gov. Bill Lee’s office describes as “a bill updating Tennessee’s Protection of Personal Rights law to include protections for songwriters, performers, and music industry professionals’ voice[s] from the misuse of artificial intelligence.” And among the catalysts for the legislation, it turned out, was the concern one Memphian felt over the risks of such misuse.

That would be Gebre Waddell, whose company Sound Credit is focused on ensuring recognition of music industry workers’ contributions to the recording arts via a custom platform that catalogs credits, like the liner notes of your dreams. That being the sea in which Waddell swims, confronting AI’s ability to mimic artists’ work came naturally to him, but he didn’t do so as a representative of Sound Credit, or as the secretary/treasurer of the Recording Academy, or as a member of the Tennessee Entertainment Commission (other hats that Waddell wears).

Photo: Bing AI

Rather, it all began with some casual party banter. Last year, Waddell was attending one of many celebrations honoring hip-hop’s 50th anniversary when a common concern kept coming up in conversation. “So we were chatting on the lawn and conversations just started turning towards AI,” he recalls. “This was not long after the fake Drake/fake The Weeknd thing happened.”

That was the phenomenon where, as Billboard reported last April, “a track called ‘Heart on My Sleeve,’ allegedly created with artificial intelligence to sound like it was by Drake and The Weeknd became the hottest thing in music.” It was quickly pulled from streaming services after raising concerns over potentially widespread deep fakes of human hitmakers, but the issue lingered in the minds of music industry influencers.

“As we were chatting,” Waddell recalls, “I was like, ‘You know, we just need to add AI language into an existing state’s right of publicity law, and then that could create some momentum for a federal law.’ That was just an idea that I threw out there and people were saying, ‘That would be great, you could probably pull some people together.’ So I came home and set up some Zooms.”

A “right of publicity law” is one that protects against unauthorized uses of a person’s name or likeness for commercial (and certain other) purposes, but there is no federal standard, only a hodgepodge of different states’ statutes. Tennessee has one of the country’s toughest right of publicity laws, but it does not feature language about AI. Waddell decided to fix that.

“I drafted a version of what the legislation could look like,” says Waddell. “Then I invited a number of people to a Zoom meeting to discuss it, and I showed them what I drafted. And it really created some momentum.” Clearly, this was permeating the zeitgeist, and the Recording Industry Association of America (RIAA) soon drafted their own version. The momentum only increased. “Boom, the very next thing to happen was the press conference,” says Waddell.

The Recording Academy, which last year helped launch the Human Artistry Campaign to protect human-created music in the face of AI, was there in force, as were other organizations, all eager to witness the first proposed state legislation to explicitly target AI fakes. As the Recording Academy’s news page noted, “The ELVIS Act is expected to be quickly considered by the state’s legislature, and with support from the Governor could soon become the first law of its kind. And the Recording Academy hopes it will also become model legislation for other states to follow. That same day, leaders on Capitol Hill took a similar step to protecting creators’ identity with the bipartisan introduction of the No AI FRAUD Act (H.R. 6943).”

Waddell, for his part, is feeling encouraged. “I fully support it. I think that, as it’s currently written, it’s exactly what we need. And the thing I’m really proud of is that it carries a West Tennessee namesake: It ended up being called the ELVIS Act. It started with the involvement of a Memphian and ended up having a very Memphis kind of name.”


Attorneys General Want Congressional Review of AI-Created Child Pornography

A bipartisan group of state attorneys general urged Congress to broaden its review of artificial intelligence (AI) to specifically include its use in creating deepfake images of child pornography.

Tennessee Attorney General Jonathan Skrmetti joined colleagues from 54 states and territories in a Tuesday letter asking federal officials to examine AI’s use in making child sexual abuse material (CSAM). The letter gave an example of how the process works.      

“AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions,” the letter reads. “This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.” 

The group said AI can also be used to create sexualized images and videos of children who “do not exist.”

“AI can combine data from photographs of both abused and non-abused children to animate new and realistic sexualized images of children who do not exist, but who may resemble actual children,” reads the letter. “Creating these images is easier than ever, as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are ‘open-source,’ the tools can be run in an unrestricted and un-policed way.”

The attorneys general want Congress to form a special commission to specifically study how AI can be used to exploit children. They also want federal lawmakers to expand existing restrictions on CSAM to explicitly cover AI-generated CSAM.


Tennessee Officials Begin to Grapple with Artificial Intelligence as Tech Takes Hold

Tennessee lawmakers and legal officials are adding their voices to a growing chorus of leaders interested in regulating artificial intelligence (AI) as the revolutionary technology begins to take hold in the state. 

Many internet users have by now dipped a toe in AI programs. The Flyer recently asked a text-to-image AI generator to create a photo of “Memphis in the future” (results below). We’ve also asked ChatGPT, so far the most user-friendly and low-barrier AI program, to “write a news story about Memphis.” Turns out, that phrase was too vague, and the program basically spit out the city’s Wikipedia page. 

Memphis Flyer via Diffusion Bee

However, ABC24 reporters got a better response in May when they asked a specific question: What should Memphis do to improve its crime problems? The program said city leaders should focus on community policing, building better trust between residents and police officers, investing money in programs that address the root causes of crime, and expanding youth development programs like early childhood education.

Meanwhile, AI leaders from all over the world issued a dire warning about the technology last month. That warning, perhaps because of its very succinctness, made headlines across the globe and seemed to rattle leaders.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement housed at the website for the Center for AI Safety. 

If that doesn’t hit home, maybe you’re the kind of person to consider a dire warning from another … expert: Joe Rogan. He warned of AI’s power and influence when someone used ChatGPT to make a real-sounding but totally fake episode of his controversial podcast The Joe Rogan Experience last month. 

It seems AI has moved from the pages of comic books and sci-fi novels to laboratories, to early adopters, and to the Main Street internet pretty quickly. And lawmakers are now trying to get a handle on it.

Last month, Sen. Marsha Blackburn (R-Tennessee) worried that such programs could be used to create real-sounding but totally fake versions of country songs. She told Fox News that ChatGPT “pulls it right up, and then you can lay in that voice. Give me a voice that sounds like Garth Brooks. Give me a voice that sounds like Reba McEntire singing.” The idea could have major implications for Nashville’s — and the state’s — music industry.

Blackburn expressed concern this week that governments could use AI “to further their surveillance operations.” 

“I’ve watched what has happened in China and how they are using AI to grow the surveillance state,” Blackburn said. “They’re very aggressive in this, and we know that they have used it.”

In Nashville this week, Tennessee Attorney General Jonathan Skrmetti urged the National Telecommunications and Information Administration (NTIA) to create governance policies for AI, especially as it “is developed or used to make decisions that result in legal or other significant effects on people.” Of special concern to the AG (and the other AGs who signed a letter this week) was the use of sensitive data like medical information, biometric data, or personal information about children in AI and the possible outputs from it, like deepfakes.

“For example, consumers must be told when they are interacting with an AI rather than a human being and whether the risks of using an AI system are negligible or considerable,” reads the letter. 

The letter says, however, that any governance shouldn’t dampen innovation in the AI space, much the same thing legislators said about the internet when it became more widely available.

That innovation in AI has already started to spread across Tennessee and in Memphis. For example, the University of Memphis’ Institute for Intelligent Systems lists more than 20 AI projects underway at the school. 

One project, AutoTutor, “is a computer tutor that helps students learn by holding a conversation in natural language.” That project has won nearly $5 million in research grants from the federal government. Another project, Personal Assistant for Life Long Learning (PAL3), will guide new Navy sailors in performing their mission-essential shipboard duties. The University of Memphis’ portion of the funding for this project is $400,000.

Further east, Oak Ridge National Laboratory, the federally funded research and development lab in East Tennessee, launched the Artificial Intelligence Initiative to help its scientists use AI to accelerate their discoveries. Nearby in Knoxville, the University of Tennessee launched the $1-million AI Tennessee Initiative in March to fund researchers using AI in “smart manufacturing, climate-smart agriculture and forestry, precision health and environment, future mobility, and AI for science.”


AI Robots Invade the Classroom — So What?

The future tapped me quietly on the shoulder the other day and suggested that I take a moment to learn about the writing bots. They’re coming!

Excuse me, they’re here. And they struck me as alien invaders, this recent manifestation of artificial intelligence on the internet, which college students, high school students — anybody — can download, feed a topic, and get it to write an essay for them. Is this technology’s next step, after Roomba the robot vacuum cleaner? Humanity is relieved of one more odious task — writing stuff.

“The chatbot,” Kalley Huang pointed out recently in the New York Times, “generates eerily articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction — and their schoolwork.” Apparently, all you need to do to get the AI bot to produce a piece of prose (or poetry?) is give it a subject and whatever other information is necessary to define the topic you want it to blather about. It can then access the entire internet for its data and produce whatever — your English paper, your love sonnet. The possibility of student cheating has suddenly become dire enough that college professors are starting to rethink their writing assignments.

I have some advice for them. But before I get to that, I need to calm my own pounding heart. Writing — to me, as a lifelong journalist, essayist, poet, editor, writing teacher — can be difficult as hell, but every hour devoted to a project is a wondrous adventure, a reach into the great unknown, a journey of discovery, of learning, of becoming. I have described the columns I write as “prayers disguised as op-eds,” and it’s that word, prayer, that swelled and started palpitating as I stumbled on the existence of the writing bot. Should we let AI start writing our prayers? Should we shrug and simply stop being our fullest selves? Life is messy and writing is messy — it has to be. Truth is messy. If we turn the writing process over to the AI bots, my existential fear is that humanity has taken a step toward ending its evolution, ensconcing itself in a prison of conveniences.

“Due to its free nature and ability to write human-like essays on almost any topic, many students have been reaching for this model for their university assignments,” according to the website PC Guide, focusing its attention on an AI bot called ChatGPT, which recently proved smart enough to pass a bar exam. “And if you are a student hoping to use this in the future, you may have concerns about whether your university can detect ChatGPT.” These words start to get at my primary concern about the whole phenomenon: Critics are missing the point, as they lament that the university’s grading system is under assault. OMG, has cheating gotten easier?

And suddenly it gets clear. When it comes to writing, there’s always been a gaping hole in the American educational system, a mainstream misunderstanding of the nature — the value — of actually learning to write … finding your words, finding your wisdom, finding your voice. Let me repeat: Finding your voice. That’s where it starts. Without it, what do you have? I fear this is a silent question that plagues way too many students — way too many people of all ages — who were taught, or force-fed, spelling and grammar and the yada yada of thematic construction: opening paragraph, whatever, conclusion.

I quote my mentor and longtime friend, the late Ken Macrorie, one of the teachers who bucked this system oh so many decades ago, when I was an undergraduate at Western Michigan University. He was a professor in the English department: “This dehydrated manner of producing writing that is never read is the contribution of the English teacher to the total university,” he wrote in his 1970 book, Uptaught. He was writing about his own career. He was trapped in a system that disdained most undergrads and their writing and often managed to force the worst out of them, aka academic writing, such as: “I consider experience to be an important part in the process of learning. For example, in the case of an athlete, experience plays an important role.”

Dead language! May it rest in peace. Artificial intelligence can no doubt do just as well, probably a lot better. Macrorie quoted this oh so typical example in his book — the kind of writing that is devoid of not only meaning but soul. His breakthrough discovery was what he called free writing: He had his students, on a regular basis, sit down and write for 20 minutes or longer without stopping — just let the words flow, let fragments of truth emerge, and share what you have written. Worry later about spelling, grammar, and such. First you have to find your voice.

I wound up taking his advanced writing class in 1966, two years after he began using free writing as his starting place. Wow. I found my way in … into my own soul. I learned that truth is not sheerly an external entity to be found in some important book. We all have it within us. Doing a “free write” is a means of panning for gold.

And this is the context in which I ponder this recent bit of techno-news: that students don’t have to rely on plagiarism to fake an essay. They can simply prompt a bot and let it do the work.

But that’s not the essence of our social dilemma. As long as the system — let’s call it artificial education — focuses on “teaching to the test” and insists on reducing individual intelligence to a number, and in so many ways ignores and belittles the complex and awakening potential of each student, we have a problem. AI isn’t the cause, but it helps expose it.

Robert Koehler (koehlercw@gmail.com), syndicated by PeaceVoice, is an award-winning Chicago journalist and editor. He is the author of Courage Grows Strong at the Wound.


MoSH’s “Artificial Intelligence” Exhibitions

We are all scared of the robots overtaking us. Is this a gross generalization? Of course. But if horror movies (*cough* M3GAN) have offered us any insight into humankind, it’s that a lot of us are a little bit skeptical of what has been dubbed artificial intelligence (AI) even though we use it every day, from opening our phones with facial recognition to asking Alexa to play our favorite jams. In most cases, you could even say we take AI for granted without truly understanding what it is or how it works. That’s what the Museum of Science & History is seeking to rectify, with two new exhibitions opening this week: “Artificial Intelligence: Your Mind & the Machine” and “Web of Innovation: AI in Memphis.”

The “Artificial Intelligence” exhibition has traveled throughout the country and features interactive displays that will demonstrate, for instance, how a computer recognizes faces or how a self-driving vehicle navigates a street. “It really tries to explain how the human brain and how computers interact in the world, and how our brains and AI will work in the future,” says Raka Nandi, director of exhibits and collections. “Visitors will learn about the history of AI, what it is, what it isn’t. … AI is really the way in which we try to make machines behave and think like humans.”

To accompany the traveling exhibition, MoSH has also curated “Web of Innovation,” which highlights the use of AI technology among local entrepreneurs and researchers, such as those at the Institute for Intelligent Systems at the University of Memphis, St. Jude Children’s Research Hospital, and even FedEx. “We tend to think that all of this is happening on the West Coast, but right here in Memphis there are innovators who are doing a lot of good stuff that is making the city better,” Nandi says. “We’re hoping that the local component, as well as the traveling one, inspires young people to focus on career-connected learning and to really think about how AI is part of their daily world and also how it’ll be a big part of their life in the future.”

Nandi adds that the museum hopes people of all ages will see and enjoy the exhibit with all its interactives that make complex ideas much more accessible (and fun). Prior to working on these exhibitions, Nandi admits that even she didn’t know much about AI. “I think we all feel like we understand AI, but we don’t,” she says.

By the time visitors leave the exhibits, Nandi hopes that they will also consider philosophical questions that might be raised. “Machines are using complex mathematical equations to recognize things, to make decisions,” she says. “But it’s just that — it’s math. It’s not a moral code. It’s not societal cues; it’s not social cues. Those are all human ways of thinking that cannot be mimicked by a machine.”

“Artificial Intelligence: Your Mind & the Machine” and “Web of Innovation: AI in Memphis,” Museum of Science & History, Sunday, January 22 – May 6.