Within days of being sworn in as a state senator, Sen. Jodi Habush Sinykin, D-Whitefish Bay, made her first site visit to Rogers Behavioral Health Hospital. There, health care providers informed her of a disturbing trend: a rise in AI-generated deepfake pornography of young girls.
Staff heard girls describe scenes of students huddled over their screens, laughing at fake nude images of their female classmates. Sinykin said this behavior creates a difficult environment for girls trying to navigate their academics and social lives.
“I had heard about it earlier and was just horrified — as a mom of four children, including a daughter, and just myself, of course, being female — and how this is being misused in such a rampant way,” Sinykin told The Daily Cardinal.
A 2023 report from Thorn, a nonprofit that builds technology to protect children from sexual abuse, found that between 1 in 10 (11%) and 1 in 5 students across the country knew of classmates or friends who had used AI to generate sexually intimate deepfakes of other students.
When Sinykin later heard of a bill that would penalize the creation and distribution of nonconsensual deepfake pornography, she brought it to Milwaukee County prosecutors, who helped her draft it alongside the Sensitive Crimes Unit, which handles cases involving women and other vulnerable groups.
Sinykin said prosecutors raised concerns about the bill’s language, as the original version of the legislation focused on the perpetrator’s “intent,” which they said would make convictions difficult. Instead, prosecutors recommended focusing on whether a deepfake was created or shared without the subject’s consent.
The bill unanimously passed both chambers and was sent to Gov. Tony Evers, who signed it into law Oct. 3.
Before the legislation passed, Wisconsin law classified the nonconsensual taking and sharing of nude photos as a Class I felony. The new law expands that prohibition to include AI-generated “deepfake” images, making it a Class I felony to create or share an intimate image of someone without their consent with intent to coerce, harass or intimidate them.
Bill sponsor Sen. André Jacque, R-New Franken, said the legislation was written with growing public concern about AI in mind. He described the bill as a way to ensure technological advancements are handled responsibly.
“I just think it's really important, as we look at the promise of AI in a number of contexts, that we are at the same time putting guardrails on its use to make sure that we are not letting it go in a dark direction that can do quite a bit of harm,” Jacque told the Cardinal. “This is absolutely an example of a bright line that will offer some protection, especially as people realize that it'll be enforced.”
The bill included personal research from its sponsors as well as external studies. Sinykin cited a Sensity AI report which found that 90-95% of deepfake videos since 2018 consisted of nonconsensual pornography, with about 90% of those being of women.
Alan Rubel, a professor at the University of Wisconsin-Madison, told the Cardinal deepfakes can trigger emotional responses that affect people’s trust in social and informational environments, and that loss of trust can easily be abused.
He noted that these fabricated images can also influence how people form relationships, calling deepfakes an easy tool for exploitation and manipulation that can ultimately reshape how individuals approach personal connections.
Rubel also emphasized technology’s rapid advancements, and the constant creation of new tools that could cause harm.
“It [technology] always moves quickly, and it's always ripe for some kind of abuse, no matter whose hands it's in,” Rubel said.
Rubel said many people remain unfamiliar with today's technology, creating a gap between users and their understanding of the dangers it can pose as it evolves.
“Most people have not explored the technology in-depth. It takes some effort to understand it, and it can happen in a way that's that surreptitious. I think that we're not well prepared,” Rubel said. “We were not well prepared even before exploitative images were generated by AI — [we weren’t] prepared for exploitative images that were not synthetic. This will be even more difficult to address.”
Rubel said while the law targets individuals who create deepfakes, it does not address those who distribute the tools used to produce them.
“There's always people who are going to exploit any technology into either malicious, ill conceived or stupid stuff with it. The entities — the people that make the technology — share some responsibility,” Rubel said. “If they have noticed that technology is being used to create deepfakes, [they] should share some liability as well.”
He added that while millions of people can use a single technology to cause harm, there’s typically only one source creating it.
“If you could stop the one technology … that's a more efficient way,” Rubel said. “And I think that's where a lot of the responsibility lies.”
Jacque, Sinykin and Rubel all encouraged parents to stay engaged with how their children use technology. Sinykin went further to emphasize the importance of educating children about what they shouldn’t participate in and how to respond if they encounter or learn about deepfake distribution.
“It's just yet another deeply concerning problem that we have to inform our kids about,” Sinykin said.
She added that parents should urge their children to contact law enforcement or a trusted adult if they encounter deepfakes, to help ensure perpetrators are identified and convicted. Sinykin also stressed the need for conversation between parents and children so parents know whether their kids have been personally affected by deepfakes.
Zoey Elwood is copy chief for The Daily Cardinal. She also covers state news.