The Daily Cardinal Est. 1892
Tuesday, February 10, 2026

UW experts talk AI research ethics

A panel of University of Wisconsin-Madison researchers shared insight into how generative artificial intelligence is changing research.

Researchers at the University of Wisconsin-Madison discussed ethical concerns stemming from the rise of generative artificial intelligence in academia and research at a Jan. 30 panel.

The panel, which included experts from the UW-Madison Data Science Institute, Libraries and Institutional Review Boards Office (IRB), provided recommendations for researchers, offering definitions and opportunities for ethical AI use in research. 

Dr. Anna Haensch, associate research professor for the Data Science Institute and associate director of the Digital Scholarship Hub for the UW-Madison Libraries, said AI often complicates research practices due to its ability to instantly write long research papers that “hallucinate” information, diminishing the quality of the research done by a team of scholars.

Jennifer Patiño, the digital scholarship coordinator for the UW-Madison Libraries, recommended researchers “make a plan” when working with AI. 

“Understand how [AI tools] work, thinking about their terms of use, licensing that might be involved and their actual functions,” Patiño said.

Patiño pointed to the functions of UW-Madison’s Research Data Services center, which helps researchers with publication.

“We are a free campus-wide resource, and we help researchers to make their data citable, reproducible and publicly accessible,” Patiño said. “We help researchers with data management plans, and also to share their data, to be in compliance with funders and their publications.”

She explained that many publishers take issue with AI tools harvesting data from privately owned publications. She said this process creates new, complex legal questions, such as who owns the intellectual property and research being published and whether AI models may be trained on that information.

“There are some publishers and journals that completely ban AI use. There are some that allow AI use for minimal things, such as improving the language, and then there are some that allow unspecified use as long as you disclose that use,” Patiño said. “It's really important to take a look at what both the publisher and the journal are saying, as their policies might clarify each other.”

Casey Pellien, IRB’s associate director of Minimal Risk Research, and Lisa Wilson, IRB’s director, delivered the final presentation of the panel together.

They focused on the IRB’s perspective on AI use in human-subject research, particularly as the technology becomes more prevalent. Wilson and Pellien defined human research as research in which an investigator “studies or analyzes” information from a human subject, whether that information is physical, mental, social or another form of data, in order to draw conclusions about human behavior.

“The proper ethical treatment of a new technology is very complicated and incomplete,” Wilson said.


Wilson urged researchers to “read” and “understand” an AI tool’s “terms of use” before using it, noting concerns regarding where the data put into the AI tool goes, the risks of reidentification and the subjects’ ability to withdraw their data once it’s put into the technology.

“Those are things to think about in your consent form,” Wilson said.

Panelists directed attention toward AI concerns at the university level, saying information entered into generative AI tools like ChatGPT, Anthropic Claude, Google Gemini and others is subject to individual organizational policies.

At UW-Madison, information that is classified as public may be entered into generative AI tools, but sharing “sensitive, restricted or otherwise protected institutional data” like passwords and documents is prohibited, according to UW-Madison Chief Information Security Officer Jeffrey Savoy’s 2024 statement on acceptable AI use.

Haensch told The Daily Cardinal in an email AI is affecting careers beyond the research lab. 

She pointed to legislation at the federal level, such as the AI-Related Job Impacts Clarity Act and the PREPARE Act, aiming to regulate AI interference in the workforce, especially hiring, training and human replacement by AI.

Haensch also highlighted the Trump Administration’s executive order from December, “Removing Barriers to American Leadership in Artificial Intelligence,” which aimed to increase AI use across the U.S. by threatening to withhold $21 billion of Broadband Equity and Access Deployment funding.

