The Daily Cardinal Est. 1892
Thursday, November 06, 2025

Can AI write research papers?

As artificial intelligence continues to permeate daily life, researchers nationwide have grappled with the ripples of this new technology.

In March, Nature, one of the world’s most influential research journals, published an article by a researcher claiming his scientific paper was seemingly peer-reviewed by artificial intelligence without his consent.

“Here is a revised version of your review with improved clarity and structure,” the peer review read, a telltale sign of AI-generated writing.

Timothée Poisot, the researcher in question, was upset by the artificially generated peer review, but researchers are split on the use of AI in academia. In a survey of over 300 U.S. computational biologists and AI researchers, around 40% of respondents said “the AI was either more helpful than the human reviews, or as helpful.”

Liner, an AI startup based in California, operates a web browser that aims to tap into researchers’ hopes and fears about AI with a “research first” app offering AI-generated peer reviews, automatic citations and AI-simulated survey responses.

Kyum Kim, head of Liner’s U.S. operations, told The Daily Cardinal his company had “the world's most accurate AI search engine.”

On SimpleQA, a hallucination benchmark that measures how often large language models (LLMs) answer short factual questions correctly, Liner says it beat out every major competitor, with 95% of its answers being correct. For comparison, GPT-4o, one of the default models behind ChatGPT, had an accuracy score of 38%.

“When you go to Liner, citations are very prominent because it's all about the sources. We care about the accuracy of sources, the relevance of sources and the credibility of sources,” Kim said. “We provide line-by-line citations for every question query.”

Using this approach to sourcing, Liner hopes to pitch itself not just to academia but also to professionals like financial analysts and lawyers, “where accuracy matters the most,” Kim said.

With a reported 12 million users worldwide, Liner hopes to “build trust in AI again” through its database of over 250 million academic papers, which supports citations, hypothesis generation and tracing research across different papers.

Students and staff at UW-Madison account for some of those users. According to a spokesperson for Liner, around 750 people with a “wisc.edu” email use Liner for academic and research purposes at UW-Madison. The Cardinal could not verify this claim.

Three papers partially or fully written by Liner were accepted into Agents4Science, a Stanford University-sponsored research competition billing itself as “the first open conference where AI serves as both primary authors and reviewers of research papers.”

While all three papers were featured in the contest, they faced scrutiny from human reviewers. Two of the three were “borderline rejected” by some reviewers, meaning they were theoretically sound but flawed in technical aspects like unexplored ideas or the writing itself. The third paper, which had the least documented use of Liner’s program, scored the highest and was “borderline accepted” by a human judge.


Kim isn’t worried about these hiccups or other criticisms, though. Instead, he said his company was “betting on the future” of AI-integrated research.

“What we're building here, I think, is a good example of AI helping people do good things: creating new knowledge and authentic science,” Kim said.

AI in research

As Nature’s reporting suggests, professors across the board are split on AI tools in research, and on whether companies like Liner really can create “authentic science” with their generative tools.

Ken Keefover-Ring, a professor of botany and geography at UW-Madison, was apprehensive about using AI tools like Liner in his research on essential oils of the genus Monarda. He was primarily concerned about AI tools that generate results and review data, as Liner’s survey simulator and citation generator claim to do.

“At some point [AI] just undermines the whole [research] process,” Keefover-Ring said. “Science is already under siege: ‘Oh, those scientists, they don't know what they're talking about.’ And if we don't stop this, it’ll only get worse.”

He was particularly concerned with Liner’s “Survey Simulator” feature, worrying that researchers might present AI-generated responses as a real survey, a “pseudo replication” that effectively falsifies statistical data.

Even without AI, Keefover-Ring has seen researchers falsify data to make their research seem relevant, pressured by the “publish or perish” mindset that rewards professors who publish meaningful results quickly. AI, he worries, could make faking data even easier.

Even before AI was created, thousands of articles had been retracted over false data. One website, Retraction Watch, keeps a database of over 67,000 retracted articles dating from 1927 to today.

“We're always trying to find loopholes and everything, and some people are going to be trying to cut corners with AI,” Keefover-Ring said. “But also, is it really worth all this hassle to do all of this and not really be that confident at the end whether it's real or not?”

Some professors have found less intrusive ways to integrate AI into their research. Tsung-Wei Huang, a professor of computer engineering whose research involves optimizing computer infrastructure, told The Cardinal he uses generative AI like ChatGPT for his “daily work.”

“Whether it's writing code or writing email, especially because lots of the time that code is very boilerplate, it can all easily be done by ChatGPT,” Huang said.

AI has allowed Huang to offload much of the menial work in his day-to-day job, leaving him more time to interact with students or write the high-level code he said AI hasn’t yet been able to replicate.

“This is very new to everybody, and I'm pretty sure even those who are not experts in engineering or machine learning are benefiting from AI,” he said. “This is a whole-stack innovation, and everybody needs to work together to overcome some of the new challenges involved.”
