6 Sources
[1]
Hundreds of suspicious journals flagged by AI screening tool
Researchers have identified more than 1,000 potentially problematic open-access journals using an artificial intelligence (AI) tool that screened around 15,000 titles for signs of dubious publishing practices. The approach, described in Science Advances on 27 August, could be used to help tackle the rise in what the study authors call "questionable open-access journals" -- those that charge fees to publish papers without doing rigorous peer review or quality checks. None of the journals flagged by the tool has previously been on any kind of watchlist, and some titles are owned by large, reputable publishers. Together, the journals have published hundreds of thousands of research papers that have received millions of citations.

The study suggests that "there's a whole group of problematic journals in plain sight that are functioning as supposedly respected journals that really don't deserve that qualification", says Jennifer Byrne, a research-integrity sleuth and cancer researcher at the University of Sydney, Australia.

The tool is available online in a closed beta version, and organizations that index journals, or publishers, can use it to review their portfolios, says study co-author Daniel Acuña, a computer scientist at the University of Colorado Boulder. But, he adds, the AI sometimes makes mistakes, and is not designed to replace detailed evaluations of journals and individual publications that might result in a title being removed from an index. "A human expert should be part of the vetting process" before any action is taken, he says.

The AI tool can analyse a vast amount of information from journals' websites and the papers they publish, and search for red flags -- such as short turnaround times for publishing articles and high rates of self-citation. It also assesses whether members of a journal's editorial board are affiliated with well-known, reputable research institutions, and checks how transparent publications are about licensing and fees. Several of the criteria used to train the tool come from best-practice guidance developed by the Directory of Open Access Journals (DOAJ), an index of open-access journals run by the non-profit DOAJ Foundation in Roskilde, Denmark.

Cenyu Shen, the DOAJ's deputy head of editorial quality, who is based in Helsinki, says that the number of problematic journals is rising, and that their "tactics are becoming more sophisticated". "We are observing more instances where questionable publishers acquire legitimate journals, or where paper mills purchase journals to publish low-quality work," she adds. (Paper mills are businesses that sell fake papers and authorships.)

The DOAJ's own quality checks on journals are done mostly manually and are initiated only after receiving complaints. In 2024, the directory investigated 473 journals, a rise of 40% compared with 2021. "The time our team spent on these investigations also grew significantly by nearly 30%, to 837 hours," says Shen.

AI tools could help to speed up some of these assessments, Acuña says. He and his colleagues trained their model on 12,869 journals that are currently indexed in the DOAJ as legitimate, as well as 2,536 that the directory had flagged as violating its quality standards. When the researchers asked the AI to evaluate 15,191 open-access journals listed in the public database Unpaywall, it identified 1,437 journals as questionable.
The team estimated that some 345 of these were mistakenly flagged: they included discontinued titles, book series and journals from small, learned-society publishers. Based on estimates of error rates, the researchers also found that the tool had failed to flag a further 1,782 questionable journals.

The team also tested the tool's performance under two stringency levels. When tuned to the loosest setting, it flagged 8,800 journals, missing fewer than 150 problematic titles but wrongly flagging 6,100. At the stricter setting -- which minimizes the risk of such false alarms -- it flagged only about 240 journals, but 2,600 problematic ones went undetected. The tool "can prioritize either comprehensive screening or precise, low-noise identification," says Acuña. "What we present in the paper is this balanced approach." Byrne says the tool's tuning flexibility is an "appealing feature".

The authors say that their tool is still provisional and that they hope to refine it further. Aside from concerns about accuracy, Shen says that automated checks "may disadvantage non-English-language journals, and ranking editors by institutional affiliation may undervalue editors from less-well-funded institutions or developing countries". "The key challenge lies in selecting features that AI can measure reliably and without bias, and understanding how accurate those features are when combined into a predictive model," she says.

Still, such tools could help assessors to deal with the sheer volume of journals that require scrutiny. "Ensuring the integrity of open-access publishing ultimately requires human oversight and rigorous, evidence-driven assessment," says Shen. But, she adds, "if accuracy improves, AI could certainly play a useful supporting role, helping us manage scale and reduce the labour-intensive nature of reviews".
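The stringency settings described above correspond to moving a classifier's decision threshold. What follows is a minimal sketch of that idea, assuming a scikit-learn-style pipeline with hypothetical file and feature names; the study's actual features and code are not public.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled training data in the spirit of the study: journals indexed by the
# DOAJ as legitimate (label 0) and journals the DOAJ flagged for violating
# its standards (label 1). The file and feature names are hypothetical.
df = pd.read_csv("journals_labeled.csv")
features = ["self_citation_rate", "articles_per_year",
            "editor_affiliation_score", "fee_transparency"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["questionable"], stratify=df["questionable"], random_state=0)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # estimated P(journal is questionable)

# Lowering the threshold screens comprehensively (few misses, more false
# alarms); raising it yields a precise, low-noise shortlist (the reverse).
for threshold in (0.2, 0.5, 0.8):
    print(f"threshold {threshold}: {(scores >= threshold).sum()} journals flagged")

At a low threshold the flag list balloons, akin to the 8,800-journal setting; at a high threshold it shrinks to a precise shortlist, akin to the roughly 240-journal setting.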
[2]
AI exposes 1,000+ fake science journals
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers -- for a hefty fee.

Such publications are sometimes referred to as "predatory" journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

"There has been a growing effort among scientists and organizations to vet these journals," Acuña said. "But it's like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name."

His group's new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn't perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable. But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said. "In science, you don't start from scratch. You build on top of the research of others," Acuña said. "So if the foundation of that tower crumbles, then the entire thing collapses."

The shake down

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality -- or, at least, that's the goal. A growing number of companies have sought to circumvent that process to turn a profit.

In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase "predatory" journals to describe these publications. Often, they target researchers outside of the United States and Europe, such as in China, India and Iran -- countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high. "They will say, 'If you pay $500 or $1,000, we will review your paper,'" Acuña said. "In reality, they don't provide any service. They just take the PDF and post it on their website."

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.) But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ's data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet. Among those journals, the AI initially flagged more than 1,400 as potentially problematic. Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
"I think this should be used as a helper to prescreen large numbers of journals," he said. "But human professionals should do the final analysis." A firewall for science Acuña added that the researchers didn't want their system to be a "black box" like some other AI platforms. "With ChatGPT, for example, you often don't understand why it's suggesting something," Acuña said. "We tried to make ours as interpretable as possible." The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level. The new AI system isn't publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data -- what he calls a "firewall for science." "As a computer scientist, I often give the example of when a new smartphone comes out," he said. "We know the phone's software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science."
[3]
AI spies questionable science journals, with some human help
"Louis, I think this is the beginning of a beautiful friendship" About 1,000 of a set of 15,000 open access scientific journals appear to exist mainly to extract fees from naive academics. A trio of computer scientists from the University of Colorado Boulder, Syracuse University, and China's Eastern Institute of Technology (EIT) arrived at this figure after building a machine learning classier to help identify "questionable" journals and then conducting a human review of the results - because AI falls short on its own. A questionable journal is one that violates best practices and has low editorial standards, existing mainly to coax academics into paying high fees to have their work appear in a publication that fails to provide expected editorial review. As detailed in a research paper published in Science Advances, "Estimating the predictability of questionable open-access journals," scientific journals prior to the 1990s tended to be closed, available only through subscriptions paid for by institutions. The open access movement changed that dynamic. It dates back to the 1990s, as the free software movement was gaining momentum, when researchers sought to expand the availability of academic research. One consequence of that transition, however, was that costs associated with peer-review and publication were shifted from subscribing organizations to authors. "The open access movement was set out to fix this lack of accessibility by changing the payment model," the paper explains. "Open-access venues ask authors to pay directly rather than ask universities or libraries to subscribe, allowing scientists to retain their copyrights." Open access scientific publishing is now widely accepted. For example, a 2022 memorandum from the White House Office of Science and Technology Policy directed US agencies to come up with a plan by the end of 2025 to make taxpayer-supported research publicly available. But the shift toward open access has led to the proliferation of dubious scientific publications. For more than a decade, researchers have been raising concerns about predatory and hijacked [PDF] journals. The authors credit Jeffrey Beall, a librarian at the University of Colorado, with applying the term "predatory publishing" in 2009 to suspect journals that try to extract fees from authors without editorial review services. An archived version of Beall's List of Potentially Predatory Journals and Publishers can still be found. The problem with a list-based approach is that scam journals can change their names and websites with ease. In light of these issues, Daniel Acuña (UC Boulder), Han Zhuang (EIT), and Lizheng Liang (Syracuse), set out to see whether an AI model might be able to help separate legitimate publications from the questionable ones using detectable characteristics (e.g. authors that frequently cite their own work). "Science progresses through relying on the work of others," Acuña told The Register in an email. "Bad science is polluting the scientific landscape with unusable findings. Questionable journals publish almost anything and therefore the science they have is unreliable. "What I hope to accomplish is to help get rid of this bad science by proactively helping flagging suspected journals so that professionals (who are scarce) can focus their efforts on what's most important." Acuña is also the founder of ReviewerZero AI, a service that employs AI to detect research integrity problems. 
Winnowing down a data set of nearly 200,000 open access journals, the three computer scientists settled on a set of 15,191 of them. They trained a classifier model to identify dubious journals, and when they ran it on the set of 15,191, the model flagged 1,437 titles. But the model missed the mark about a quarter of the time, based on subsequent human review.

"About 1,092 are expected to be genuinely questionable, ~345 are false positives (24 percent of the flagged set), and ~1,782 problematic journals would remain undetected (false negatives)," the paper says.

"At a broader level, our technique can be adapted," said Acuña. "If we care a lot about false positives, we can flag more stringently." He pointed to a passage in the paper that says under a more stringent setting, only five false alarms out of 240 would be expected. Acuña added that while many AI applications today aim for full automation, "for such delicate matters as the one we are examining here, the AI is not there yet, but it helps a lot."

The authors are not yet ready to name and shame the dubious journals - doing so could invite a legal challenge. "We hope to collaborate with indexing services and assist reputable publishers who may be concerned about the degradation of their journals," said Acuña. "We could make it available in the near future to scientists before they submit to a journal."
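The numbers quoted from the paper imply the classifier's approximate precision and recall at this balanced setting; the arithmetic below uses only the figures reported above.

# Error-rate arithmetic from the figures quoted above.
flagged, false_pos, missed = 1437, 345, 1782
true_pos = flagged - false_pos               # ~1,092 genuinely questionable
precision = true_pos / flagged               # ~0.76: about 24% of flags are false alarms
recall = true_pos / (true_pos + missed)      # ~0.38: most questionable journals go undetected
print(f"precision ~ {precision:.0%}, recall ~ {recall:.0%}")

In other words, roughly three out of four flags are correct, but at this setting the tool still misses more questionable journals than it catches -- which is why the adjustable threshold matters.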
[4]
New AI tool can spot shady science journals and safeguard research integrity
One of the big benefits of open-access journals is that they make research articles freely and immediately available to everyone online. This increases exposure for scientists and their work, ensuring there are no barriers, such as cost, to knowledge. Anyone with an internet connection can access the research from anywhere.

However, the rapid growth of this model has also led to the rise of questionable journals that exploit publishing fees paid by authors. They often promise quick publication but lack a rigorous peer-review process. Now there's a new AI tool that can spot telltale signs of shady journals, helping scientists avoid publishing in disreputable outlets.

In a paper published in Science Advances, researchers describe how they trained AI to act like a detective. They fed it more than 12,000 high-quality journals and around 2,500 low-quality or questionable publications; the latter were once part of the Directory of Open Access Journals (DOAJ) but were removed for violating guidelines. The AI then learned to look for red flags on journal websites and in their publications, such as a lack of information about the editorial board, sloppy website design and low citation numbers.

The researchers then applied their trained model to a dataset of 93,804 open-access journals from Unpaywall, an online service that helps users find free versions of scholarly papers that are usually behind paywalls. It flagged more than 1,000 previously unknown suspect journals that collectively publish hundreds of thousands of articles. The study does not name individual journals, partly due to concerns about potential legal reprisals. It does, however, state that many of the iffy ones are from developing countries.

Although this AI-based method is good at finding questionable journals at scale, it has some limitations. Currently, about 24% of the journals the system flags are false positives, meaning roughly one in four flagged titles is likely a legitimate journal marked as suspect. As the researchers write in their paper, this means that human experts will also be needed: "Our findings demonstrate AI's potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review."

Protecting scientific integrity

The authors believe that further research can refine the AI tool's features and help it keep up with the evolving tactics of questionable publishers. This will be an ongoing battle that requires sharp human eyes and smarter AI systems. Humans and machines working together can help guide authors away from deceptive outlets and protect the integrity of scientific publishing across the world.
[5]
AI Tool Flags Predatory Journals, Building a Firewall for Science - Neuroscience News
Summary: A new AI system developed by computer scientists automatically screens open-access journals to identify potentially predatory publications. These journals often charge high fees to publish without proper peer review, undermining scientific credibility. The AI analyzed over 15,000 journals and flagged more than 1,000 as questionable, offering researchers a scalable way to spot risks. While the system isn't perfect, it serves as a crucial first filter, with human experts making the final calls.

A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out "questionable" scientific journals. The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers -- for a hefty fee.

Such publications are sometimes referred to as "predatory" journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

"There has been a growing effort among scientists and organizations to vet these journals," Acuña said. "But it's like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name."

His group's new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn't perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable. But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said. "In science, you don't start from scratch. You build on top of the research of others," Acuña said. "So if the foundation of that tower crumbles, then the entire thing collapses."

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality -- or, at least, that's the goal. A growing number of companies have sought to circumvent that process to turn a profit.

In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase "predatory" journals to describe these publications. Often, they target researchers outside of the United States and Europe, such as in China, India and Iran -- countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high. "They will say, 'If you pay $500 or $1,000, we will review your paper,'" Acuña said. "In reality, they don't provide any service. They just take the PDF and post it on their website."

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria.
(Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.) But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ's data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet. Among those journals, the AI initially flagged more than 1,400 as potentially problematic. Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.

"I think this should be used as a helper to prescreen large numbers of journals," he said. "But human professionals should do the final analysis."

Acuña added that the researchers didn't want their system to be a "black box" like some other AI platforms. "With ChatGPT, for example, you often don't understand why it's suggesting something," Acuña said. "We tried to make ours as interpretable as possible."

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.

The new AI system isn't publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data -- what he calls a "firewall for science." "As a computer scientist, I often give the example of when a new smartphone comes out," he said. "We know the phone's software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science."

Estimating the predictability of questionable open-access journals

Questionable journals threaten global research integrity, yet manual vetting can be slow and inflexible. Here, we explore the potential of artificial intelligence (AI) to systematically identify such venues by analyzing website design, content, and publication metadata. Evaluated against extensive human-annotated datasets, our method achieves practical accuracy and uncovers previously overlooked indicators of journal legitimacy. By adjusting the decision threshold, our method can prioritize either comprehensive screening or precise, low-noise identification. At a balanced threshold, we flag over 1000 suspect journals, which collectively publish hundreds of thousands of articles, receive millions of citations, acknowledge funding from major agencies, and attract authors from developing countries. Error analysis reveals challenges involving discontinued titles, book series misclassified as journals, and small society outlets with limited online presence, which are issues addressable with improved data quality. Our findings demonstrate AI's potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review.
[6]
A firewall for science: AI tool identifies 1,000 'questionable' journals
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out "questionable" scientific journals. The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers -- for a hefty fee.

Such publications are sometimes referred to as "predatory" journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

"There has been a growing effort among scientists and organizations to vet these journals," Acuña said. "But it's like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name."

His group's new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn't perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable. But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said. "In science, you don't start from scratch. You build on top of the research of others," Acuña said. "So if the foundation of that tower crumbles, then the entire thing collapses."

The shake down

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality -- or, at least, that's the goal. A growing number of companies have sought to circumvent that process to turn a profit.

In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase "predatory" journals to describe these publications. Often, they target researchers outside of the United States and Europe, such as in China, India and Iran -- countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high. "They will say, 'If you pay $500 or $1,000, we will review your paper,'" Acuña said. "In reality, they don't provide any service. They just take the PDF and post it on their website."

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.) But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ's data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet. Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.

"I think this should be used as a helper to prescreen large numbers of journals," he said. "But human professionals should do the final analysis."

Acuña added that the researchers didn't want their system to be a "black box" like some other AI platforms. "With ChatGPT, for example, you often don't understand why it's suggesting something," Acuña said. "We tried to make ours as interpretable as possible."

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.

The new AI system isn't publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data -- what he calls a "firewall for science." "As a computer scientist, I often give the example of when a new smartphone comes out," he said. "We know the phone's software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science."
Researchers have developed an AI tool that identified more than 1,000 potentially problematic open-access journals out of 15,000 analyzed. The tool aims to help combat the rise of questionable journals that undermine scientific credibility.
Researchers have developed an artificial intelligence (AI) tool capable of identifying potentially problematic open-access journals, addressing a growing concern in the scientific community. The study, published in Science Advances on August 27, 2025, describes how the AI system flagged over 1,000 questionable journals out of approximately 15,000 analyzed [1].
The open-access movement, which began in the 1990s, aimed to make scientific research more accessible by shifting publication costs from subscribers to authors. However, this model has inadvertently led to the proliferation of "predatory" or questionable journals that charge high fees without providing proper peer review or quality checks [2].

The research team, led by Daniel Acuña from the University of Colorado Boulder, trained their AI model using data from the Directory of Open Access Journals (DOAJ). The system analyzes various factors, including:

- whether a journal's editorial board features established researchers affiliated with reputable institutions [1]
- publication turnaround times and rates of self-citation [1]
- the quality of journal websites, including grammatical errors and transparency about licensing and fees [2]
When applied to a dataset of 15,191 open-access journals, the AI tool initially flagged 1,437 as potentially problematic. After accounting for false positives, the researchers estimate that about 1,092 journals are genuinely questionable [3]. These flagged journals have collectively published hundreds of thousands of research papers, receiving millions of citations. This highlights the potential scale of the issue and its impact on scientific integrity [1].
While the AI tool shows promise, it is not without limitations:

- About a quarter of the journals it flags are false positives, including discontinued titles, book series and small society publications [4]
- Automated checks may disadvantage non-English-language journals and undervalue editors from less-well-funded institutions or developing countries [1]
- The system sometimes makes mistakes and is not designed to replace detailed expert evaluations [5]

The researchers emphasize that the AI tool should be used as a first-line screening method, with human experts making the final decisions on journal legitimacy [2].
This AI-powered approach offers a scalable solution to a growing problem in scientific publishing. By helping researchers avoid questionable journals, it aims to protect the integrity of scientific literature and ensure that research builds upon a solid foundation [5].
As Acuña states, "In science, you don't start from scratch. You build on top of the research of others. So if the foundation of that tower crumbles, then the entire thing collapses" [2].

The research team plans to refine the AI tool further and make it available to universities and publishing companies. They envision it as a "firewall for science," helping to safeguard the quality and reliability of scientific publications in the digital age [5].