AI Tool Flags Over 1,000 Suspicious Open-Access Journals, Aiding Scientific Integrity

Reviewed by Nidhi Govil


Researchers have developed an AI tool that identified more than 1,000 potentially problematic open-access journals out of 15,000 analyzed. The tool aims to help combat the rise of questionable journals that undermine scientific credibility.

AI Tool Flags Suspicious Open-Access Journals

Researchers have developed an artificial intelligence (AI) tool capable of identifying potentially problematic open-access journals, addressing a growing concern in the scientific community. The study, published in Science Advances on August 27, 2025, describes how the AI system flagged over 1,000 questionable journals out of approximately 15,000 analyzed [1].

Source: Neuroscience News


The Rise of Questionable Journals

The open-access movement, which began in the 1990s, aimed to make scientific research more accessible by shifting publication costs from subscribers to authors. However, this model has inadvertently led to the proliferation of "predatory" or questionable journals that charge high fees without providing proper peer review or quality checks [2].

AI-Powered Screening Process

The research team, led by Daniel Acuña from the University of Colorado Boulder, trained their AI model using data from the Directory of Open Access Journals (DOAJ). The system analyzes various factors, including:

  1. Editorial board composition
  2. Website quality and grammar
  3. Publication turnaround times
  4. Self-citation rates
  5. Author affiliations
  6. Transparency about licensing and fees [1]
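The study does not reproduce its model here, and the published system is a trained classifier rather than a rule list. Purely as an illustration of how signals like those above could be combined for first-line screening, the sketch below scores a journal against simple red-flag thresholds; every field name and cutoff is a hypothetical assumption, not the authors' method:

```python
from dataclasses import dataclass

@dataclass
class JournalFeatures:
    # Hypothetical stand-ins for the factors listed above.
    editorial_board_size: int    # very small boards can be a warning sign
    site_error_rate: float       # fraction of pages with grammar/spelling errors
    median_days_to_publish: int  # unusually fast turnaround is suspicious
    self_citation_rate: float    # share of citations pointing back to the journal
    fee_disclosed: bool          # transparent licensing/fee information

def suspicion_score(j: JournalFeatures) -> int:
    """Count simple red flags; the real system learns weights from DOAJ data."""
    flags = 0
    flags += j.editorial_board_size < 5
    flags += j.site_error_rate > 0.10
    flags += j.median_days_to_publish < 14
    flags += j.self_citation_rate > 0.30
    flags += not j.fee_disclosed
    return flags

# A journal exceeding some flag threshold would be routed to human review.
example = JournalFeatures(3, 0.2, 7, 0.4, False)
print(suspicion_score(example))  # prints 5
```

In the published tool, the equivalent of these thresholds is learned from journals already vetted by the DOAJ, rather than set by hand.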

Results and Implications

Source: Tech Xplore


When applied to a dataset of 15,191 open-access journals, the AI tool initially flagged 1,437 as potentially problematic. After accounting for false positives, the researchers estimate that about 1,092 journals are genuinely questionable [3].
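The 1,092 estimate is consistent with discounting the raw flag count by the 24% false-positive rate reported later in the article; the exact adjustment the authors used is not spelled out, so this back-of-the-envelope check is an assumption:

```python
flagged = 1437              # journals initially flagged by the tool
false_positive_rate = 0.24  # reported rate of incorrect flags
estimated_genuine = round(flagged * (1 - false_positive_rate))
print(estimated_genuine)    # prints 1092
```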

These flagged journals have collectively published hundreds of thousands of research papers, receiving millions of citations. This highlights the potential scale of the issue and its impact on scientific integrity [1].

Limitations and Future Improvements

While the AI tool shows promise, it is not without limitations:

  1. False positive rate: The system has a 24% false positive rate, meaning it incorrectly flags some legitimate journals [4].
  2. Potential bias: There are concerns that the tool may disadvantage non-English language journals or those from less-funded institutions [1].
  3. Evolving tactics: Questionable publishers may adapt their methods to evade detection [5].

The researchers emphasize that the AI tool should be used as a first-line screening method, with human experts making the final decisions on journal legitimacy [2].

Impact on Scientific Integrity

This AI-powered approach offers a scalable solution to a growing problem in scientific publishing. By helping researchers avoid questionable journals, it aims to protect the integrity of scientific literature and ensure that research builds upon a solid foundation [5].

Source: The Register


As Acuña states, "In science, you don't start from scratch. You build on top of the research of others. So if the foundation of that tower crumbles, then the entire thing collapses" [2].

Future Developments

The research team plans to refine the AI tool further and make it available to universities and publishing companies. They envision it as a "firewall for science," helping to safeguard the quality and reliability of scientific publications in the digital age [5].
