Curated by THEOUTPOST
On Wed, 4 Sept, 4:08 PM UTC
2 Sources
[1]
Intellectual property and data privacy: the hidden risks of AI
Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world's biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. "Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it," he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft's Bing, Google's Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet -- which probably includes Poisot's work. But because chatbots don't often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI's statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science. "There's an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there's no way to know who did what and where the information is coming from and who should be credited," he says.

Since ChatGPT's arrival in November 2022, it seems that there's no part of the research process that chatbots haven't touched. Generative AI (genAI) tools can now perform literature searches; write manuscripts, grant applications and peer-review comments; and even produce computer code. Yet, because the tools are trained on huge data sets -- that often are not made public -- these digital helpers can also clash with ownership, plagiarism and privacy standards in unexpected ways that cannot be addressed under current legal frameworks. And as genAI, overseen mostly by private companies, increasingly enters the public domain, the onus is often on users to ensure that they are using the tools responsibly.

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box -- a series of algorithms that aren't fully understood, even by their creators -- and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model's answer to a prompt.

Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person's location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both.

Chatbots are powerful in part because they have learnt from nearly all the information on the Internet -- obtained through licensing agreements with publishers such as the Associated Press and social-media platforms including Reddit, or through broad trawls of freely accessible content -- and they excel at identifying patterns in mountains of data.
For example, the GPT-3.5 model, which underlies one version of ChatGPT, was trained on roughly 300 billion words, which it uses to create strings of text on the basis of predictive algorithms.

AI companies are increasingly interested in developing products marketed to academics. Several have released AI-powered search engines. In May, OpenAI announced ChatGPT Edu, a platform that layers extra analytical capabilities onto the company's popular chatbot and includes the ability to build custom versions of ChatGPT.

Two studies this year have found evidence of widespread genAI use to write both published scientific manuscripts and peer-review comments, even as publishers attempt to place guardrails around the use of AI by either banning it or asking writers to disclose whether and when AI is used.

Legal scholars and researchers who spoke to Nature made it clear that, when academics use chatbots in this way, they open themselves up to risks that they might not fully anticipate or understand. "People who are using these models have no idea what they're really capable of, and I wish they'd take protecting themselves and their data more seriously," says Ben Zhao, a computer-security researcher at the University of Chicago in Illinois who develops tools to shield creative work, such as art and photography, from being scraped or mimicked by AI.

When contacted for comment, an OpenAI spokesperson said the company was looking into ways to improve the opt-out process. "As a research company, we believe that AI offers huge benefits for academia and the progress of science," the spokesperson says. "We respect that some content owners, including academics, may not want their publicly available works used to help teach our AI, which is why we offer ways for them to opt out. We're also exploring what other tools may be useful."

In fields such as academia, in which research output is linked to professional success and prestige, losing out on attribution not only denies people compensation, but also perpetuates reputational harm. "Removing people's names from their work can be really damaging, especially for early-career scientists or people working in places in the global south," says Evan Spotte-Smith, a computational chemist at Carnegie Mellon University in Pittsburgh, Pennsylvania, who avoids using AI for ethical and moral reasons.

Research has shown that members of groups that are marginalized in science have their work published and cited less frequently than average, and overall have access to fewer opportunities for advancement. AI stands to further exacerbate these challenges, Spotte-Smith says: failing to attribute someone's work to them "creates a new form of 'digital colonialism', where we're able to get access to what colleagues are producing without needing to actually engage with them".

Academics today have little recourse in directing how their data are used or having them 'unlearnt' by existing AI models. Research is often published open access, and it is more challenging to litigate the misuse of published papers or books than that of a piece of music or a work of art. Zhao says that most opt-out policies "are at best a hope and a dream". Many researchers don't even own the rights to their creative output, having signed them over to institutions or publishers that in turn can enter partnerships with AI companies seeking to use their corpus to train new models and create products that can be marketed back to academics.
Representatives of the publishers Springer Nature, the American Association for the Advancement of Science (which publishes the Science family of journals), PLOS and Elsevier say they have not entered such licensing agreements -- although some, including those for the Science journals, Springer Nature and PLOS, noted that the journals do disclose the use of AI in editing and peer review and to check for plagiarism. (Springer Nature publishes Nature, but the journal is editorially independent from its publisher.)

Other publishers, such as Wiley and Oxford University Press, have brokered deals with AI companies. Taylor & Francis, for example, has a US$10-million agreement with Microsoft. The Cambridge University Press (CUP) has not yet entered any partnerships, but is developing policies that will offer an 'opt-in' agreement to authors, who will receive remuneration. In a statement to The Bookseller magazine discussing future plans for the CUP -- which oversees 45,000 print titles, more than 24,000 e-books and more than 300 research journals -- Mandy Hill, the company's managing director of academic publishing, who is based in Oxford, UK, said that it "will put authors' interests and desires first, before allowing their work to be licensed for GenAI".

Some authors are unsettled by the news that their work will be fed into AI algorithms (see 'How to protect your intellectual property from AI'). "I don't feel confident that I can predict all the ways AI might impact me or my work, and that feels frustrating and a little frightening," says Edward Ballister, a cancer biologist at Columbia University in New York City. "I think institutions and publishers have a responsibility to think about what this all means and to be open and communicative about their plans."

Some evidence suggests that publishers are noting scientists' discomfort and acting accordingly, however. Daniel Weld, chief scientist at the AI search engine Semantic Scholar, based at the University of Washington in Seattle, has noticed that more publishers and individuals are reaching out to retroactively request that papers in the Semantic Scholar corpus not be used to train AI models.

International policy is only now catching up with the burst of AI technology, and clear answers to foundational questions -- such as where AI output falls under existing copyright legislation, who owns that copyright and what AI companies need to consider when they feed data into their models -- are probably years away. "We are now in this period where there are very fast technological developments, but the legislation is lagging," says Christophe Geiger, a legal scholar at Luiss Guido Carli University in Rome. "The challenge is how we establish a legal framework that will not disincentivize progress, but still take care of our human rights."

Even as observers settle in for what could be a long wait, Peter Yu, an intellectual-property lawyer and legal scholar at Texas A&M University School of Law in Fort Worth, says that existing US case law suggests that the courts will be more likely to side with AI companies, in part because the United States often prioritizes the development of new technologies. "That helps push technology to a high level in the US when a lot of other countries are still trying to catch up, but it makes it more challenging for creators to pursue suspected infringement." The European Union, by contrast, has historically favoured personal protections over the development of new technologies.
In May, it approved the world's first comprehensive AI law, the AI Act. This broadly categorizes uses of AI on the basis of their potential risks to people's health, safety or fundamental rights, and mandates corresponding safeguards. Some applications, such as using AI to infer sensitive personal details, will be banned. The law will be rolled out over the next two years, coming into full effect in 2026, and applies to models operating in the EU.

The impact of the AI Act on academia is likely to be minimal, because the policy gives broad exemptions for products used in research and development. But Dragoş Tudorache, a member of the European Parliament and one of the two lead negotiators of the AI Act, hopes the law will have trickle-down effects on transparency. Under the act, AI companies producing "general purpose" models, such as chatbots, will be subject to new requirements, including an accounting of how their models are trained and how much energy they use, and will need to offer opt-out policies and enforce them. Any group that violates the act could be fined as much as 7% of its global annual turnover.

Tudorache sees the act as an acknowledgement of a new reality in which AI is here to stay. "We've had many other industrial revolutions in the history of mankind, and they all profoundly affected different sectors of the economy and society at large, but I think none of them have had the deep transformative effect that I think AI is going to have," he says.
[2]
At least 10% of research may already be co-authored by AI
"Certainly, here is a possible introduction for your topic..." began a recent article in Surfaces and Interfaces, a scientific journal. Attentive readers might have wondered who exactly that bizarre opening line was addressing. They might also have wondered whether the ensuing article, on the topic of battery technology, was written by a human or a machine. It is a question ever more readers of scientific papers are asking. Large language models (LLMs) are now more than good enough to help write a scientific paper. They can breathe life into dense scientific prose and speed up the drafting process, especially for non-native English speakers. Such use also comes with risks: LLMs are particularly susceptible to reproducing biases, for example, and can churn out vast amounts of plausible nonsense. Just how widespread an issue this was, though, has been unclear. In a preprint posted recently on arXiv, researchers based at the University of Tübingen in Germany and Northwestern University in America provide some clarity. Their research, which has not yet been peer-reviewed, suggests that at least one in ten new scientific papers contains material produced by an LLM. That means over 100,000 such papers will be published this year alone. And that is a lower bound. In some fields, such as computer science, over 20% of research abstracts are estimated to contain LLM-generated text. Among papers from Chinese computer scientists, the figure is one in three. Spotting LLM-generated text is not easy. Researchers have typically relied on one of two methods: detection algorithms trained to identify the tell-tale rhythms of human prose, and a more straightforward hunt for suspicious words disproportionately favoured by LLMs, such as "pivotal" or "realm". Both approaches rely on "ground truth" data: one pile of texts written by humans and one written by machines. These are surprisingly hard to collect: both human- and machine-generated text change over time, as languages evolve and models update. Moreover, researchers typically collect LLM text by prompting these models themselves, and the way they do so may be different from how scientists behave. The latest research by Dmitry Kobak, at the University of Tübingen, and his colleagues, shows a third way, bypassing the need for ground-truth data altogether. The team's method is inspired by demographic work on excess deaths, which allows mortality associated with an event to be ascertained by looking at differences between expected and observed death counts. Just as the excess-deaths method looks for abnormal death rates, their excess-vocabulary method looks for abnormal word use. Specifically, the researchers were looking for words that appeared in scientific abstracts with a significantly greater frequency than predicted by that in the existing literature (see chart 1). The corpus which they chose to analyse consisted of the abstracts of virtually all English-language papers available on PubMed, a search engine for biomedical research, published between January 2010 and March 2024, some 14.2m in all. The researchers found that in most years, word usage was relatively stable: in no year from 2013-19 did a word increase in frequency beyond expectation by more than 1%. That changed in 2020, when "SARS", "coronavirus", "pandemic", "disease", "patients" and "severe" all exploded. (Covid-related words continued to merit abnormally high usage until 2022.) 
By early 2024, about a year after LLMs like ChatGPT had become widely available, a different set of words took off. Of the 774 words whose use increased significantly between 2013 and 2024, 329 took off in the first three months of 2024. Fully 280 of these were related to style, rather than subject matter. Notable examples include: "delves", "potential", "intricate", "meticulously", "crucial", "significant", and "insights" (see chart 2).

The most likely reason for such increases, say the researchers, is help from LLMs. When they estimated the share of abstracts which used at least one of the excess words (omitting words which are widely used anyway), they found that at least 10% probably had LLM input. As PubMed indexes about 1.5m papers annually, that would mean that more than 150,000 papers per year are currently written with LLM assistance (a back-of-the-envelope version of this arithmetic appears below).

This seems to be more widespread in some fields than others. The researchers found that computer science had the most use, at over 20%, whereas ecology had the least, with a lower bound below 5%. There was also variation by geography: scientists from Taiwan, South Korea, Indonesia and China were the most frequent users, and those from Britain and New Zealand used them least (see chart 3). (Researchers from other English-speaking countries also deployed LLMs infrequently.) Different journals also yielded different results. Those in the Nature family, as well as other prestigious publications like Science and Cell, appear to have a low LLM-assistance rate (below 10%), while Sensors (a journal about, unimaginatively, sensors) exceeded 24%.

The excess-vocabulary method's results are roughly consistent with those from older detection algorithms, which looked at smaller samples from more limited sources. For instance, in a preprint released in April 2024, a team at Stanford found that 17.5% of sentences in computer-science abstracts were likely to be LLM-generated. They also found a lower prevalence in Nature publications and mathematics papers (LLMs are terrible at maths). The excess vocabulary identified also fits with existing lists of suspicious words.

Such results should not be overly surprising. Researchers routinely acknowledge the use of LLMs to write papers. In one survey of 1,600 researchers conducted in September 2023, over 25% told Nature they used LLMs to write manuscripts. The largest benefit identified by the interviewees, many of whom studied or used AI in their own work, was to help with editing and translation for those who did not have English as their first language. Faster and easier coding came joint second, together with the simplification of administrative tasks; summarising or trawling the scientific literature; and, tellingly, speeding up the writing of research manuscripts.

For all these benefits, using LLMs to write manuscripts is not without risks. Scientific papers rely on the precise communication of uncertainty, for example, which is an area where the capabilities of LLMs remain murky. Hallucination -- whereby LLMs confidently assert fantasies -- remains common, as does a tendency to regurgitate other people's words, verbatim and without attribution. Studies also indicate that LLMs preferentially cite other papers that are highly cited in a field, potentially reinforcing existing biases and limiting creativity. As algorithms, they also cannot be listed as authors on a paper or held accountable for the errors they introduce.
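The scale implied by the headline estimate above is easy to check. The sketch below simply multiplies the two approximate figures quoted in the text (a 10% lower bound and roughly 1.5m PubMed-indexed papers a year); it is illustrative arithmetic, not the preprint's actual calculation, which also corrects for flagged words that are common anyway.

```python
# Back-of-the-envelope version of the estimate quoted above; the inputs are
# the approximate figures given in the text, not the preprint's exact parameters.
excess_share = 0.10          # lower-bound share of abstracts with at least one excess word
papers_per_year = 1_500_000  # PubMed indexes roughly 1.5m papers annually

llm_assisted = excess_share * papers_per_year
print(f"Implied LLM-assisted papers per year: at least {llm_assisted:,.0f}")
# -> Implied LLM-assisted papers per year: at least 150,000
```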
Perhaps most worrying, the speed at which LLMs can churn out prose risks flooding the scientific world with low-quality publications.

Academic policies on LLM use are in flux. Some journals ban it outright. Others have changed their minds. Up until November 2023, Science labelled all LLM text as plagiarism, saying: "Ultimately the product must come from -- and be expressed by -- the wonderful computers in our heads." They have since amended their policy: LLM text is now permitted if detailed notes on how the models were used are provided in the methods section of papers, as well as in accompanying cover letters. Nature and Cell also allow the use of LLMs, as long as it is clearly acknowledged.

How enforceable such policies will be is not clear. For now, no reliable method exists to flush out LLM prose. Even the excess-vocabulary method, though useful at spotting large-scale trends, cannot tell if a specific abstract had LLM input. And researchers need only avoid certain words to evade detection altogether. As the new preprint puts it, these are challenges that must be meticulously delved into.
A significant portion of research papers may already be co-authored by AI, raising questions about authorship, ethics, and the future of scientific publishing.
Recent studies suggest that artificial intelligence (AI) may be playing a larger role in academic research than previously thought. A preprint analysis of millions of PubMed abstracts by researchers at the University of Tübingen and Northwestern University suggests that at least 10% of new scientific papers contain material produced by AI tools [2]. This finding has sparked discussions about the implications for scientific publishing, authorship credit, and research integrity.
A Nature survey of more than 1,600 researchers, conducted in September 2023, found that over a quarter of respondents had used large language models to help write manuscripts [2]. Respondents also reported using the tools for editing and translation, coding, administrative tasks and literature searches, raising questions about the extent of AI's involvement in academic writing.
The increasing use of AI in research has prompted discussions about ethical considerations and the need for transparency. Many researchers and publishers are calling for clear guidelines on disclosing AI involvement in papers. Some journals have already implemented policies requiring authors to declare any use of AI tools in their work [2].
The integration of AI in research writing is expected to have significant implications for scientific publishing. While AI tools can enhance efficiency and assist in various aspects of research, concerns have been raised about potential biases, errors, and the authenticity of AI-generated content. Publishers and academic institutions are grappling with how to adapt their policies and practices to this new reality.
As AI technology continues to advance, its role in research is likely to grow. This trend presents both opportunities and challenges for the scientific community. While AI can potentially accelerate the pace of research and discovery, it also raises questions about the nature of human creativity and originality in academic work. Striking a balance between leveraging AI capabilities and maintaining the integrity of scientific research will be crucial in the coming years.
Universities and research institutions are beginning to recognize the need to adapt to this new era. Some are developing guidelines for the appropriate use of AI in research and teaching students how to effectively and ethically incorporate AI tools into their work [1]. This proactive approach aims to ensure that the benefits of AI in research can be harnessed while mitigating potential risks and ethical concerns.