2 Sources
[1]
AI in universities: How large language models are transforming research
Generative AI, especially large language models (LLMs), presents exciting and unprecedented opportunities and complex challenges for academic research and scholarship. As the different versions of LLMs (such as ChatGPT, Gemini, Claude, Perplexity.ai and Grok) continue to proliferate, academic research is beginning to undergo a significant transformation. Students, researchers and instructors in higher education need AI literacy knowledge, competencies and skills to address these challenges and risks. In a time of rapid change, students and academics are advised to look to their institutions, programs and units for discipline-specific policy or guidelines regulating the use of AI.

Researcher use of AI

A recent study led by a data science researcher found that at least 13.5% of biomedical abstracts last year showed signs of AI-generated text. Large language models can now support nearly every stage of the research process, although caution and human oversight are always needed to judge when use is appropriate, ethical or warranted -- and to account for questions of quality control and accuracy. LLMs can:

* Help brainstorm, generate and refine research ideas and formulate hypotheses;
* Design experiments and conduct and synthesize literature reviews;
* Write and debug code;
* Analyze and visualize both qualitative and quantitative data;
* Develop interdisciplinary theoretical and methodological frameworks;
* Suggest relevant sources and citations, summarize complex texts and draft abstracts;
* Support the dissemination and presentation of research findings in popular formats.

However, there are significant concerns and challenges surrounding the appropriate, ethical, responsible and effective use of generative AI tools in the conduct of research, writing and research dissemination. These include:

* Misrepresentation of data and authorship;
* Difficulty in replicating research results;
* Data and algorithmic biases and inaccuracies;
* User and data privacy and confidentiality;
* Quality of outputs, and fabrication of data and citations;
* And copyright and intellectual property infringement.

AI research assistants, 'deep research' AI agents

There are two categories of emerging LLM-enhanced tools that support academic research:

1. AI research assistants: The number of AI research assistants that support different aspects and steps of the research process is growing at an exponential rate. These technologies have the potential to enhance and extend traditional research methods in academic work. Examples include AI assistants that support:

* Concept mapping (Kumu, GitMind, MindMeister);
* Literature and systematic reviews (Elicit, Undermind, NotebookLM, SciSpace);
* Literature search (Consensus, ResearchRabbit, Connected Papers, Scite);
* Literature analysis and summarization (Scholarcy, Paper Digest, Keenious);
* And research topic and trend detection and analysis (Scinapse, tlooto, Dimension AI).

2. 'Deep research' AI agents: The field of artificial intelligence is advancing quickly with the rise of "deep research" AI agents. These next-generation agents combine LLMs, retrieval-augmented generation and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses.
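The article does not specify how these agents are implemented. Purely as an illustration, the minimal sketch below shows the general plan-retrieve-synthesize loop such an agent might run; the callables `call_llm` and `search_web` are assumed stand-ins for an LLM API and a retrieval service, not real library functions.

```python
# Hypothetical sketch of a "deep research" agent: plan a query, retrieve
# evidence, reason over what was found, and synthesize a cited report.
# `call_llm` and `search_web` are user-supplied wrappers (assumptions),
# not real library calls.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Evidence:
    source: str  # URL or citation string
    text: str    # retrieved passage


@dataclass
class DeepResearchAgent:
    call_llm: Callable[[str], str]               # prompt -> completion
    search_web: Callable[[str], list[Evidence]]  # query -> retrieved passages
    max_steps: int = 5
    evidence: list[Evidence] = field(default_factory=list)

    def run(self, question: str) -> str:
        for _ in range(self.max_steps):
            # 1. Ask the LLM what to look up next, given the evidence so far.
            notes = "\n".join(f"[{i}] {e.source}: {e.text[:200]}"
                              for i, e in enumerate(self.evidence))
            query = self.call_llm(
                f"Research question: {question}\n"
                f"Evidence so far:\n{notes}\n"
                "Reply with ONE new search query, or DONE if coverage is sufficient."
            ).strip()
            if query.upper() == "DONE":
                break
            # 2. Retrieve and accumulate passages (the retrieval-augmented step).
            self.evidence.extend(self.search_web(query))
        # 3. Synthesize a detailed report that cites the numbered evidence items.
        notes = "\n".join(f"[{i}] {e.source}: {e.text}"
                          for i, e in enumerate(self.evidence))
        return self.call_llm(
            f"Write a detailed, citation-backed report answering: {question}\n"
            f"Cite evidence by its [number]. Evidence:\n{notes}"
        )
```

Commercial deep research products layer source ranking, deduplication, tool use and safety checks on top of this kind of loop; the sketch shows only the core pattern.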
Research is currently being conducted to evaluate the quality and effectiveness of deep research tools, and new evaluation criteria are being developed to assess their performance and quality. Criteria include elements such as cost, speed, editing ease and overall user experience -- as well as citation and writing quality, and how well these deep research tools adhere to prompts.

The purpose of deep research tools is to meticulously extract, analyze and synthesize scholarly information, empirical data and diverse perspectives from a wide array of online and social media sources. The output is a detailed report, complete with citations, offering in-depth insights into complex topics. In the short span of four months (December 2024 to February 2025), several companies introduced "deep research" platforms in products such as Google Gemini, Perplexity.ai and ChatGPT. The Allen Institute for Artificial Intelligence, a non-profit AI research institute based in Seattle, is experimenting with a new open-access research tool called Ai2 ScholarQA that helps researchers conduct literature reviews more efficiently by providing more in-depth answers.

Emerging guidelines

Several guidelines have been developed to encourage the responsible and ethical use of generative AI in research and writing. Examples include:

* The Government of Canada Guide on the Use of Generative Artificial Intelligence. This counsels federal institutions and academics to explore potential uses of generative AI tools and to follow a recommended framework for decision-making about them, including responsible communication and transparency.
* Guidance from the publicly funded federal agencies -- collectively known as the Tri-Council -- that offer research grants and programs covering different research disciplines.
* The Observatory in AI Policies in Canadian Post-Secondary Education, run by the firm Higher Education Strategy Associates, which lists AI policies and guidelines developed by more than 30 Canadian higher education institutions.

LLMs support interdisciplinary research

LLMs are also powerful tools to support interdisciplinary research. Recent emerging research (yet to be peer reviewed) on the effectiveness of LLMs for research suggests they have great potential in areas such as the biological sciences, chemical sciences, engineering, environmental sciences and social sciences. It also suggests LLMs can help eliminate disciplinary silos by bringing together data and methods from different fields and by automating data collection and generation to create interdisciplinary datasets. Their ability to analyze and summarize large volumes of research across various disciplines can aid interdisciplinary collaboration. "Expert finder" AI-powered platforms can analyze researcher profiles and publication networks to map expertise, identify potential collaborators across fields and reveal unexpected interdisciplinary connections. This emerging knowledge suggests these models will be able to help researchers drive breakthroughs by combining insights from diverse fields -- like epidemiology and physics, climate science and economics, or social science and climate data -- to address complex problems.
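The article does not describe how "expert finder" platforms work internally. As an illustration only, the sketch below uses the networkx graph library and invented researcher data to show one simple way a co-authorship network could surface interdisciplinary collaborator suggestions; the example data and scoring heuristic are assumptions for demonstration.

```python
# Hypothetical sketch: mapping expertise and suggesting cross-field
# collaborators from a co-authorship network. The researchers, fields and
# edges are invented example data; real platforms mine bibliographic
# databases at much larger scale.
import networkx as nx

# Nodes are researchers annotated with a home discipline;
# edges are co-authorships (weight = number of joint papers).
G = nx.Graph()
researchers = {
    "Alice": "epidemiology", "Bob": "physics", "Carol": "economics",
    "Dave": "climate science", "Eve": "social science",
}
for name, discipline in researchers.items():
    G.add_node(name, field=discipline)
G.add_edge("Alice", "Bob", weight=3)
G.add_edge("Bob", "Dave", weight=1)
G.add_edge("Carol", "Dave", weight=2)
G.add_edge("Dave", "Eve", weight=1)


def suggest_collaborators(graph: nx.Graph, researcher: str, top_n: int = 3):
    """Rank unconnected researchers from *other* fields by shared co-authors."""
    own_field = graph.nodes[researcher]["field"]
    neighbors = set(graph.neighbors(researcher))
    scores = {}
    for candidate in graph.nodes:
        if candidate == researcher or candidate in neighbors:
            continue
        if graph.nodes[candidate]["field"] == own_field:
            continue  # keep only interdisciplinary suggestions
        shared = neighbors & set(graph.neighbors(candidate))
        if shared:
            scores[candidate] = len(shared)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


print(suggest_collaborators(G, "Alice"))  # e.g. [('Dave', 1)]
```

Production systems would add richer signals (topics, citations, text embeddings), but the underlying graph-analysis idea is the same.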
Research-focused AI literacy

Canadian universities and research partnerships are providing AI literacy education to people in universities and beyond. The Alberta Machine Intelligence Institute offers K-12 AI literacy programming and other resources; the institute is a not-for-profit organization and part of Canada's Pan-Canadian Artificial Intelligence Strategy. Many universities are offering AI literacy educational opportunities that focus specifically on the use of generative AI tools in assisting research activities.

Collaborative university work is also happening. For example, as vice-dean of the Faculty of Graduate & Postdoctoral Studies at the University of Alberta (and an information science professor), I have worked with deans from the University of Manitoba, the University of Winnipeg and Vancouver Island University to develop guidelines and recommendations around generative AI and graduate and postdoctoral research and supervision.

Considering the growing power and capabilities of large language models, there is an urgent need to develop AI literacy training tailored for academic researchers. This training should focus on both the potential and the limitations of these tools at the different stages of the research process and writing.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
[2]
How are LLMs transforming university-level research?
According to the University of Alberta's Ali Shiri, large language models and GenAI are having a significant effect on academic research. A version of this article was originally published by The Conversation (CC BY-ND 4.0).

By Ali Shiri. Ali Shiri is a professor of information science and the vice-dean of the Faculty of Graduate and Postdoctoral Studies at the University of Alberta. He received a PhD in information science from the University of Strathclyde Department of Computer and Information Sciences in Glasgow, Scotland, and teaches, researches and writes about digital information interaction and retrieval, digital libraries, data, learning analytics, artificial intelligence and ethics.
Large language models are transforming university-level research, offering new opportunities and challenges across various disciplines. This article explores the impact of AI on academic processes, emerging tools, and ethical considerations.
Large Language Models (LLMs) are rapidly transforming the landscape of academic research, offering unprecedented opportunities and challenges for scholars across disciplines. As AI technologies like ChatGPT, Gemini, and Claude continue to evolve, universities are witnessing a significant shift in how research is conducted and disseminated [1][2].
The integration of AI in academic processes has led to the development of two key categories of tools:
AI Research Assistants: These tools support various aspects of the research process, including concept mapping, literature search, literature and systematic reviews, literature analysis and summarization, and research topic and trend detection [1].
'Deep Research' AI Agents: These advanced tools combine LLMs with retrieval-augmented generation and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses. Companies like Google, Perplexity, and OpenAI have introduced deep research platforms in recent months [1].
Source: Phys.org
LLMs are now capable of supporting nearly every stage of the research process, from brainstorming ideas to disseminating findings. Key capabilities include generating and refining research ideas and hypotheses, designing experiments, conducting and synthesizing literature reviews, writing and debugging code, analyzing and visualizing qualitative and quantitative data, and summarizing complex texts and drafting abstracts [1].
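As a concrete illustration of the summarization and brainstorming capabilities listed above -- assuming access to the openai Python client, with a placeholder model name and an illustrative prompt -- a researcher might script that step roughly as follows:

```python
# Hypothetical sketch: using an LLM to summarize a paper abstract and
# suggest follow-up research questions. Requires the `openai` package and
# an API key in the OPENAI_API_KEY environment variable; the model name
# below is a placeholder and may need to be changed.
from openai import OpenAI

client = OpenAI()

abstract = """<paste the abstract of a paper under review here>"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a careful research assistant. Do not fabricate citations."},
        {"role": "user",
         "content": f"Summarize this abstract in three sentences, then propose "
                    f"two testable follow-up hypotheses:\n\n{abstract}"},
    ],
)
print(response.choices[0].message.content)
# Outputs require human verification: LLMs can misstate findings or invent details.
```

Any output from a script like this needs human checking before it informs actual research decisions.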
A recent study revealed that at least 13.5% of biomedical abstracts in the past year showed signs of AI-generated text, highlighting the growing influence of these technologies [1].
While the potential benefits are significant, the use of AI in academic research also presents several challenges, including misrepresentation of data and authorship, difficulty replicating results, data and algorithmic biases and inaccuracies, privacy and confidentiality risks, fabrication of data and citations, and copyright and intellectual property infringement [1].
Source: Silicon Republic
To address these challenges, various organizations and institutions are developing guidelines for the responsible use of AI in research, including the Government of Canada's Guide on the Use of Generative Artificial Intelligence, guidance from the Tri-Council federal funding agencies, and the Observatory in AI Policies in Canadian Post-Secondary Education [1][2].
LLMs show great potential in supporting interdisciplinary research, particularly in fields such as biological sciences, chemical sciences, engineering, environmental sciences, and social sciences. These tools can help break down disciplinary silos by integrating data and methods from various fields [1].
As the field continues to evolve, researchers are developing new evaluation criteria to assess the performance and quality of deep research tools. Factors such as cost, speed, editing ease, and adherence to prompts are being considered alongside citation and writing quality [1].
The rapid advancement of AI in academic research underscores the need for students, researchers, and instructors to develop AI literacy and skills to navigate this changing landscape effectively. As universities adapt to these technologies, the future of academic research promises to be more efficient, interdisciplinary, and data-driven, while also demanding increased attention to ethical considerations and quality control.
Summarized by Navi