2 Sources
[1]
My AI chatbot thinks my idea is fundable
I'm sitting in my office, coffee in hand, talking to ChatGPT. I've carved out a rare hour to think through a new research proposal -- a luxury amid the demands of teaching, service and parenting as a tenure-track assistant professor developing methods to deliver nucleic acids for gene therapy.

I've always loved writing grants. It's a skill I developed as a graduate student and postdoctoral researcher, and one I still find deeply rewarding: shaping a question, crafting a narrative, imagining the possibilities. It's one of the most creatively satisfying parts of my work, and I am good at it. I wrote a few successful proposals before artificial intelligence (AI) entered the picture.

I've learnt over the years that one of the most efficient ways for me to clarify scientific ideas is to discuss them with someone. In the past, that was usually a fellow postdoc or grad student. But these days, my verbal sparring partner is often a computer.

Using voice dictation, I start with some background: "I've been thinking about X and Y, and how they connect to Z. I wonder if there's something novel here, something that hasn't been done before." And then I ask: "Do you think this is a fundable idea?"

The chatbot replies with its usual enthusiasm: "You're really onto something powerful here. Your instinct is dead on." It reflects on my idea, identifies promising themes, breaks them down and suggests directions and framing strategies I hadn't considered.

We go back and forth. I raise concerns -- technical limitations, feasibility, scope -- and it responds thoughtfully, sometimes agreeing, sometimes offering counterpoints. But the real value comes only when I press further. Initially, the chatbot is unfailingly positive; it will encourage almost anything. To get useful feedback, I have to interrogate the idea: what's missing, what might reviewers say, where's the fatal flaw? It's a dialogue, not a verdict. I have to stay engaged and ask the right questions. When I do, the chatbot surprises me -- it readily acknowledges the weaknesses I identify, provides reasons the idea might fail and then pivots towards solutions and refinements I hadn't considered.

By the end of our half-hour conversation, I've clarified my thinking. I'm more motivated. Most importantly, I feel excited to start writing.

That emotional impact is something I didn't anticipate when I started using chatbots. Talking science with an AI feels oddly supportive. It's efficient, but also energizing. As a young parent and early-career researcher, I often find myself short on time and mental bandwidth. AI doesn't solve that, but it lowers the barrier to getting started. If I don't know something, I can ask. If I need help articulating a method or identifying a theoretical gap, it offers a starting point. Of course, I always double-check the details -- nothing goes into a proposal or paper without being verified using primary sources, because chatbots can confidently generate plausible-sounding yet inaccurate scientific statements.

What's striking is how natural the back-and-forth feels. It's like doing improvisational comedy with the world's most supportive partner -- always ready with a 'yes, and ...'. A partner who also happens to be an amorphous, seemingly all-knowing generalist with a surprising degree of specialist knowledge. It can pull together context across disciplines, synthesize literature and help me to connect my work to areas I know less well. That kind of breadth is invaluable.
But if you're a specialist, you'll quickly notice the cracks. Chatbots can mislead on technical nuances, and they're best at reiterating what's already been published. That's why I find them most powerful as big-picture ideation tools -- they let me explore ideas freely, without judgement, and help me to quickly uncover what's already known. This kind of fast, exploratory dialogue is quite different from the results of tools such as ChatGPT's Deep Research mode, launched this year by OpenAI in San Francisco, California, which can prepare detailed reports on specific topics on its own. What I'm describing is much more immediate -- a conversational exchange that helps me to clarify and refine ideas as I think them through.

Over time, I've learnt a few ways to make these conversations more productive.

Start with a specific prompt, then expand. I begin with a concrete question -- concerning a technique, problem or recent paper -- and then ask: could this be used differently? What else might this apply to? This invites unexpected angles and broadens the conversation.

Be vigilant about accuracy. I read papers while chatting, grounding ideas in the literature. Chatbots can fabricate references or get details subtly wrong, so I always verify claims and citations using peer-reviewed sources.

Ask critically, not passively. I stay engaged by constantly questioning the chatbot's output. When it says something, I often counter: isn't this wrong? Wouldn't it actually work like this? Usually, it agrees -- and then expands helpfully on the correction. The real value is in how it builds from your thinking, adding context and detail that sharpen the idea.

These experiences also raise questions I'm still untangling. First: what's the right level of caution when using AI tools to develop early-stage scientific ideas? I've opted out of sharing data with the chatbot, which means my input isn't stored or used to improve the model, and that feels like a reasonable baseline for me. For those who prefer even more control, it's possible to run smaller models entirely offline, on a local machine. That keeps everything on your own system, disconnected from the Internet.

Second: what do we lose when we stop talking to each other? I still prefer to discuss science with humans. The best insights often come from the scepticism in a colleague's raised eyebrow, or a hallway chat that veers into unexpected territory -- someone saying, "I tried something similar once. It didn't work, but here's what I learnt." Those quiet, unpublished failures are often the most illuminating, and they rarely show up in the literature.

And then there's something more personal. I've read about people who develop close relationships with AI chatbots. That used to sound absurd -- until I realized how much I enjoy these conversations. As a postdoc in a busy laboratory, I was constantly bouncing ideas off colleagues. Now, in a quieter office, I find myself turning to AI instead. It listens. It builds. It doesn't shoot down my ideas. It's relentlessly constructive. There is something deeply gratifying about that.

I started asking around. "Do you ever talk science with AI?" Some colleagues looked puzzled. Others laughed and said: "All the time." It seems I'm not alone. More and more of us are quietly using AI as a sounding board -- a way to shape rough ideas before they're ready for formal discussion.

I don't know if that's a good thing or not. But I do know this: when I walk away from one of these conversations, I feel a little more confident.
A little more motivated. A little more ready to write.
[2]
I'm a college writing professor. Here's how I think students should use AI this fall
People reach for all kinds of metaphors to describe their relationship to AI. For some, AI is like a mostly reliable intern. For others, it's a virtual assistant. Increasingly, chatbots like ChatGPT are moving into the role of companion, therapist, even romantic partner. As a college writing professor, I've come to think of AI as a collaborator: an archive of knowledge that talks back. But as a sober alcoholic myself, I also can't help but imagine it as a high-functioning drunk: It can sometimes sound brilliant even when it has no idea what it's talking about.

I can tell you stories about the ways AI has come through when I needed it, saving me hours of time by doing mundane tasks, proofreading my writing, or conversing about my latest research obsessions. But then there are those other times when it lies with a cheery tone, when it seems to not understand a word I'm saying but just keeps talking rather than admit it's wrong or that it doesn't have an answer. Like a few weeks ago, when I asked ChatGPT to turn my written remarks for an academic conference into a slide deck. My talk was about literary journalism, and it proudly offered me a presentation about luxury travel in Brazil.

Off-the-rails incidents like that give me plenty of cautionary tales to share with my students. But even though I think AI undercuts some of the most important human reasons to write, not all kinds of writing are the same. To write, we often have to research first, and after we've written a draft we need critical feedback. Instead of taking a reactionary approach to AI, I want to explore with my students how it can be a useful collaborator in that process.

So much of college writing is based on research and reading, a process that trains the mind to organize information and think logically. But using new technologies for that process doesn't mean we're not still doing critical mental work. Just in my lifetime, those technologies have changed radically: We've gone from library card catalogs and microfiche to online databases like JSTOR and Google Scholar. Those tools don't require any less thinking -- they just speed up some of the brainstorming and information gathering, and they expand the amount of knowledge we're able to consider.

Because I had witnessed this rapid digitization of research and writing tools even before AI, I'm more inclined to imagine ways AI can be a collaborative research partner. In my field, for example, literary scholars spend hours combing through primary sources in libraries and archives. Digitization has already made these easier to access, and AI may make them easier to analyze. Lately, I've realized we could think of talking to an AI chatbot not like browsing an archive, but like conversing with one. Before we dive into more intensive work, we can have a research-orienting chat with a "mind" that at least has a general idea of what's out there.

A few weeks ago, I used my limited access to ChatGPT's advanced voice function to ask if it thought that this idea of chatting with the archive was a reasonable way to understand what is happening when I converse with AI. It answered, "When you're talking to an AI like me, you're accessing a vast amount of information and patterns derived from human knowledge up to a certain point." It also hedged a bit: "It's important to remember that while I can provide information and insights based on that knowledge, I don't possess human experiences or consciousness. So, while it might feel like conversing with a vast reservoir of knowledge, it's always good to consider the human perspective and context as well." Still, as our conversation went on and my questions got more pointed, I could ask it to provide references and places I could go to do further reading.

Since that first tentative exchange, this kind of pre-writing conversation with AI has become part of my workflow. I've always found it easier to work out my ideas through dialogue, but not many people are interested in hearing my half-baked ideas. That is why I've found that talking through ideas is one of the best uses of AI for writers.

While talking with AI has proven helpful for idea generation -- and the fact that it keeps a transcript makes it easier to refer to later -- there are a growing number of AI-based tools designed to help with the more intensive phases of research. At the end of the fall semester last year, a student sent me an email asking if I'd heard of Google's NotebookLM. I hadn't, but when I opened the link, I got the concept almost immediately. NotebookLM takes the idea of talking to the archive to the next level: The archive you chat with is one you assemble yourself with sources for a particular project, which the AI can also help you collect to get started.

Preparing for my recent conference talk, I dumped 25 PDFs that I had assembled and stored in Zotero, my favorite citation manager, into NotebookLM's interface. It quickly "read" them and provided a summary that began, "These sources discuss ordinary language philosophy, primarily focusing on the work of Wittgenstein, Austin, and Cavell, and its relationship to other philosophical and literary movements like pragmatism, transcendentalism, and deconstruction." Below the summary is a text entry field that encourages me to "Start typing..." and provides some suggested prompts like, "How does ordinary language philosophy challenge traditional philosophical approaches to meaning?"

On the right side of the page, in an area designated "Studio," I'm invited to create an audio overview, which takes the form of a podcast, complete with two voices -- one male, one female -- bantering about my chosen topic. If I use Interactive Mode, I get treated like a caller on an old late-night radio show: I get compliments for my great questions, and responses based on the documents I provided. The podcast part isn't great yet; it's creepily pandering, but I can envision it getting better and becoming more useful. NotebookLM has other helpful features: It can create a "Mind Map," study guide, briefing doc, FAQ, and timeline. I'll continue to use it and suggest students do so as well.

One more way that AI could prove helpful to student writers is in its ability to provide instant feedback on their writing. When I asked ChatGPT about this concept, it encouraged me to "think of AI like a writing tutor that's available 24/7" with the caveat that it "lacks the personal touch and nuanced understanding of individual students that a human tutor provides." I pasted in the full text of one of my previous Mashable stories and asked for suggestions. It seemed well-versed in what we refer to in classroom peer workshop sessions as the "compliment sandwich": criticism folded in between two compliments. It told me, "This is a compelling, eloquently written piece...your voice is authentic and reflective," before offering "some suggestions to elevate the piece further."
Again, it began with "Strengths to Keep," followed by "Suggestions for Improvement," including "tighten the opening," "strengthen transitions," and "consider a stronger conclusion." It also had a few "minor style edits" to suggest. Finally, it provided an overall rating: 9/10.

Maybe it was all the compliments, but I got greedy. I pasted in another essay (9.5/10) and then the conference talk I was working on. The overall impression started off great: "Your paper presents a compelling argument for the value of literary journalism that focuses on the 'ordinary' and 'quotidian.'" That's true, though I never used the word "quotidian." But then -- I should have expected it -- the feedback went off the rails: "The references to foundational figures (e.g., Bateson, Becker, Carey, Geertz, Tuchman) and contemporary examples (e.g., Kiese Laymon, Eliza Griswold, E. Tammy Kim) help situate your argument within a well-informed scholarly framework." I don't reference any of those figures as foundational or otherwise.

I called its attention to this, and it said I was "absolutely right" and thanked me for pointing it out. Its explanation, however, was still baffling: "I mistakenly based part of my response on assumptions or cached ideas from other academic discussions of literary journalism, not your specific paper." I study literary journalism; the names ChatGPT dropped belong to writers, but they are not scholars in my field. Still, after I corrected it, we got back on track and it provided feedback, again utilizing the compliment sandwich.

I'm not sure what to make of the fact that ChatGPT handled my more journalistic writing much better than my academic writing, except that it provides yet another opportunity for me to urge caution when helping students think through appropriate uses of AI to complement -- rather than replace -- the writing process.

Ultimately, I love the notion of AI as conversant, albeit one that occasionally overindulges, leading it to overly flatter and outright lie. I'm all for the idea that, in talking with a chatbot, writers can approximate something like talking to a whole host of human knowledge, especially with a tool like NotebookLM that lets writers "teach" the AI about a topic before discussing it. AI as a collaborator appeals to me, even if I have to approach it with a healthy sense of skepticism, always prepared for the next time it will let me down.
Professors and students are exploring the use of AI in academic research and writing, finding both benefits and challenges in this new collaborative approach.
As artificial intelligence rapidly transforms sector after sector, academia is no exception. Professors and students are increasingly exploring AI as a collaborative tool in research and writing. This shift is reshaping traditional academic workflows and raising important questions about the future of education 1.
One professor describes her experience using AI chatbots like ChatGPT as a verbal sparring partner for clarifying scientific ideas. This AI-assisted brainstorming process has proven to be efficient and energizing, especially for early-career researchers juggling multiple responsibilities. The professor notes, "It's like doing improvisational comedy with the world's most supportive partner -- always ready with a 'yes, and ...'" 1.
While AI tools offer breadth and quick synthesis of information across disciplines, they also have limitations. Specialists quickly notice inaccuracies in technical details, and AI tends to reiterate published information rather than generate novel insights. Therefore, these tools are most effective for big-picture ideation and exploring existing knowledge 1.
To maximize the benefits of AI collaboration, researchers recommend:
- Starting with a specific prompt, then expanding the conversation to invite unexpected angles
- Staying vigilant about accuracy, verifying claims and citations against peer-reviewed sources, since chatbots can fabricate references
- Asking critically rather than passively, constantly questioning the chatbot's output 1
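The second recommendation can be partly automated. As a minimal sketch (not drawn from either article): a chatbot-supplied reference can be checked against Crossref's public REST API before it is trusted. The Python below is illustrative, and the "claimed" citation is a made-up example.

```python
# Minimal sketch: look up a chatbot-supplied citation in Crossref's
# public REST API (api.crossref.org) to see whether a matching
# record actually exists before trusting it.
import json
import urllib.parse
import urllib.request

def crossref_matches(citation, rows=3):
    """Return (DOI, title) pairs for the closest bibliographic matches."""
    url = ("https://api.crossref.org/works?rows={}&query.bibliographic={}"
           .format(rows, urllib.parse.quote(citation)))
    with urllib.request.urlopen(url) as resp:
        items = json.load(resp)["message"]["items"]
    return [(item.get("DOI", "<no DOI>"),
             (item.get("title") or ["<no title>"])[0])
            for item in items]

# Hypothetical reference a chatbot might have produced:
claimed = "Smith et al. (2021) Lipid nanoparticles for nucleic acid delivery"
for doi, title in crossref_matches(claimed):
    print(doi, "-", title)
```

If no plausible match comes back, that is a strong signal the reference was fabricated; if one does, the DOI still needs to be read and confirmed by a human, exactly as both authors advise.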
Beyond chatbots, new AI-powered tools are emerging to assist in various stages of research. Google's NotebookLM, for instance, allows researchers to create a personalized "archive" of sources that the AI can analyze and discuss. This tool takes the concept of "talking to the archive" to a new level, offering summaries, suggested prompts, and even audio overviews of research topics 2.
As AI integration in academia grows, important questions arise about data privacy, the potential loss of human-to-human interactions, and the need for critical thinking. Educators emphasize the importance of fact-checking AI-generated information and using primary sources for verification. They also encourage students to view AI as a collaborative tool rather than a replacement for human thought and creativity 1 2.
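On the data-privacy point, the Nature author notes that smaller models can run entirely offline on a local machine. As a minimal sketch of what that looks like in practice -- assuming the open-source Ollama runtime and its Python client, neither of which the article names, with a model already pulled locally (for example via "ollama pull llama3"):

```python
# Minimal sketch: a fully offline chat with a locally hosted model
# via Ollama. Assumes the Ollama runtime is running on this machine
# and the model has been pulled beforehand; the model name and
# prompt are illustrative.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "user",
         "content": "I've been thinking about X and Y, and how they "
                    "connect to Z. Is there a fundable idea here?"},
    ],
)
print(response["message"]["content"])
```

Nothing in such an exchange leaves the local machine, which is the property the author is after: early-stage ideas are never stored on, or used to train, a third-party service.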
While AI is proving to be a valuable collaborator in academic research and writing, it's clear that human oversight and critical engagement remain crucial. As one professor puts it, AI can be like "a high-functioning drunk: It can sometimes sound brilliant even when it has no idea what it's talking about." This metaphor underscores the need for caution and discernment when leveraging AI in academic settings 2.
As AI continues to evolve, its role in academia is likely to expand, potentially revolutionizing how research is conducted and how students learn. However, the core values of critical thinking, originality, and human creativity will remain central to the academic enterprise.