Curated by THEOUTPOST
On Fri, 14 Mar, 4:01 PM UTC
2 Sources
[1]
5 Ways to Stay Smart When Using Gen AI, Explained by Computer Science Professors
There's an old saying in the journalism business: If your mother tells you she loves you, check it out. The point is that you need to be skeptical even of your most trusted sources.

But what if, instead of your mother, it's a generative AI model like OpenAI's ChatGPT telling you something? Should you trust the computer? The takeaway from a talk given by a pair of Carnegie Mellon University computer scientists at South by Southwest this week? No. Check it out.

This week, the Austin, Texas, conference has spotlighted artificial intelligence. Experts discussed the future and the big picture, with talks on trust, the changing workplace and more. Sherry Wu and Maarten Sap, assistant professors at Carnegie Mellon University's School of Computer Science, focused more on the here and now, with tips on how best to use, and not misuse, the most common generative AI tools out there, like chatbots trained on large language models. "They're actually far from perfect and not actually suited for all the use cases that people want to use them for," Sap said. Here are five bits of advice on how to be smarter than the AI.

Anyone who's had a joke fall flat on a social media site like Twitter or Bluesky will tell you how hard it is to convey sarcasm in text. And the posters on those sites (at least the human ones) know the social cues that indicate when you're not being literal. An LLM doesn't. Today's LLMs take non-literal statements literally more than half of the time, Sap said, and they struggle with social reasoning. The solution, Wu said, is to be more specific and structured with your prompts. Make sure the model knows what you're asking it to produce.
Focus on what exactly you want, and don't assume the LLM will extrapolate your actual question.

Perhaps the biggest issue with generative AI tools is that they hallucinate, meaning they make stuff up. Sap said hallucinations can happen up to a quarter of the time, with higher rates in more specialized areas like law and medicine. The problem goes beyond just getting things wrong: Sap said chatbots can appear confident in an answer while being completely wrong. "This leaves humans vulnerable to relying on these expressions of certainty when the model is incorrect," he said.

The solution to this is simple: Check the LLM's answers. You can check its consistency with itself, Wu said, by asking the same question several times, or variations on the same question, and comparing the outputs. "Sometimes you will see that the model doesn't really know what it is saying," she said. The most important thing is to verify with external sources. That also means you should be careful about asking questions to which you don't know the answer. Generative AI's answers are most useful when they're on a subject you're familiar with, Wu said, so you can tell what is real and what isn't. "Make conscious decisions about when to rely on a model and when not to," she said. "Do not trust a model when it tells you it is very confident."

The privacy concerns with LLMs are abundant. They go beyond giving information you wouldn't want to see on the internet to a machine that might regurgitate it to anyone who asks nicely. Sap said a demonstration with OpenAI's ChatGPT showed that, when asked to help organize a surprise party, it told the person who was supposed to be surprised about the party. "LLMs are not good at reasoning who should know what and when and what information should be private," he said. Don't share sensitive or personal data with an LLM, Wu said.
"Whenever you share anything produced by you to the model, always double-check if there's anything in that that you don't want to release to the LLM," she said.

Chatbots have caught on partly because of how well they mimic human speech. But it's all mimicry; it's not truly human, Sap said. Models say things like "I wonder" and "I imagine" because they're trained on language that includes those words, not because they have an imagination. "The way that we use language, these words all imply cognition," Sap said. "It implies that the language model imagines things, that it has an internal world."

Thinking of AI models as human can be dangerous; it can lead to misplaced trust. LLMs don't operate the same way humans do, and treating them as if they're human can reinforce social stereotypes, Sap said. "Humans are much more likely to over-attribute human-likeness or consciousness to AI systems," he said.

Despite claims about LLMs being capable of advanced research and reasoning, they just don't work that well yet, Sap said. Benchmarks suggesting a model can perform at the level of a human with a Ph.D. are just benchmarks, and the tests behind those analyses don't mean a model can work at that level for what you want to use it for. "There's this illusion of the robustness of AI capabilities going around that leads people to make rash decisions in their businesses," he said.

When deciding whether to use a generative AI model for a task, weigh the benefits and potential harms of using it against the benefits and potential harms of not using it, Wu said.
[2]
5 quick ways to tweak your AI use for better results - and a safer experience
These quick AI tips presented at SXSW can help you use the technology more effectively and safely.

It's increasingly difficult to avoid artificial intelligence (AI) as it becomes more commonplace. A prime example is Google searches showcasing AI responses. AI safety is more important than ever in this age of technological ubiquity. So as an AI user, how can you safely use generative AI (gen AI)?

Carnegie Mellon School of Computer Science assistant professors Maarten Sap and Sherry Tongshuang Wu took to the SXSW stage to inform people about the shortcomings of large language models (LLMs), the type of machine learning model behind popular generative AI tools such as ChatGPT, and how people can use these technologies more effectively. "They are great, and they are everywhere, but they are actually far from perfect," said Sap.

The tweaks you can implement into your everyday interactions with AI are simple. They will protect you from AI's shortcomings and help you get more out of AI chatbots, including more accurate responses. Keep reading to learn about the five things you can do to optimize your AI use, according to the experts.

Because of AI's conversational capabilities, people often use underspecified, shorter prompts, as if chatting with a friend. The problem is that, given underspecified instructions, AI systems may infer the meaning of your text prompt incorrectly, as they lack the human skills that would allow them to read between the lines. To illustrate this issue, in their session, Sap and Wu told a chatbot they were reading a million books, and the chatbot took the statement literally instead of understanding the person was exaggerating. Sap shared that in his research he found that modern LLMs interpret non-literal statements literally more than 50% of the time.
The best way to circumvent this issue is to clarify your prompts with more explicit requirements that leave less room for interpretation or error. Wu suggested thinking of chatbots as assistants and instructing them clearly about exactly what you want done. Even though this approach might require more work when writing a prompt, the result should align more closely with your requirements.

If you have ever used an AI chatbot, you know they hallucinate, a term for outputting incorrect information. Hallucinations happen in different ways: outputting factually incorrect responses, incorrectly summarizing given information, or agreeing with false facts shared by a user. Sap said hallucinations happen between 1% and 25% of the time for general, daily use cases; for more specialized domains, such as law and medicine, the rate is greater than 50%. These hallucinations are difficult to spot because they are presented in a way that sounds plausible, even when they are nonsensical.

The models often reaffirm their responses with markers such as "I am confident," even when offering incorrect information. A research paper cited in the presentation said AI models were certain yet incorrect about their responses 47% of the time. As a result, the best way to protect against hallucinations is to double-check responses. Tactics include cross-verifying the output with external sources, such as Google or news outlets you trust, or asking the model the same question again in different wording to see whether it gives the same answer. Although it can be tempting to get ChatGPT's assistance with subjects you don't know much about, it is easier to identify errors if your prompts stay within your domain of expertise.

Gen AI tools are trained on large amounts of data, and they require data to continue learning and become smarter, more efficient models.
As a result, models often use their outputs for further training. The issue is that models often regurgitate their training data in their responses, meaning your private information could surface in someone else's responses, exposing your data to others. There is also a risk when using web applications, because your private information leaves your device to be processed in the cloud, which has security implications.

The best way to maintain good AI hygiene is to avoid sharing sensitive or personal data with LLMs. In instances where the assistance you want involves personal data, you can redact that data to get help without the risk. Many AI tools, including ChatGPT, also have options that allow users to opt out of data collection. Opting out is a good idea even if you don't plan on using sensitive data.

The capabilities of AI systems, and the ability to talk to these tools using natural language, have led some people to overestimate the power of these bots. Anthropomorphism, the attribution of human characteristics, is a slippery slope: if people think of AI systems as human-adjacent, they may trust them with more responsibility and data. One way to mitigate this issue, according to the experts, is to stop attributing human characteristics to AI models when referring to them. Instead of saying, "the model thinks you want a balanced response," Sap suggested a better alternative: "The model is designed to generate balanced responses based on its training data."

Although it may seem as if these models can help with almost every task, there are many instances in which they may not provide the best assistance. And although benchmarks are available, they cover only a small proportion of how users interact with LLMs.
LLMs may also not work best for everyone. Beyond the hallucinations discussed above, there have been recorded instances in which LLMs made racist decisions or reflected Western-centric biases, which means models may be unfit to assist in many use cases. The solution is to be thoughtful and careful when using LLMs: evaluate the impact of using an LLM to determine whether it is the right solution to your problem, and look at which models excel at which tasks so you can employ the best model for your requirements.
Computer science professors from Carnegie Mellon University offer insights on effectively using generative AI tools while avoiding common pitfalls and maintaining safety.
As generative AI becomes increasingly ubiquitous in our daily lives, from Google searches to chatbots, the need for understanding its limitations and using it safely has never been more critical. At the recent South by Southwest (SXSW) conference, Carnegie Mellon University computer science professors Maarten Sap and Sherry Tongshuang Wu shared valuable insights on how to navigate the world of AI tools effectively [1][2].
One of the key challenges with AI models is their tendency to misinterpret vague or ambiguous prompts. Large language models (LLMs) struggle with non-literal language, taking statements at face value more than 50% of the time [1]. To combat this, users should write specific, structured prompts that state exactly what they want the model to produce.
As Wu suggests, "Focus on what exactly you want, and don't assume the LLM will extrapolate your actual question" [1].
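The professors' advice to be explicit can be made concrete. Below is a minimal sketch of turning a vague one-liner into a structured prompt; the `build_prompt` helper and its field names are illustrative, not from the talk or any particular API:

```python
# Hypothetical sketch: assembling an explicit, structured prompt
# instead of a vague one-liner. Field names are illustrative only.

def build_prompt(task: str, audience: str, length: str, output_format: str) -> str:
    """Spell out the task, audience, length, and format explicitly."""
    return (
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Length: {length}\n"
        f"Format: {output_format}\n"
        "Do not infer unstated requirements; ask if anything is ambiguous."
    )

# A vague prompt leaves the model to guess everything:
vague = "Write something about solar panels."

# A structured prompt leaves far less room for interpretation:
specific = build_prompt(
    task="Explain how residential solar panels reduce electricity bills",
    audience="homeowners with no technical background",
    length="about 200 words",
    output_format="three short paragraphs",
)
```

The structured version does the "think of the chatbot as an assistant" work up front: every requirement the model would otherwise have to guess is stated.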
Perhaps the most significant concern with generative AI is its propensity for "hallucinations": generating false or inaccurate information. These hallucinations can occur up to 25% of the time in general use cases and even more frequently in specialized domains [2]. To mitigate this risk, verify answers against external sources, ask the model the same question in different wordings to check its consistency, and stick to subjects you know well enough to spot errors.
"Make conscious decisions about when to rely on a model and when not to," advises Wu. "Do not trust a model when it tells you it is very confident" [1].
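Wu's suggestion to re-ask the same question and compare outputs can be sketched as a simple self-consistency check. The `ask_model` callable below is a stand-in for whatever chatbot API you actually use; nothing here is from a specific library:

```python
from collections import Counter

def consistency_check(ask_model, question: str, n: int = 3):
    """Ask the same question n times and measure how often answers agree.

    ask_model is any callable that takes a prompt string and returns a
    string answer; here it stands in for a real chatbot API call.
    Low agreement is a signal to treat the answer with suspicion.
    """
    answers = [ask_model(question).strip().lower() for _ in range(n)]
    counts = Counter(answers)
    most_common, freq = counts.most_common(1)[0]
    return most_common, freq / n

# Toy stand-in model that answers inconsistently across calls:
replies = iter(["Paris", "Paris", "Lyon"])
answer, agreement = consistency_check(lambda q: next(replies), "Capital of France?")
# agreement of 2/3 here would flag the answer for external verification
```

As both professors stressed, agreement with itself is not proof of correctness; the check only flags instability, and external sources remain the real test.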
AI models often use their outputs for further training, which can lead to privacy concerns. Users should be cautious about sharing sensitive information with these systems. Sap demonstrated that some AI models struggle with information boundaries, potentially revealing private details inappropriately [1].
To maintain privacy, avoid sharing sensitive or personal data with LLMs, redact personal details where you can, and opt out of data collection when a tool offers the option.
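One way to follow this advice in practice is to redact obvious personal identifiers before a prompt ever leaves your machine. Here is a minimal standard-library sketch; the two regex patterns are illustrative and nowhere near exhaustive, so real PII scrubbing would need much broader coverage:

```python
import re

# Illustrative patterns only; real PII detection needs far more coverage
# (names, addresses, account numbers, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com and mention my cell 555-867-5309."
safe_prompt = redact(prompt)
# The model still gets enough context to help, without the raw identifiers.
```

The model can still draft the reply around the placeholders, and you substitute the real details back in locally.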
While AI's ability to mimic human conversation is impressive, it's crucial to remember that these systems are not human and lack true cognition. Sap warns against attributing human-like qualities to AI, as it can lead to misplaced trust and reinforce social stereotypes [1].
To maintain a healthy perspective, describe models in terms of what they are designed to do rather than what they "think" or "imagine."
Despite claims of advanced capabilities, current AI models have significant limitations. Benchmarks suggesting human-level performance don't always translate to real-world applications. Sap cautions against making rash decisions based on inflated perceptions of AI capabilities [1].
When considering AI use, weigh the benefits and potential harms of using a model against those of not using it, and match the model to the task at hand.
By following these expert tips, users can harness the power of generative AI more effectively while maintaining safety and accuracy in their interactions with these increasingly prevalent tools.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved