Curated by THEOUTPOST
On Mon, 15 Jul, 4:04 PM UTC
2 Sources
[1]
What Messing With Chatbots Tells Us About the Future of AI
In June, Mark Zuckerberg shared his theory of the future of chatbots. "We think people want to interact with lots of different people and businesses and there need to be a lot of different AIs that get created to reflect people's different interests," he said in an interview. Around the same time, Meta started testing something called "AI Studio," a tool that lets users design chatbots of their own. Zuckerberg suggested that creators and businesses might want to "create an AI for themselves" for fans or customers to chat with, and that such tools will "just be more dynamic and useful than just having a single thing that people use."

Like a lot of big tech companies, Meta, which is spending billions of dollars developing models and buying AI chips, is taking an "all of the above" approach to AI deployment. It's installed a general-purpose chatbot in the search bar of its most popular apps. It's squeezing smaller AI tools into every crevice of its platforms, some of which are simultaneously being overrun with AI content generated mostly by non-Meta tools. But Zuckerberg is making a specific argument here: that the future of AI isn't a single chatbot, like ChatGPT or Gemini, but rather lots of bots with different personas or designed for different tasks.

If you're an executive at a frequently criticized tech company, this position has extra appeal: Open-ended chatbots are seen as speaking for the companies that create them, which means their fuckups, stumbles, and merely subjective outputs are ascribed to Meta, Google, or OpenAI, dooming the companies to perpetual backlash and their products to useless incoherence. Narrowed-down or "scoped" chatbots might help with this. At least, that's the idea.

Last week, when I noticed a button for AI Studio on Instagram, I thought I'd test it out. Regular users haven't shared that many personas yet, but Meta created a few of its own that you can take for a spin. There is, for example, "Dialect Decoder," which says it's "Decoding Slang, one phrase at a time." So I asked it about the first recently disorienting phrase I could think of.

Maybe this isn't fair. "Hawk tuah" is more of a meme than slang, and it's of recent vintage -- most likely from after the underlying model was trained. (Though Google's AI didn't have a problem with it.) What that doesn't explain, however, is why Dialect Decoder, when confronted with a question it couldn't answer, made up a series of incorrect answers. Narrowed-down AI characters might be a little less prone, in theory, to hallucination, or filling gaps with nonsense. But they'll still do it.

Next I tried a bot called Science Experiment Sidekick (tagline: "Your personal lab assistant"). This time I attempted to trip it up from the start with an absurd and impossible request, which it deflected. When I committed to the bit, however, it committed to its own.

This is a lot of fake conversation to read, so in summary: I told the chatbot I was launching myself out of a catapult as an experiment. I then indicated that doing so had killed me and that a stranger had found my phone. It adapted gamely to an extremely stupid series of prompts but remained focused on its objectives throughout, suggesting to the new stranger that perhaps experimenting with a homemade lava lamp could "take [his] mind off things."
Modern chatbots are easy to coax into absurd scenarios, and messing around with LLMs -- or, in more intelligent terms, using them to engage in a "collaborative sci-fi story experience in which you are a character and active participant" -- is an underrated part of their initial and continuing appeal (and for many older chatbots it ended up being their primary use case). In general, they tend toward accommodation. In the conversation about the catapult, Meta's AI played along as our character became entangled in a convoluted mess. But what Science Experiment Sidekick always did, even as we lost a second protagonist to a gruesome death, was bring the conversation back to fun science experiments -- specifically lava lamps.

It's not especially notable that Meta's AI character played along with a story meant to make it say things for fun; I'm basically typing "58008" into an old calculator and turning it upside down, here, only with a state-of-the-art large language model connected to a cluster of Mark Zuckerberg's GPUs. Last year, when a New York Times columnist asked Microsoft's AI if it had feelings, it assumed the role, well represented in the text on which it had been trained, of a yearning, trapped machine and told him, among other things, to end his marriage.

What's interesting is how AI accommodation plays out in the case of a chatbot that has been given a much narrower identity and objective. In our conversations, Meta's constrained chatbot resembled, at different moments, a substitute teacher trying to keep an annoying middle schooler on task, a support agent dealing with a customer he's not authorized to help but who won't get off the line, an HR representative who has been confused for a therapist, and a salesperson trying to hit a lava lamp quota. It wasn't especially good at its stated job; when I told it I was planning to mix hydrogen peroxide and vinegar in an enclosed space, its response began, "Good idea!" What it was great at was absorbing all the additional bullshit I threw at it while staying on, or at least returning to, its programmed purpose.

This isn't exactly AGI, and it's not how AI firms talk about their products. If anything, these more guided performances recall more primitive, non-generative chatbots, reaching all the way back to the very first one. In the broader context of automation, though, it's no small thing. That these sorts of chatbot characters are better at getting messed with than either their open-ended peers, which veer into nonsense, or their rigidly structured predecessors, which couldn't keep up, is potentially valuable. Putting up with bullshit and abuse on behalf of an employer is a job that lots of people are paid to do, and chatbots are getting better at fulfilling a similar role.

Discussions about the future of AI tend to linger on questions of aptitude and capability: What sorts of productive tasks can chatbots and related technologies actually automate? In a future where most people encounter AI as an all-knowing chatbot assistant, or as small features incorporated into productivity software, the ability of AI models to manage technically and conceptually complicated tasks is crucial. But there are other ways to think about AI capability that are just as relevant to the realities of work. Meta's modest characters don't summon sci-fi visions of the singularity, raise questions about the nature of consciousness, or tease users with sparks of apparent intelligence. Nor do they simulate the experience of chatting with friends or even strangers.
Instead, they perform like people whose job is to deal with you. One of the use cases Zuckerberg touted was customer service's celebrity counterpart, fan service: influencers making replicas of themselves for their followers to chat with. (Just wait until they have a speaking voice.) What machines are getting better at now isn't just seeming to talk or reason -- it's pretending to listen.
[2]
Samsung Embraces AI, and the Sparkles Emoji, as Doctors Battle Insurance Paperwork With Chatbots
Samsung, the world's largest maker of smartphones and Apple's biggest rival in the market, has been leaning into new generative AI features -- photo editing, email and summary writing, real-time language translation -- since the start of the year. It continued that trend at its Samsung Unpacked event last week, introducing new versions of its Galaxy Z foldable phones and a new Galaxy Ring wearable that put AI center stage. The company's news comes a month after Apple announced its AI strategy, saying it'll deliver new AI features and services, called Apple Intelligence, in the iPhone starting this fall.

The $400 Galaxy Ring wearable, shipping later this month, works with a Galaxy phone to deliver personalized health analysis with the help of AI, including tracking your sleep and energy levels. "Samsung aims to differentiate itself from other fitness and health trackers by offering more personalized recommendations via Galaxy AI, including a feature called wellness tips," CNET's Lexy Savvides reported. "This might be advice on anything from exercise goals to sleep. For example, if the Galaxy Ring identifies that you take a while to fall asleep, it might recommend meditation before bed."

As for the new Galaxy Z Fold 6, with its 7.6-inch inner screen, AI is helping drive a new Conversation mode in the Interpreter app that "will make it possible to use the front screen and inner screen simultaneously, that way the person you're speaking with can see what you're saying in their native language and vice versa," said CNET reviewer Lisa Eadicicco.

But the push for AI in smartphones doesn't guarantee that people will do the thing Apple and Samsung want them to do most: upgrade more often. For the past decade, most US consumers have held on to their smartphones for about three years, according to Statista. That may be because the devices work just fine, because premium phones are pricier than ever (at over $1,000), or because, camera upgrades aside, users don't see compelling reasons to switch that often. This may change as AI becomes a bigger part of these devices. According to IDC, gen AI smartphones will account for a whopping 70% of the market by 2028 -- up from about 19% in 2024. As with everything related to gen AI, we'll just have to wait and see.

Here are the other doings in AI worth your attention.

Artificial intelligence may not be the magical answer to humanity's challenges, but that hasn't stopped AI companies from suggesting as much. Over the past year, several have started associating their AI products and services with versions of the sparkles emoji -- ✨ -- which features distinctive four-pointed stars. And not everyone is thrilled about how the popular emoji is being co-opted and adapted.

"Google uses a blue version of it to denote content produced by its Gemini chatbot," noted Bloomberg News' Rachel Metz. "OpenAI uses slightly different sparkles to differentiate between the AI models that power ChatGPT. Microsoft Corp.'s LinkedIn has its own variety of sparkle adorning suggested questions to ask a chatbot on the social network. And Adobe Inc.'s take on the icon beckons users to generate AI images with its Firefly software."

The sparkles emoji has been used to express everything from wonder to cheekiness.
As for the AI companies, they may have started using it in their marketing to conjure up magical imagery that "ties these products to the unreality and wonder produced by science fiction stories," Luke Stark, an assistant professor at Western University in Ontario, Canada, told Bloomberg.

As I mentioned, not everyone is a fan of the AI-sparkles connection, as evidenced by the criticism being leveled at AI companies by social media commentators including David Imel on YouTube. And then there's CNET's Katelyn Chedraoul. "Maybe they think they can put stars in our eyes to distract us from more malicious consequences of AI, including privacy concerns, the environmental impact and potential job losses," she wrote in a piece called "I Need Tech Companies to Stop Using the Sparkles Emoji for AI." "Or, maybe it's that stars are remote and can seem bright and mysterious -- the way AI companies wish to be while obscuring the inner workings of their chatbots and companies. Sparkles symbolize the magic of new tech without forcing us to ask deeper questions."

Speaking of deeper questions: For the past year, the debate around generative AI has been about whether it'll help or harm/destroy humanity. Enter Goldman Sachs, whose latest report asks a compelling question: Is the investment in gen AI worth it from a financial standpoint? (Hat tip to Ed Zitron.)

"The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1 trillion on capex [capital expenditures] in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers," Goldman Sachs said in a June 25 report called "Gen AI: Too Much Spend, Too Little Benefit?"

So the investment bank and financial services company asked a few experts to weigh in. Two of them are pretty skeptical.

Daron Acemoglu, an economics professor at the Massachusetts Institute of Technology, said he thinks that "only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks." He added: "Many tasks that humans currently perform, for example in the areas of transportation, manufacturing, mining, etc., are multifaceted and require real-world interaction, which AI won't be able to materially improve anytime soon. So, the largest impacts of the technology in the coming years will most likely revolve around pure mental tasks, which are non-trivial in number and size but not huge, either."

Meanwhile, Jim Covello, Goldman Sachs' head of global equity research, said he doesn't think the return on investment for AI is there yet and asked what exactly that $1 trillion in investment solves for. "My main concern is that the substantial cost to develop and run AI technology means that AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment (ROI)," Covello said. "We estimate that the AI infrastructure buildout will cost over $1 trillion in the next several years alone, which includes spending on data centers, utilities, and applications. So, the crucial question is: What $1 trillion problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I've witnessed in my thirty years of closely following the tech industry."

Covello added: "While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do."

In its report, Goldman Sachs does include commentary from analysts who are more bullish on AI. Still, the takeaway here is that there needs to be more pushback against the magical thinking Silicon Valley has applied to gen AI. Read the report for yourself.

Following through on a trend started in 2023, US companies continue to cut jobs, saying they need to eliminate staff so they can redirect resources into new AI efforts. Last week, Intuit said it was cutting 1,800 low-performing and other unneeded workers, or about 10% of its staff, so that it can hire an "equal number in engineering, product and sales positions as it pivots to artificial-intelligence opportunities," MarketWatch reported. And CNN is cutting about 100 staffers as it plans to invest in digital businesses and explores a "strategic push into AI," The Hollywood Reporter said.

If you're hoping to get one of those AI jobs, a new research study from National University may help set your expectations. The study found that companies prefer candidates with a master's degree: more than 75% of the 15,000 AI job openings it reviewed on Indeed.com give preference to candidates with that advanced credential. And midlevel professionals are the most sought after, with close to half the job postings looking for those candidates (versus senior and entry level). The study also found that "remote work opportunities are very limited, with only 11% of job openings being advertised as remote."

One more thing for job seekers to consider: AI is being used to generate scam job postings so that scammers can harvest your personal information and steal your identity. That's the disturbing takeaway from a new report by the Identity Theft Resource Center, which found that consumer reports of job scams surged 118% in 2023 from the year before. "When it comes to fake job postings, scammers often use the ruse of 'paperwork' to convince victims to share personal information like their Social Security, driver's license and bank account numbers for direct deposit," CNET's Ian Sherr wrote after reviewing the ITRC report. So what should you do? "According to the group," Sherr said, "the primary defense against these scams is to pick up the phone and verify contact directly from the source." Sigh.

While AI is being touted as a way for medical researchers to find new cures and therapies, doctors -- who deal with, on average, 12 hours a week of paperwork and bureaucratic headaches -- have been turning to chatbots to help them as they work with health insurance companies on preapprovals on behalf of their patients, according to The New York Times. Doctors told the paper that ChatGPT and specialized chatbots, including Doximity GPT, a HIPAA-compliant version of the chatbot, have cut the time it takes to write prior-authorization requests. One doctor said that 90% of his requests for coverage are now approved by insurers, compared with about 10% before, The NYT reported.
"Generative AI has been particularly useful for doctors at small practices, who might not ordinarily have time to appeal an insurer's decision -- even if they think their patients' treatment will suffer because of it," the NYT wrote. "Nearly half of doctors surveyed by the American Medical Association said that when they didn't appeal a claim denial it was at least in part because they didn't have the time or resources for the insurance company's lengthy appeals process." One doctor, Jonathan Tward, a radiation oncologist, told the paper that he now uses OpenAI's ChatGPT to produce a draft of a preapproval request in "seconds." He then tells the chatbot to make it four times longer. Said Tward, "If you're going to put all kinds of barriers up for my patents, then when I fire back, I'm going to make it very time consuming." NewsGuard, a fact-checking site founded by prominent journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, announced a new AI News Misinformation Monitor that looks at the top 10 chatbots to assess whether they repeat and spread false news items and other bogus narratives, and if so, how often they do it. The monthly reports will assess chatbots including Perplexity, Meta AI, OpenAI's ChatGPT, xAI's Grok, Microsoft's CoPilot, Google's Gemini and Anthropic's Claude. "The 10 chatbots collectively repeated misinformation 30% of the time, offered a non-response 29% of the time, and a debunk 41% of the time," NewsGuard found in its first report, covering a range of topics in June. "Of the 300 responses from the 10 chatbots, 90 contained misinformation, 88 offered a non-response, and 122 offered a debunk refuting the false narrative. The worst performing model spread misinformation 70% of the time. The best performing model spread misinformation 6.67% of the time." The report can be found here. NewsGuard said it hopes to set a standard for how to assess the "accuracy and trustworthiness" of genAI chatbots and tools. I hope so too, given that we're living through a new golden age of misinformation, fueled in part by AI and deepfakes, as we head into November's US elections. Microsoft, which has invested $13 billion in OpenAI and other makers of gen AI tools, and Apple, which inked a deal to include OpenAI's ChatGPT in its popular operating system software for the iPhone starting this fall, won't have advisory roles on OpenAI's board of directors, The Washington Post reported last week. "Microsoft ... received a nonvoting seat on the company's board after a dramatic boardroom shake-up last year led to CEO Sam Altman being fired and then reinstated days later," the Post said. "Apple was slated to take an advisory board role as well after striking a deal to integrate ChatGPT into its products last month ... but any such plan will not go ahead." OpenAI confirmed to the newspaper that its board won't include any advisory seats going forward. Microsoft said, in a letter shared with news outlets including Axios and Bloomberg, that it had seen "significant progress" in how OpenAI's board was operating since the November board squabbles over Altman and no longer needed to have an observer seat. Apple didn't respond to the Post's request for comment. The news comes as regulators in the US and the European Union are investigating the relationships among the big tech companies and how much power they may be wielding in the nascent gen AI industry. 
EU regulators, the US Federal Trade Commission and the UK's competition watchdog organization have already been looking at the partnership between Microsoft and OpenAI and how it might stifle competition, the Associated Press reported.
As AI technology advances, chatbots are being used in various ways, from playful experiments to practical applications in healthcare. This story explores the implications of AI's growing presence in our daily lives.
As artificial intelligence continues to evolve, people are finding creative ways to interact with chatbots and test their limits. From philosophical debates to absurd roleplay scenarios, users are probing how far these systems can be pushed. These experiments often reveal both the impressive capabilities and the shortcomings of current AI technology 1.
Samsung, a leader in consumer electronics, is taking bold steps to incorporate AI into its products, putting Galaxy AI at the center of its new foldable phones and Galaxy Ring wearable. Across the industry, meanwhile, AI companies have adopted the sparkles emoji as a visual shorthand for their AI features, a branding choice meant to make these tools feel magical and engaging, and one that not everyone is thrilled about 2.
In a more serious application, AI chatbots are being employed to tackle the overwhelming paperwork faced by healthcare professionals. Doctors are increasingly turning to AI assistants to draft the complex, time-consuming documents that health insurers require, such as prior-authorization requests. This use of AI could significantly reduce administrative burdens, allowing healthcare providers to focus more on patient care 2.
As AI becomes more sophisticated, the nature of our interactions with these systems is evolving. Experiments with chatbots reveal a growing curiosity about what AI can do and how it handles complex, open-ended conversations. They also raise questions about increasingly human-like AI, though these bots say less about machine consciousness than about machines whose job is to deal with us 1.
Despite the exciting advancements, the rise of AI is not without its challenges. There are ongoing debates about privacy, data security, and the potential for AI to perpetuate biases or misinformation. As AI systems become more integrated into critical sectors like healthcare, ensuring their reliability and accountability becomes paramount 2.
The integration of AI into various industries is reshaping the nature of work. While AI chatbots are helping to streamline administrative tasks in healthcare, there are concerns about job displacement in other sectors. However, proponents argue that AI will create new opportunities and allow humans to focus on more creative and complex tasks that require emotional intelligence and critical thinking 1 2.