Samsung Embraces AI, and the Sparkles Emoji, as Doctors Battle Insurance Paperwork With Chatbots
Samsung, the world's largest maker of smartphones and Apple's biggest rival in the market, has been leaning into new generative AI features -- photo editing, email and summary writing, real-time language translation -- since the start of the year. It continued that trend at its Samsung Unpacked event last week, introducing new versions of its Galaxy Z foldable phones and a new Galaxy Ring wearable that put AI center stage.
The company's news comes a month after Apple announced its AI strategy, saying it'll deliver new AI features and services, called Apple Intelligence, in the iPhone starting this fall.
The $400 Galaxy Ring wearable, shipping later this month, works with a Galaxy phone to deliver personalized health analysis with the help of AI, including tracking your sleep and energy levels. "Samsung aims to differentiate itself from other fitness and health trackers by offering more personalized recommendations via Galaxy AI, including a feature called wellness tips," CNET's Lexy Savvides reported. "This might be advice on anything from exercise goals to sleep. For example, if the Galaxy Ring identifies that you take a while to fall asleep, it might recommend meditation before bed."
As for the new Galaxy Z Fold 6, with its 7.6-inch inner screen, AI is helping drive a new Conversation mode in the Interpreter app that "will make it possible to use the front screen and inner screen simultaneously, that way the person you're speaking with can see what you're saying in their native language and vice versa," said CNET reviewer Lisa Eadicicco.
But the push for AI in smartphones doesn't guarantee that people will do the thing Apple and Samsung want them to do most: upgrade more often.
For the past decade, most US consumers have held on to their smartphones for about three years, according to Statista. That may be because the devices work just fine, because premium phones are pricier than ever (at over $1,000), or because, camera upgrades aside, users don't see compelling reasons to switch more often. This may change as AI becomes a bigger part of these devices. According to IDC, gen AI smartphones will account for a whopping 70% of the market by 2028 -- up from about 19% in 2024.
As with everything related to gen AI, we'll just have to wait and see.
Here are the other doings in AI worth your attention.
Artificial intelligence may not be the magical answer to humanity's challenges, but that hasn't stopped AI companies from suggesting as much. Over the past year, several have started associating their AI products and services with versions of the sparkles emoji -- ✨ -- which features distinctive four-pointed stars. And not everyone is thrilled about how the popular emoji is being co-opted and adapted.
"Google uses a blue version of it to denote content produced by its Gemini chatbot," noted Bloomberg News' Rachel Metz. "OpenAI uses slightly different sparkles to differentiate between the AI models that power ChatGPT. Microsoft Corp.'s LinkedIn has its own variety of sparkle adorning suggested questions to ask a chatbot on the social network. And Adobe Inc.'s take on the icon beckons users to generate AI images with its Firefly software."
The sparkles emoji has been used to express everything from wonder to cheekiness. As for the AI companies, they may have started using it in their marketing to conjure up magical imagery that "ties these products to the unreality and wonder produced by science fiction stories," Luke Stark, an assistant professor at Western University in Ontario, Canada, told Bloomberg.
As I mentioned, not everyone is a fan of the AI-sparkles connection, as evidenced by the criticism being leveled at AI companies by social media commentators including David Imel on YouTube.
And then there's CNET's Katelyn Chedraoul. "Maybe they think they can put stars in our eyes to distract us from more malicious consequences of AI, including privacy concerns, the environmental impact and potential job losses," she wrote in a piece called I Need Tech Companies to Stop Using the Sparkles Emoji for AI.
"Or, maybe it's that stars are remote and can seem bright and mysterious -- the way AI companies wish to be while obscuring the inner workings of their chatbots and companies. Sparkles symbolize the magic of new tech without forcing us to ask deeper questions."
Speaking of deeper questions: For the past year, the debate around generative AI has been about whether it'll help or harm/destroy humanity. Enter Goldman Sachs, whose latest report asks a compelling question: Is the investment in gen AI worth it from a financial standpoint? (Hat tip to Ed Zitron.)
"The promise of generative AI technology to transform companies, industries, and societies continues to be touted, leading tech giants, other companies, and utilities to spend an estimated ~$1 trillion on capex [capital expenditures] in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far beyond reports of efficiency gains among developers," Goldman Sachs said in a June 25 report called Gen AI: Too Much Spend, Too Little Benefit?
So the investment bank and financial services company asked a few experts to weigh in. Two of them are pretty skeptical.
Daron Acemoglu, an economics professor at the Massachusetts Institute of Technology, said he thinks that "only a quarter of AI-exposed tasks will be cost-effective to automate within the next 10 years, implying that AI will impact less than 5% of all tasks."
He added: "Many tasks that humans currently perform, for example in the areas of transportation, manufacturing, mining, etc., are multifaceted and require real-world interaction, which AI won't be able to materially improve anytime soon. So, the largest impacts of the technology in the coming years will most likely revolve around pure mental tasks, which are non-trivial in number and size but not huge, either."
Meanwhile, Jim Covello, Goldman Sachs' head of global equity research, said he doesn't think the return on investment for AI is there yet and asked what exactly that $1 trillion in investment solves for.
"My main concern is that the substantial cost to develop and run AI technology means that AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment (ROI)," Covello said. "We estimate that the AI infrastructure buildout will cost over $1 trillion in the next several years alone, which includes spending on data centers, utilities, and applications. So, the crucial question is: What $1 trillion problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I've witnessed in my thirty years of closely following the tech industry."
Covello added: "While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do."
In its report, Goldman Sachs does include commentary from analysts who are more bullish on AI. Still, the takeaway here is that there needs to be more pushback against the magical thinking Silicon Valley has applied to gen AI. Read the report for yourself.
Continuing a trend that started in 2023, US companies are still cutting jobs, saying they need to eliminate staff so they can redirect resources into new AI efforts.
Last week, Intuit said it was cutting 1,800 low-performing and other unneeded workers, or about 10% of its staff, so that it can hire an "equal number in engineering, product and sales positions as it pivots to artificial-intelligence opportunities," MarketWatch reported. CNN, meanwhile, said it was cutting about 100 staffers as it invests in its digital business and explores a "strategic push into AI," The Hollywood Reporter reported.
If you're hoping to get one of those AI jobs, a new research study from National University may help set your expectations. The study found that companies prefer candidates with a master's degree. More than 75% of the 15,000 AI job openings it reviewed on Indeed.com give preference to candidates with that advanced credential. And midlevel professionals are the most sought after, with close to half the job postings looking for those candidates (versus senior and entry level). The study also found that "remote work opportunities are very limited, with only 11% of job openings being advertised as remote."
One more thing for job seekers to consider: AI is being used to generate scam job postings so that scammers can harvest your personal information and then steal your identity. That's the disturbing takeaway from a new report by the Identity Theft Resource Center, which found that consumer reports of job scams surged 118% in 2023 from the year before.
"When it comes to fake job postings, scammers often use the ruse of 'paperwork' to convince victims to share personal information like their Social Security, driver's license and bank account numbers for direct deposit," CNET's Ian Sherr wrote after reviewing the ITRC report.
So what should you do? "According to the group," Sherr said, "the primary defense against these scams is to pick up the phone and verify contact directly from the source."
Sigh.
While AI is being touted as a way for medical researchers to find new cures and therapies, doctors have been putting it to a more mundane use. Physicians, who deal with an average of 12 hours a week of paperwork and bureaucratic headaches, have been turning to chatbots to help them wrangle with health insurance companies over preapprovals on behalf of their patients, according to The New York Times.
Doctors told the paper that ChatGPT and specialized chatbots including Doximity GPT, a HIPAA-compliant version of the chatbot, have cut the time it takes to write prior-authorization requests. One doctor said that 90% of his requests for coverage have been approved by insurers, compared with about 10% before, The NYT reported.
"Generative AI has been particularly useful for doctors at small practices, who might not ordinarily have time to appeal an insurer's decision -- even if they think their patients' treatment will suffer because of it," the NYT wrote. "Nearly half of doctors surveyed by the American Medical Association said that when they didn't appeal a claim denial it was at least in part because they didn't have the time or resources for the insurance company's lengthy appeals process."
One doctor, Jonathan Tward, a radiation oncologist, told the paper that he now uses OpenAI's ChatGPT to produce a draft of a preapproval request in "seconds." He then tells the chatbot to make it four times longer. Said Tward, "If you're going to put all kinds of barriers up for my patients, then when I fire back, I'm going to make it very time consuming."
NewsGuard, a fact-checking site founded by prominent journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, announced a new AI News Misinformation Monitor that looks at the top 10 chatbots to assess whether they repeat and spread false news items and other bogus narratives, and if so, how often they do it. The monthly reports will assess chatbots including Perplexity, Meta AI, OpenAI's ChatGPT, xAI's Grok, Microsoft's Copilot, Google's Gemini and Anthropic's Claude.
"The 10 chatbots collectively repeated misinformation 30% of the time, offered a non-response 29% of the time, and a debunk 41% of the time," NewsGuard found in its first report, covering a range of topics in June. "Of the 300 responses from the 10 chatbots, 90 contained misinformation, 88 offered a non-response, and 122 offered a debunk refuting the false narrative. The worst performing model spread misinformation 70% of the time. The best performing model spread misinformation 6.67% of the time." The report can be found here.
NewsGuard said it hopes to set a standard for assessing the "accuracy and trustworthiness" of gen AI chatbots and tools.
I hope so too, given that we're living through a new golden age of misinformation, fueled in part by AI and deepfakes, as we head into November's US elections.
Microsoft, which has invested $13 billion in OpenAI and other makers of gen AI tools, and Apple, which inked a deal to include OpenAI's ChatGPT in its popular operating system software for the iPhone starting this fall, won't have advisory roles on OpenAI's board of directors, The Washington Post reported last week.
"Microsoft ... received a nonvoting seat on the company's board after a dramatic boardroom shake-up last year led to CEO Sam Altman being fired and then reinstated days later," the Post said. "Apple was slated to take an advisory board role as well after striking a deal to integrate ChatGPT into its products last month ... but any such plan will not go ahead."
OpenAI confirmed to the newspaper that its board won't include any advisory seats going forward. Microsoft said, in a letter shared with news outlets including Axios and Bloomberg, that it had seen "significant progress" in how OpenAI's board was operating since the November board squabbles over Altman and no longer needed to have an observer seat. Apple didn't respond to the Post's request for comment.
The news comes as regulators in the US and the European Union are investigating the relationships among the big tech companies and how much power they may be wielding in the nascent gen AI industry. EU regulators, the US Federal Trade Commission and the UK's competition watchdog organization have already been looking at the partnership between Microsoft and OpenAI and how it might stifle competition, the Associated Press reported.