The beginning of the year is a great time to practice some basic cyber hygiene. We've all been told to patch, change passwords, and update software. But one concern that has increasingly crept to the forefront is the sometimes quiet integration of potentially privacy-invading AI into the programs we use.
"AI's rapid integration into our software and services has and should continue to raise significant questions about privacy policies that preceded the AI era," said Lynette Owens, vice president, global consumer education at cybersecurity company Trend Micro. Many programs we use today -- whether it be email, bookkeeping, or productivity tools, and social media and streaming apps -- may be governed by privacy policies that lack clarity on whether our personal data can be used to train AI models.
"This leaves all of us vulnerable to uses of our personal information without the appropriate consent. It's time for every app, website, or online service to take a good hard look at the data they are collecting, who they're sharing it with, how they're sharing it, and whether or not it can be accessed to train AI models," Owens said. "There's a lot of catch up needed to be done."
Owens said the potential issues touch most of the programs and applications we use on a daily basis.
"Many platforms have been integrating AI into their operations for years, long before AI became a buzzword," she said.
As an example, Owens points out that Gmail has used AI for spam filtering and predictive text with its "Smart Compose" feature. "And streaming services like Netflix rely on AI to analyze viewing habits and recommend content," Owens said. Social media platforms like Facebook and Instagram have long used AI for facial recognition in photos and personalized content feeds.
"While these tools offer convenience, consumers should consider the potential privacy trade-offs, such as how much personal data is being collected and how it is used to train AI systems. Everyone should carefully review privacy settings, understand what data is being shared, and regularly check for updates to terms of service," Owens said.
One tool that has come in for particular scrutiny is Microsoft's connected experiences, which has been around since 2019 and is turned on by default, with an optional opt-out. It was recently highlighted in press reports -- inaccurately, according to the company as well as some outside cybersecurity experts who have examined the issue -- as a feature that is new or whose settings have been changed. Sensational headlines aside, privacy experts do worry that advances in AI could allow data and words in programs like Microsoft Word to be used in ways that privacy settings do not adequately cover.
"When tools like connected experiences evolve, even if the underlying privacy settings haven't changed, the implications of data use might be far broader," Owens said.
A spokesman for Microsoft wrote in a statement to CNBC that Microsoft does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models. He added that in certain instances, customers may consent to having their data used for specific purposes, such as custom model development explicitly requested by some commercial customers. He said the setting also enables cloud-backed features many people have come to expect from productivity tools, such as real-time co-authoring, cloud storage, and tools like Editor in Word that provide spelling and grammar suggestions.
Ted Miracco, CEO of security software company Approov, said features like Microsoft's connected experiences are a double-edged sword -- promising enhanced productivity but introducing significant privacy red flags. The setting's default-on status could, Miracco said, opt people into something they aren't necessarily aware of, primarily related to data collection, and organizations may also want to think twice before leaving the feature on.
"Microsoft's assurance provides only partial relief, but still falls short of mitigating some real privacy concern," Miracco said.
Perception can be its own problem, according to Kaveh Vahdat, founder of RiseOpp, an SEO marketing agency.
"Having the default to enablement shifts the dynamic significantly," Vahdat said. "Automatically enabling these features, even with good intentions, inherently places the onus on users to review and modify their privacy settings, which can feel intrusive or manipulative to some."
His view is that companies need to be more transparent, not less, in an environment where there is a lot of distrust and suspicion regarding AI.
Companies including Microsoft, he said, should make these features opt-in rather than on by default, and should provide more granular, non-technical information about how personal content is handled, because perception can become reality.
"Even if the technology is completely safe, public perception is shaped not just by facts but by fears and assumptions -- especially in the AI era where users often feel disempowered," he said.