Curated by THEOUTPOST
On Wed, 17 Jul, 12:03 AM UTC
2 Sources
[1]
How trust and safety leaders at top tech companies are approaching the security threat of AI: 'Trust but verify'
Safety officers at large companies that have integrated AI tools like ChatGPT into their businesses are issuing a similar warning to their colleagues: "Trust, but verify."

Speaking at Fortune's Brainstorm Tech conference on Tuesday, Salesforce chief trust officer Brad Arkin detailed how the company, and its CEO Marc Benioff, balance customer demand for cutting-edge AI services against the need to keep those services from opening customers up to new vulnerabilities. "Trust is more than just security," Arkin said, adding that the company's key focus is to create new features for its users that don't go against their interests.

Against the backdrop of breakneck AI adoption, however, is the reality that AI makes it easier for criminals to attack potential victims. Malicious actors can operate without language barriers, for example, and can more easily send a massive volume of social engineering scams such as phishing emails.

Companies have long dealt with the threat of so-called "shadow IT," the practice of employees using hardware and software not managed by a firm's technology department. Shadow AI could create even more vulnerabilities, especially without proper training. Still, Arkin said that AI should be approached like any tool: there will always be dangers, but proper instruction can lead to valuable results.

Speaking on the same panel, Cisco chief security and trust officer Anthony Grieco shared the advice he passes on to employees about generative AI platforms like ChatGPT: "If you wouldn't tweet it, if you wouldn't put it on Facebook, if you wouldn't publish it publicly, don't put it into those tools," Grieco said.

Even with proper training, the ubiquity of AI and the accompanying rise in cybersecurity threats mean that every company has to rethink its approach to IT.
A working paper published in October by the nonprofit National Bureau of Economic Research found rapid adoption of AI across the country, especially among the largest firms: more than 60% of companies with over 10,000 employees are using AI, the group said.

Wendi Whitmore, senior vice president of the "special forces unit" at cybersecurity giant Palo Alto Networks, said on Tuesday that cybercriminals have deeply researched how businesses operate, including how they work with vendors and operators. As a result, employees should be trained to scrutinize every piece of communication for phishing and other related attacks. "You can be concerned about the technology and put some limitations around it," she said. "But the reality is that attackers don't have any of those limitations."

Despite the novel perils, Accenture global security lead Lisa O'Connor touted the promise of what she called "responsible AI," the idea that organizations should adopt a set of governance principles for how they want to deploy the technology. She added that Accenture has long embraced large language models, including working with Fortune on its own custom-trained LLM. "We drank our own champagne," O'Connor said.
[2]
Salesforce's AI chief says the company uses its Einstein products internally: 'We like to drink our own martinis'
Salesforce's AI chief says companies are frustrated by generative AI's unreliability, such as hallucinations, in which AI produces incorrect or biased information, and that the problem keeps many of them from widely releasing products that incorporate the technology. That leads to a broader question for businesses, Clara Shih, CEO of Salesforce AI, said on stage at Fortune's Brainstorm Tech conference in Park City, Utah, on Monday: "Do I trust AI to drive business?"

For many, the answer isn't so simple. "Companies come to us and they want to know how they can really deploy these solutions in a way that actually moves the needle," she explained. Salesforce is well-positioned to tackle this challenge, Shih argued, because customers have already entrusted their data and business processes to Salesforce over the years. "It provides this ideal grounding for the AI to really inject the context that's needed for the models really to be able to perform," she said.

Shih, who was appointed the company's first-ever head of AI just a few months after OpenAI released ChatGPT in November 2022, said trust is critical for all of today's generative AI, even beyond core issues such as data security and data privacy. She emphasized that Salesforce is implementing all of its AI internally, which helps the company understand its customers' concerns. "We like to drink our own martinis," Shih joked. "It's bumpy at times, but I really like that accountability of us being customer zero."

Salesforce has been developing AI to help clients, who primarily use its software for customer service, sales, and marketing automation, since releasing its original Einstein AI product in 2016. Named in a nod to Salesforce CEO Marc Benioff's admiration for Albert Einstein, the product focused on predictive AI, which was cutting-edge technology at the time.
A few months after OpenAI's buzzy ChatGPT launched in November 2022, Salesforce debuted Einstein GPT, one of the first chatbots from a major company to be built on a large language model. Salesforce followed up with a variety of other AI products, including Einstein Copilot, Copilot Studio, Prompt Builder, and the Einstein Trust Layer, along with retrieval-augmented generation (RAG) and hybrid search capabilities.

On Monday, Shih touted a new planned offering called Einstein Service Agent, a chatbot trained on a company's data that is designed to handle customer service inquiries and can easily hand off its work to a human customer service agent when needed. Shih said the chatbot's focus may be expanded beyond customer service in the future.

But, she added, trust is key for these types of products, and trust itself depends on the specific context in which the tools are being used. For example, she said: "Do I trust the AI to help me figure out which customers to call on this order?" Or, "Do I trust the AI to help me answer customers' questions" using Salesforce's new chatbot? At least for Salesforce, Shih insists the answer is clearly yes.
Salesforce, Cisco, and Accenture form an alliance to address AI-related security concerns. Meanwhile, Salesforce's AI chief discusses the company's internal use of its Einstein products.
In a significant move to address the growing concerns surrounding artificial intelligence (AI) security, tech giants Salesforce, Cisco, and Accenture have joined forces to create a new initiative. This collaboration aims to tackle the potential threats posed by AI technologies in the rapidly evolving digital landscape 1.
The alliance, which brings together some of the most influential players in the tech industry, recognizes the urgent need to establish robust safety measures and protocols to mitigate risks associated with AI deployment. As AI continues to permeate various sectors of the economy, the potential for malicious actors to exploit these technologies has become a pressing concern for businesses and consumers alike.
While actively participating in this industry-wide security initiative, Salesforce is also making strides in implementing its own AI technologies internally. The company's AI chief recently revealed that Salesforce extensively uses its Einstein products within the organization 2.
In a statement that highlights the company's confidence in its AI offerings, the AI chief remarked, "We like to drink our own martinis." This approach of utilizing their own AI products internally not only demonstrates Salesforce's trust in its technology but also provides valuable insights for further development and refinement of these tools 2.
The collaboration between Salesforce, Cisco, and Accenture underscores the growing recognition of trust and safety as critical components in the AI ecosystem. As AI systems become more sophisticated and widely adopted, ensuring their responsible use and protecting against potential misuse has become a top priority for industry leaders 1.
This initiative is expected to focus on developing best practices, establishing industry standards, and creating frameworks for AI governance. By pooling their expertise and resources, these tech giants aim to create a more secure environment for AI deployment across various sectors.
The formation of this alliance signals a shift in the tech industry's approach to AI development and implementation. Rather than competing in isolation, companies are recognizing the need for collaboration to address common challenges and threats posed by AI technologies 1.
This cooperative effort is likely to have far-reaching implications for the broader tech industry. It may set a precedent for other companies to join forces in addressing AI-related security concerns, potentially leading to the establishment of industry-wide standards and protocols for AI safety and ethics.
As the initiative takes shape, the tech community and stakeholders will be watching closely to see how these industry leaders translate their commitment into actionable strategies. The success of this collaboration could pave the way for a more secure and trustworthy AI ecosystem, benefiting businesses and consumers alike.
The partnership between Salesforce, Cisco, and Accenture represents a significant step forward in addressing the complex challenges posed by AI technologies. As these companies work together to enhance AI security and build trust, their efforts may well shape the future landscape of AI development and deployment across the global tech industry.
Marc Benioff, CEO of Salesforce, argues that the future of AI is in autonomous agents rather than large language models, claiming that LLMs are reaching their upper limits. He emphasizes the need for realistic expectations about AI's current capabilities.
5 Sources
At Dreamforce 2024, Salesforce introduced AgentForce, positioning it as the next evolution in AI technology. CEO Marc Benioff critiqued current AI models and emphasized the potential of AI agents to transform business operations.
7 Sources
As AI technology advances, businesses and users face challenges with accuracy and reliability. Experts suggest ways to address gaps in AI performance and human expertise to maximize AI's potential.
2 Sources
Salesforce and Microsoft are leading the charge in integrating generative AI into data visualization and business operations, transforming how companies interact with data and make decisions.
2 Sources
Generative AI is revolutionizing industries, from executive strategies to consumer products. This story explores its impact on business value, employee productivity, and the challenges in building interactive AI systems.
6 Sources
© 2025 TheOutpost.AI All rights reserved