2 Sources
[1]
Salesforce UK chief calls for tailored AI regulations for enterprise vs consumer tools By Invezz
Invezz.com - Salesforce's UK chief, Zahra Bahrololoumi, has voiced her concerns about blanket regulations on artificial intelligence (AI) in the UK, calling for a more targeted and tailored approach. As the AI landscape rapidly evolves, the Labour government's AI policy is under scrutiny for its potential impact on a wide range of companies involved in AI development, from consumer-facing tools like OpenAI's ChatGPT to enterprise-focused systems such as Salesforce's AI platforms. With the growing role of AI in industries from marketing to customer service, the debate over AI regulation is critical to balancing innovation with public safety.

Zahra Bahrololoumi, the CEO of Salesforce UK and Ireland, highlighted the need for policymakers to differentiate between AI companies developing consumer-facing products and those creating enterprise AI solutions. Salesforce's AI systems, such as the Agentforce AI platform, are designed to support businesses by automating tasks like customer service and sales operations. Unlike consumer-facing AI models, which often operate in a more flexible regulatory environment, enterprise AI tools must comply with stringent privacy and security standards.

One of the key issues raised by Bahrololoumi is the handling of sensitive data. Salesforce's "zero retention" policy ensures that customer data used in its AI processes is never stored in its systems, maintaining high standards of data privacy. This contrasts with consumer-facing AI models like ChatGPT or Anthropic's Claude, where the storage and use of data for training models remain unclear.

Data privacy remains a critical concern as AI technology becomes increasingly embedded in daily operations. Enterprise AI systems, particularly those used by businesses to handle customer interactions, must meet robust standards such as the General Data Protection Regulation (GDPR), which governs data security and privacy across the European Union and the UK. Bahrololoumi pointed out the fundamental difference between consumer and enterprise AI: companies like Salesforce operate under stricter data protection regulations, whereas consumer-facing AI products may not need to meet the same standards. While enterprise AI tools are subject to more rigorous checks, the broad application of AI regulations could inadvertently hinder the growth of AI in sectors where innovation is key.

"Targeted, proportional, and tailored" legislation is what Bahrololoumi sees as essential for the development of AI across different industries. Consumer-facing AI systems may require different guidelines than business-oriented solutions, which are typically governed by existing corporate regulations.

The Labour government has not yet introduced a specific AI bill, but AI regulation has been at the forefront of national debate. The government has expressed a desire to support the AI sector's growth while ensuring safety and fairness in its implementation across industries. Bahrololoumi's comments come as the government considers the next steps in crafting a regulatory framework for AI. A spokesperson for the UK's Department for Science, Innovation and Technology (DSIT) said that the government's AI rules would target the most powerful AI models rather than imposing blanket regulations across all AI applications. This focus could ensure that companies like Salesforce, which provide enterprise AI services, are not subject to the same regulations as firms developing consumer-facing technologies.
The ethics and safety of AI are top priorities for enterprise providers like Salesforce. Their AI systems, which automate various business functions, must ensure that data handling, privacy, and compliance with corporate guidelines remain intact. The company's Agentforce platform, for instance, allows businesses to create autonomous digital agents that can handle complex tasks like customer service and marketing without compromising data security. As AI continues to develop, the challenge for regulators will be to keep up with the rapid pace of change while ensuring that innovation is not stifled. Salesforce's call for targeted and proportionate regulations reflects a broader concern across the tech industry about the balance between safety and growth. The AI sector is poised to revolutionise industries, but it must be governed by a regulatory framework that supports innovation without compromising ethical standards or public trust.
[2]
Salesforce's UK chief urges government not to regulate all AI companies in the same way
LONDON -- The UK chief executive of Salesforce wants the Labour government to regulate artificial intelligence -- but says it's important that policymakers don't tar all technology companies developing AI systems with the same brush.

Speaking to CNBC in London, Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, said the American enterprise software giant takes all legislation "seriously." However, she added that any British proposals aimed at regulating AI should be "proportional and tailored."

Bahrololoumi noted that there's a difference between companies developing consumer-facing AI tools -- like OpenAI -- and firms like Salesforce making enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.

"What we look for is targeted, proportional, and tailored legislation," Bahrololoumi told CNBC on Wednesday. "There's definitely a difference between those organizations that are operating with consumer facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, [but] we're a B2B organization," she said.

A spokesperson for the UK's Department for Science, Innovation and Technology (DSIT) said that planned AI rules would be "highly targeted to the handful of companies developing the most powerful AI models," rather than applying "blanket rules on the use of AI." That indicates the rules might not apply to companies like Salesforce, which don't make their own foundational models as OpenAI does.

"We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy," the DSIT spokesperson added.
Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, calls for targeted AI regulations that differentiate between enterprise and consumer-facing AI tools, emphasizing the need for proportional and tailored legislation in the rapidly evolving AI landscape.
Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, has voiced concerns about potential blanket regulations on artificial intelligence (AI) in the UK, advocating for a more targeted and tailored approach to AI governance [1]. As the AI landscape rapidly evolves, the debate over regulation has become critical in balancing innovation with public safety and ethical considerations.

Bahrololoumi emphasizes the need for policymakers to differentiate between AI companies developing consumer-facing products and those creating enterprise AI solutions [1][2]. She argues that enterprise AI tools, such as Salesforce's platforms, operate under stricter data protection regulations and must comply with robust standards like the General Data Protection Regulation (GDPR) [1].

One of the key issues raised by Bahrololoumi is the handling of sensitive data. Salesforce implements a "zero retention" policy, ensuring that customer data used in its AI processes is never stored in its systems, maintaining high standards of data privacy [1]. This approach contrasts with some consumer-facing AI models, where the storage and use of data for training purposes may be less transparent [1].

Bahrololoumi advocates for "targeted, proportional, and tailored" legislation, which she sees as essential for the development of AI across different industries [1][2]. She argues that consumer-facing AI systems may require different guidelines than business-oriented solutions, which are typically governed by existing corporate regulations [1].

The UK government, through the Department for Science, Innovation and Technology (DSIT), has indicated that planned AI rules would be "highly targeted to the handful of companies developing the most powerful AI models," rather than applying blanket rules on AI use [2]. This approach suggests that companies like Salesforce, which don't develop their own foundational models, might not be subject to the same regulations as firms creating consumer-facing technologies [2].

As AI continues to develop, the challenge for regulators will be to keep pace with rapid changes while ensuring that innovation is not stifled [1]. Salesforce's call for targeted and proportionate regulations reflects a broader concern across the tech industry about striking the right balance between safety and growth [1].

The UK government has expressed a commitment to supporting the development of the AI sector while ensuring safety and fairness in its implementation across industries [1][2]. This approach aims to foster innovation and accelerate AI adoption in the economy while addressing potential risks and ethical concerns [2].
Summarized by Navi