3 Sources
[1]
Navigating EU AI law: Google Cloud's proactive approach - ExBulletin
AI governance has reached a major milestone in the European Union: the AI Act has been published in the EU Official Journal and will come into effect on August 1st. The AI Act is a legal framework that establishes obligations for AI systems based on their level of potential risk and impact. It will be phased in over the next 36 months and includes prohibitions on certain practices, general-purpose AI rules, and obligations for high-risk systems. Importantly, the Act's yet-to-be-developed AI Code of Practice will establish compliance requirements for a subset of general-purpose AI.

At Google, we believe in the potential of AI for society and know how important it is to mitigate the risks. Today, we're summarizing the following:

- How we currently support AI customers
- How we're preparing to comply with this new law
- What customers can do

How Google Cloud is currently supporting AI customers

We offer data protection. We are committed to building privacy protections into our cloud architecture. We provide meaningful transparency around our data use, including clear disclosures and commitments around access to your data and a long-standing commitment to GDPR compliance. We continue this commitment with generative AI: Google Cloud will not use customer-provided data to train models without your permission. In a standard generative AI implementation for an enterprise customer, any organizational data stored remains in the customer's cloud environment. Importantly, as outlined in the Google Cloud Platform Terms of Service and Cloud Data Processing Addendum, organizations have control over how their data is accessed, used, and processed, and we provide customers with visibility into who can access their data and why.

Additionally, customers have control over the tuning of the underlying model. They can tune a specific underlying model for a specific task without rebuilding the entire underlying model. Each tuning job creates additional adapter weights, which are learned parameters. Adapter weights are customer-specific and only available to the customer who tuned them. During inference, the underlying model receives adapter weights, executes requests, and returns results. Customers can manage the encryption of stored adapters generated during training using customer-managed encryption keys (CMEK), and can delete adapter weights at any time (a simplified sketch of this lifecycle appears below).

We invest in comprehensive risk management. Rigorous assessment is essential to the success of AI that adheres to security, privacy, and safety standards. Google Cloud's commitment to risk management was demonstrated in a recent AI readiness assessment of our efforts by Coalfire. Internally, we continue to invest in comprehensive AI risk assessments, frequently refining our risk process and taxonomy based on ongoing research into new and evolving AI risks, user feedback, red team testing results, and other engagements. The technical details and context of AI use vary by product and must be assessed separately. A key element of these analyses is consideration of the role Google Cloud and our customers each play in ensuring safe and secure deployments.

Google is leading on AI safety and responsibility. We are committed to leading responsible AI development and will continue to oversee our governance process in accordance with our long-standing AI Principles. We have identified certain areas we will not pursue, including technologies that could cause harm, violate international law or human rights, or enable surveillance that violates acceptable norms.
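To make the adapter-weight lifecycle described above concrete, here is a minimal, purely illustrative Python sketch: a customer tunes a base model, the resulting adapter weights are stored encrypted under that customer's key, and the customer can delete them at any time. All class, function, and parameter names (AdapterWeights, TunedModelRegistry, encrypt_with_cmek, and so on) are hypothetical stand-ins, not Google Cloud APIs.

```python
from dataclasses import dataclass, field

def encrypt_with_cmek(data: bytes, kms_key: str) -> bytes:
    # Stand-in for envelope encryption with a customer-managed key (CMEK).
    return b"enc:" + kms_key.encode() + b":" + data

@dataclass
class AdapterWeights:
    """Learned parameters from one tuning job, owned by exactly one customer."""
    customer_id: str
    kms_key: str       # customer-managed key protecting the stored weights
    parameters: bytes  # encrypted adapter weights

@dataclass
class TunedModelRegistry:
    """Toy model of the tuning/inference/deletion lifecycle described above."""
    base_model: str
    adapters: dict = field(default_factory=dict)

    def tune(self, customer_id: str, training_data: bytes, kms_key: str) -> str:
        # A tuning job only creates adapter weights; the base model is
        # never modified or rebuilt.
        adapter_id = f"{customer_id}-adapter-{len(self.adapters)}"
        weights = encrypt_with_cmek(b"weights-from:" + training_data, kms_key)
        self.adapters[adapter_id] = AdapterWeights(customer_id, kms_key, weights)
        return adapter_id

    def infer(self, customer_id: str, adapter_id: str, prompt: str) -> str:
        # At inference time the base model is combined with the caller's own
        # adapter weights; another customer's adapters are never visible.
        adapter = self.adapters[adapter_id]
        if adapter.customer_id != customer_id:
            raise PermissionError("adapter weights are customer-specific")
        return f"[{self.base_model} + {adapter_id}] response to: {prompt}"

    def delete_adapter(self, customer_id: str, adapter_id: str) -> None:
        # Customers can delete their adapter weights at any time.
        if self.adapters[adapter_id].customer_id != customer_id:
            raise PermissionError("adapter weights are customer-specific")
        del self.adapters[adapter_id]

registry = TunedModelRegistry(base_model="example-base-model")
adapter = registry.tune("customer-a", b"task examples", "projects/p/keys/k")
print(registry.infer("customer-a", adapter, "Summarize this contract."))
registry.delete_adapter("customer-a", adapter)
```

The point of the sketch is the isolation property: adapter weights are keyed to a single customer, so both inference and deletion check ownership before touching them.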
The values in our AI Principles are also embedded in our Google Cloud Platform Acceptable Use Policy and our Prohibited Uses for Generative AI Policy, making them transparent and communicated to our customers.

We support transparency. We firmly believe that building trust is essential for the long-term success of AI, and that starts with a commitment to transparency. Google has led the world in supporting the concept of model cards, which provide a shared understanding of model capabilities and limitations and help customers and researchers understand how we train and test our generative models. In addition to model-specific details, our paper on Approaches to Trust in Artificial Intelligence outlines how to identify, assess, and mitigate potential harmful impacts as part of an end-to-end process. We remain committed to sharing the latest research covering topics such as responsible AI, security, privacy, and anti-abuse.

We support our customers on security, copyright, and portability issues. We believe we are in the same boat as our customers and recognize that responsible AI requires an ecosystem approach. We offer enterprise protection with generative AI copyright indemnification, and we don't charge egress fees, so customers are free to choose the most responsible provider without fear of being locked into a model. We also develop responsible AI tools, enablement, and support, allowing customers to customize their own risk and safety posture for each use case and each deployment. Our Secure AI Framework (SAIF) helps Google Cloud customers assess the relevance of traditional security risks and controls and determine how they need to adapt or extend them to cover AI systems. We also aim to support customers as they establish their AI strategy by sharing guidance and best practices on topics like AI governance and AI security.

How Google Cloud is preparing for AI Act compliance

Internally, our AI Act readiness program is focused on ensuring our products and services comply with the requirements of the Act while continuing to deliver the innovative solutions our customers expect. It's a company-wide effort that involves collaboration across many teams, including:

- Law and Policy: Thoroughly analyzing the requirements of the AI Act and working to integrate them into existing policies, practices, and contracts.
- Risk and Compliance: Assessing and mitigating potential risks related to AI Act compliance and ensuring robust processes are in place.
- Product and Engineering: Ensuring that our AI systems are designed and built with the AI Act's principles of transparency, accountability, and fairness in mind, and continuously improving the user experience by incorporating the Act's requirements for testing, monitoring, and documentation.
- Customer Engagement: Working closely with customers to understand their needs and concerns regarding the AI Act, and providing guidance and support where required.

How Google Cloud customers can prepare for the AI Act

The AI Act is a complex piece of legislation, and the details of how it will be implemented are still being worked out by the European Commission and the AI Office. As the AI Office moves forward and implementation guidance continues to evolve, it is important to familiarize yourself with the requirements of the AI Act and how they apply to your current or future uses of AI.
For Google Cloud customers interested in preparing for the AI Act, we have several recommendations:

- Follow developments in the Code of Practice and the AI Office forum: Discussions will continue over the coming months to determine the compliance baseline for general-purpose AI (GPAI) models. Stay informed about how your organization is using GPAI and where your compliance obligations lie.
- Engage with European regulators and industry associations: AI legislation can help boost competitiveness, productivity, and innovation opportunities in Europe, but only if it is implemented in line with international best practices and with real use cases in mind. Engage with industry associations and European regulators to share how your company is using AI and the value and benefits it will bring to your business in the future.
- Review your AI governance practices: The AI Act sets out several requirements for AI oversight, and you should review your governance practices to ensure they meet them. It is worth assessing the risk level of your AI systems and the maturity of your overall data governance program, as both will help with your explainability and transparency efforts (a simplified sketch of the Act's risk tiers follows below).

While the EU AI Act provides a framework for AI regulation, there are still areas that require continued attention and clarification. We are committed to open dialogue and collaboration to address concerns and ensure the benefits of AI are available to all while mitigating potential risks. As we prepare, we remain focused on delivering cutting-edge AI solutions to our enterprise customers that are innovative and compliant. We have the capabilities and experience to continue partnering with policymakers and customers as new regulations, frameworks, and standards are developed.
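To help reason about the risk-based structure referenced in the governance recommendation above, here is a deliberately simplified Python sketch of the Act's broad obligation tiers. The tier summaries paraphrase the Act's categories (prohibited practices, high-risk systems, transparency duties, GPAI rules, minimal risk); the AISystem fields and the triage helper are hypothetical, and a real classification requires legal analysis, not a script.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """A deliberately simplified description of an AI deployment."""
    is_prohibited_practice: bool  # e.g. social scoring by public authorities
    is_high_risk_use_case: bool   # e.g. Annex III areas such as hiring or credit
    interacts_with_people: bool   # chatbots, synthetic media, etc.
    is_gpai_model: bool           # provider of a general-purpose AI model

def triage(system: AISystem) -> list[str]:
    """Map a system to the Act's broad obligation tiers (illustrative only)."""
    obligations = []
    if system.is_prohibited_practice:
        obligations.append("prohibited: practice may not be placed on the EU market")
    if system.is_high_risk_use_case:
        obligations.append("high-risk: risk management, data governance, logging, "
                           "human oversight, conformity assessment")
    if system.interacts_with_people:
        obligations.append("transparency: disclose AI interaction and label "
                           "synthetic content")
    if system.is_gpai_model:
        obligations.append("GPAI: technical documentation and, for systemic-risk "
                           "models, additional evaluation and reporting duties")
    if not obligations:
        obligations.append("minimal risk: voluntary codes of conduct")
    return obligations

# Example: a resume-screening assistant that candidates interact with directly.
print(triage(AISystem(False, True, True, False)))
```

In practice a single system can trigger several tiers at once, which is why the helper returns a list of obligations rather than a single label.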
[2]
What stays and what goes - ExBulletin
For the past 18 years, Thai Son Nguyen has been working to build a world-class digital transformation and e-commerce service provider through SmartOSC.

Believe it or not, we're already in the second half of 2024. That means we have six months' worth of data on technology trends so far this year. It also means we know which trends companies should bring into the second half of 2024, and which to consign to the dustbin of history. In discussions with other industry leaders and participants in my company's podcast series, several common themes have emerged that will likely shape the direction of the tech industry in the second half of 2024. It's clear that as business leaders, we have a responsibility not only to respond to changing technology trends, but also to steer the ship. So now is the time for us to act: what trends should we bring with us into the second half of 2024, and what should we leave behind?

What to leave behind:

1. Disparate, fragmented AI applications

The rapid advancements in generative AI have created incredible hype, with everyone jumping on the bandwagon and incorporating this powerful technology into their work. But what's the best way to adopt it in your organization? While we may not have fully answered this question yet, I think it's clear that using AI tools in a fragmented way, without top-down oversight or bottom-up experimentation, will hold your teams and organizations back. Marketing is especially susceptible to fragmented use of AI. If different teams use different solutions, the technology can greatly improve efficiency, but brand messaging can get lost. "Adding multiple marketers, each using different AI tools in their productions, can lead to completely fragmented output," Maria Elena Martyak, director at Doyle Blackfriars, wrote on LinkedIn. She suggested establishing guidelines for which AI tools to use and how to deploy them. On my company's podcast, Dennis Trawnitschek, Chief Technology Officer at SCBx, put it well: "AI is a team sport, and it's also really important to embrace change from the top," he said, adding, "They need to walk the talk." He also emphasized the value of a balanced approach that harmonizes both top-down and bottom-up strategies. Senior teams need to actively oversee and guide the integration of AI tools into different aspects of the business, but it's also important to embrace bottom-up involvement and foster a culture of experimentation and innovation within the organization.

2. Costly technologies

It may be tempting to splash out big bucks on a brand new, shiny system, but a high price doesn't necessarily mean high quality. Technology needs to work for your company, so make sure you understand what your needs are. Your needs will vary depending on your industry, location, company maturity, and a variety of other factors, but for almost every company, allocating a portion of your budget to priorities like cybersecurity and customer engagement will be enough. The key is to do your research and understand your unique needs. For example, do you need technology that can scale quickly, or that suits your organization's IT talent level? Whatever your needs are, they should be a bigger factor in your purchasing decision than price.

3. Monolithic, all-in-one systems

One-size-fits-all solutions may seem convenient on the surface, but they struggle to keep up with the rapidly evolving needs of modern businesses. Enterprise requirements are multifaceted and unique, so many businesses require a more customized approach.
Modular, purpose-built solutions enable the integration of components that increase agility and adaptability and drive innovation. A strategic imperative for your organization is to foster an ecosystem that prioritizes continuous innovation and flexibility. Consider specialized solutions that can help you navigate the complexities of your unique business environment while staying competitive in an ever-changing marketplace.

What to bring:

1. No-code and low-code

The rise of generative AI tools like ChatGPT and Claude is disrupting traditional notions of coding. These advanced language models allow anyone who can write to code with some proficiency, making the technology stack accessible to a wider range of team members. This trend coincides with the growing popularity of no-code and low-code development platforms, which aim to democratize complex technologies like AI, machine learning, and the Internet of Things by providing user-friendly interfaces and drag-and-drop tools. Traditionally, developing applications that leverage these advanced technologies required extensive programming knowledge, but no-code and low-code platforms enable faster prototyping and faster deployment of solutions, even for non-technical users. The potential of this trend is further highlighted by Gartner, which has predicted that 70% of new business applications will use low-code or no-code technologies by 2025. Leveraging these technologies will also give middle managers deeper insight into their teams' work, enabling them to better assess and support their direct reports.

2. Redesigned products and services

In addition to the democratization of technology, another key trend is growing consumer demand for authenticity and real-life experiences. Research by Sitecore and Advanis revealed that 85% of consumers want brands to showcase "real life" experiences rather than "perfect life" experiences, reflecting a shift away from what is perceived as artificial. This trend calls for a reinvention of products and services with an emphasis on quality, craftsmanship, and storytelling that resonates with consumers looking to connect with meaningful brands. Embracing authenticity allows companies to differentiate themselves and create products that cannot be easily imitated. Ultimately, it fosters exclusivity and value that resonates with customers.

As we move into the second half of 2024, it will be important to make strategic decisions about which technology trends to embrace and which to leave behind. Saying goodbye to disparate AI applications, costly technologies, and monolithic systems will be a smart move for many business leaders. Meanwhile, I believe no-code and low-code platforms and redesigned, authenticity-driven products and services will see growing adoption in the near future. Making informed choices will help companies remain agile, innovative, and competitive in an ever-evolving technology landscape.

Forbes Business Council is the leading growth and networking organization for business owners and leaders. Do I qualify?
[3]
Cloud CISO Perspective: How to think about your security budget - ExBulletin
CISOs can work with the board and executive risk committee to help their organizations redefine risk (there is plenty of guidance on how to begin and mature these conversations). This approach helps alleviate increased demand pressures by focusing on critical business assets and services. CISOs can also facilitate conversations about mitigating risk by phasing out certain business services, products, vendors, or even entire classes of technology.

Improving resource efficiency is a highly effective technique for getting more out of the resources you already have. For organizations that still use on-premises technology, this means moving to cloud-based systems with strong security designs and defaults. Approaches that improve employee training, adopt more modern tools, and move to automation and orchestration tools also help.

Leaders can also accept that supply-side deficits will occur, but this comes with its own risk calculations and risk management techniques. Operating this way requires executive buy-in, and accepted risks need to be documented (and revisited at least annually), but this approach has worked well for some organizations.

It's an interesting time for cloud development. Generative AI is motivating organizations to rethink their approach to technology and security. There is a huge opportunity to change how you approach security and how you build secure products, which in turn invites a rethinking of how you approach security budgets. For more leadership guidance from Google Cloud experts, check out our CISO Insights hub and reach out to the Office of the CISO.
Meta's AI chatbot Galactica was taken offline just three days after its launch due to concerns over the accuracy of information it provided. The incident highlights the ongoing challenges in developing reliable AI language models.
Meta, the parent company of Facebook, recently launched an AI chatbot called Galactica, which was designed to assist researchers and students in finding scientific information. However, the chatbot was taken offline after just three days due to significant concerns about the accuracy of the information it provided [1].

Galactica was trained on a vast amount of scientific data, including over 48 million scientific papers, websites, textbooks, and lecture notes. The AI was capable of answering questions, solving math problems, and even writing scientific articles. Despite these impressive capabilities, users quickly discovered that the chatbot could generate plausible-sounding but entirely false information [2].

Several AI experts and researchers voiced their concerns about Galactica's potential to spread misinformation. Dan Hendrycks, an AI researcher at the University of California, Berkeley, pointed out that the chatbot could be used to generate fake scientific papers. Michael Black, director at the Max Planck Institute for Intelligent Systems, demonstrated how Galactica could produce convincing but entirely fabricated research [1].

In response to the growing criticism, Meta decided to take Galactica offline. Yann LeCun, Meta's Chief AI Scientist, acknowledged the concerns but also expressed disappointment, stating that the AI model was "pretty good" for its intended purpose of assisting with scientific writing [3].

The Galactica incident highlights the ongoing challenges in developing reliable AI language models. It raises questions about the potential risks of deploying such systems without adequate safeguards against misinformation. This event follows similar controversies surrounding other AI chatbots, such as Microsoft's Tay, which was shut down in 2016 after generating offensive content [2].

Despite the setback, Meta remains committed to developing AI tools for scientific research. The company emphasized that Galactica was an experiment and that they would continue to refine their approach. This incident serves as a reminder of the importance of rigorous testing and ethical considerations in AI development, particularly when dealing with sensitive areas like scientific information [3].

The Galactica controversy has sparked a broader debate about the role of AI in scientific research and information dissemination. While some researchers see potential in AI-assisted scientific writing, others warn of the dangers of relying too heavily on AI-generated content without proper verification [1].
Summarized by Navi