2 Sources
[1]
Why neglecting AI ethics is such risky business - and how to do AI right
Nearly 80 years ago, in July 1945, MH Hasham Premji founded Western India Vegetable Products Limited in Amalner, a town in the Jalgaon district of Maharashtra, India, located on the banks of the Bori River. The company began as a manufacturer of cooking oils. In the 1970s, the company pivoted to IT and changed its name to Wipro. Over the years, it has grown to become one of India's biggest tech companies, with operations in 167 countries, nearly a quarter of a million employees, and revenue north of $10 billion. The company is led by executive chairman Rishad Premji, grandson of the original founder. Today, Wipro describes itself as a "leading global end-to-end IT transformation, consulting, and business process services provider."

In this exclusive interview, ZDNET spoke with Kiran Minnasandram, VP and CTO of Wipro FullStride Cloud. He spearheads strategic technological initiatives and leads the development of future-looking solutions. His primary role is to drive innovation and empower organizations by providing them with state-of-the-art solutions. With a focus on cloud computing, he architects and implements advanced cloud-based architectures that transform how businesses operate, while optimizing operations, enhancing scalability, and fostering flexibility to propel clients forward on their digital journeys.

As you might imagine, AI has become a big focus for the company. In this interview, we had the opportunity to discuss the importance of AI ethics and sustainability as they pertain to the future of IT. Let's dig in.

ZDNET: How do you define ethical AI, and why is it critical for businesses today?

Kiran Minnasandram: Ethical AI not only complies with the law but also aligns with the values we hold dear at Wipro. Everything we do is rooted in four pillars.
AI must be aligned with our values around the individual (privacy and dignity), society (fairness, transparency, and human agency), and the environment. The fourth pillar is technical robustness, which encompasses legal compliance and safety.

ZDNET: Why do many businesses struggle with AI ethics, and what are the key risks they should address?

KM: The struggle often comes from the lack of a common vocabulary around AI. This is why the first step is to set up a cross-organizational strategy that brings together technical teams as well as legal and HR teams. AI is transformational and requires a corporate approach. Second, organizations need to understand what the key tenets of their AI approach are. This goes beyond the law and encompasses the values they want to uphold. Third, they can develop a risk taxonomy based on the risks they foresee. Risks are based on legal alignment, security, and the impact on the workforce.

ZDNET: How does AI adoption impact corporate sustainability goals, both positively and negatively?

KM: AI adoption has, and will continue to have, a significant impact on corporate sustainability goals. On the positive side, AI can enhance operational efficiency by optimizing supply chains and improving resource management through more precise monitoring of energy and carbon consumption, as well as improving data collection processes for regulatory reporting. For example, AI can be used by manufacturing or logistics companies to optimize transportation routes, leading to reduced carbon emissions. Conversely, rapid development and deployment of AI is resulting in increased energy consumption and carbon emissions, as well as substantial water usage for cooling data centers. Training large AI models demands significant computational power, resulting in a larger carbon footprint.
ZDNET: How should enterprises balance the drive for AI innovation with environmental responsibility?

KM: As a starting point, enterprises will need to establish clear policies, principles, and guidelines on the sustainable use of AI. This creates a baseline for decisions around AI innovation and enables teams to make the right choices around the type of AI infrastructure, models, and algorithms they will adopt. Additionally, enterprises need to establish systems to effectively track, measure, and monitor environmental impact from AI usage and demand this from their service providers. We have worked with clients to evaluate current AI policies, engage internal and external stakeholders, and develop new principles around AI and the environment before training and educating employees across several functions to embed this thinking in everyday processes.

By creating more transparency and accountability, companies can drive meaningful AI innovation while being cognizant of their environmental commitments. There are a significant number of cross-industry and cross-stakeholder groups being set up to support enterprises with exploring the environmental dilemmas, measurement requirements, and impact associated with AI innovation. With an incredibly fast-moving agenda, learning from others and collaborating on a global stage is critical. Wipro has led various collaborative global efforts on AI and the environment alongside our clients. We are well-placed to help our clients navigate the regulatory landscape.

ZDNET: How are global regulations evolving to address ethical AI and sustainability concerns?

KM: AI has never existed in isolation. Privacy, consumer protection, security, and human rights legislation all apply to AI. In fact, data protection regulators play a key role in safeguarding individuals from the harms of AI.
Consumer protection plays a key role when it comes to algorithmic pricing, for example, and non-discrimination legislation can support cases of algorithmic discrimination. It is very important for organizations to understand how existing legislation applies to AI and upskill the workforce on how to embed legal protection, privacy, and security into the adoption of AI. In addition to existing legislation, some AI-specific laws are being enacted. In Europe, the EU AI Act governs the placing of AI products on the market. The riskier the product, the more controls need to be wrapped around it. In the US, individual states are legislating around AI, especially in the context of labor management, which is arguably one of the most complex areas of AI deployment.

ZDNET: What are the biggest misconceptions about AI ethics and sustainability, and how can businesses overcome them?

KM: The biggest misconception is that it is challenging to bring innovation and responsibility together. The reality is that responsible AI is the key to unlocking AI progress, as it provides long-term sustainable innovation. Ultimately, companies and consumers will choose the products they trust. So, trust is the cornerstone of AI deployment. Companies that bring together innovation and trust are going to have a competitive edge.

ZDNET: How does Wipro FullStride Cloud support companies in aligning AI with ESG (environmental, social, and governance) goals?

KM: We start by developing responsible AI frameworks that ensure fairness, transparency, and accountability within the AI models. We also leverage AI to track and report ESG metrics, as well as Green AI initiatives such as tools to measure and reduce AI's carbon footprint.
On the infrastructure side, we work with clients to optimize workloads and make energy-efficient use of data centers. We also work on industry-specific AI solutions for sectors like healthcare, finance, and manufacturing to meet ESG goals.

ZDNET: What are the most effective ways cloud solutions can reduce AI's environmental footprint?

KM: Cloud solutions can support energy-efficient data centers by using renewables, optimizing cooling, and incorporating carbon-aware computing. AI model optimization is also possible through less energy-intensive techniques such as federated learning and model pruning. You can align resources more closely with demand by using serverless and auto-scaling solutions to avoid over-provisioning. Cloud providers now offer carbon tracking and reporting dashboards, allowing you to measure and optimize your footprint. With multi-cloud and edge computing, you can further reduce data movement and process AI closer to the source.

ZDNET: How can cloud infrastructure be leveraged to embed ethical considerations into AI development?

KM: Cloud infrastructure offers powerful tools to help embed ethical considerations into AI development. Built-in AI ethics toolkits can support bias detection and fairness testing by identifying imbalances in training data and models. Cloud platforms also offer diversity-aware training tools to help ensure datasets are representative and inclusive, which is critical for developing responsible AI systems. You can also take advantage of cloud-based AI frameworks that offer explainability and transparency features to better understand how models make decisions.
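The model pruning mentioned above can be illustrated with a minimal sketch: magnitude pruning zeroes out the smallest-magnitude weights in a layer, so inference needs less compute and memory. This is an illustrative NumPy example, not any vendor's implementation; production systems typically use the structured pruning support built into frameworks such as PyTorch or TensorFlow.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))           # a dense weight matrix
pruned = magnitude_prune(w, sparsity=0.9) # keep only the largest 10% of weights
print(f"zeroed: {np.mean(pruned == 0):.0%}")
```

In practice the pruned model is usually fine-tuned briefly to recover accuracy, and sparse storage formats are needed to actually realize the memory savings.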
Secure and privacy-preserving AI development is supported through capabilities like differential privacy and encrypted processing, enabling responsible data handling from end to end. Cloud services can further support ethical AI through automated compliance monitoring, helping ensure adherence to regulations such as GDPR and CCPA. Tools for model drift testing and hallucination detection are also available, making it easier to continuously monitor model performance and flag inaccurate or unreliable outputs over time.

ZDNET: Why do some organizations struggle to measure AI's sustainability impact, and how can cloud-based tools help?

KM: Many organizations struggle to measure AI's sustainability impact due to the absence of standard metrics. Without a universal framework to quantify environmental effects, it becomes difficult to benchmark progress or compare across initiatives. Cloud-based tools can help bridge this gap by offering customizable dashboards and models that track carbon output across the AI lifecycle, from development through deployment.

Real-time monitoring presents another challenge, as energy consumption associated with AI workloads can fluctuate significantly. Static reporting methods often miss these variations. Cloud platforms can offer dynamic, real-time tracking tools that adjust to shifting workloads and provide a more accurate view of energy usage. Additionally, fragmented data visibility across cloud, on-premises, and edge environments complicates sustainability assessments. Cloud-native solutions can aggregate data from multiple sources into a single view, improving transparency and decision-making. Some of AI's environmental costs remain hidden. These extend beyond training to inference, storage, and compute scaling. Cloud tools can surface these lesser-known impacts by analyzing end-to-end usage patterns.
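The model drift testing described above can be approximated with a simple statistic. One common baseline is the Population Stability Index (PSI), which compares the distribution of a feature or model score at training time against live traffic. This sketch uses NumPy with illustrative data and thresholds, not any specific cloud provider's tooling.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a live sample; values above
    ~0.2 are commonly read as significant drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip live values into the reference range so edge bins catch outliers.
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    eps = 1e-6  # avoid log(0) for empty bins
    e_frac = np.clip(e_frac, eps, None)
    a_frac = np.clip(a_frac, eps, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 10_000)  # training-time score distribution
shifted = rng.normal(1.0, 1.0, 10_000)    # live data with a mean shift
print(round(population_stability_index(reference, reference[:5_000]), 3))  # near 0
print(round(population_stability_index(reference, shifted), 3))            # well above 0.2
```

A monitoring job would compute this on a schedule and alert when the index crosses the chosen threshold.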
Regulatory and compliance gaps also add complexity, especially as ESG (environmental, social, and governance) reporting requirements vary by region. Cloud services can help manage this by automating region-specific compliance tracking. Finally, cloud-based analytics can assist in navigating the trade-offs between cost, model performance, and sustainability, offering insights that support more balanced, responsible AI development.

ZDNET: What concrete steps can organizations take to improve AI transparency and accountability?

KM: First, train the workforce to use AI responsibly. Encourage the workforce to deploy AI within a safe space by querying and interrogating it. Second, set up a governance structure for AI, encompassing all aspects of the business, from procurement to HR, CISO, and risk management.

ZDNET: How does AI bias emerge, and what role do cloud-based frameworks play in mitigating it?

KM: Bias in AI can come from several sources, including training data that are unrepresentative or contain historical prejudices, as well as errors and inconsistencies in human-labeled datasets. If trained on poor data, AI decisions may be skewed based on cultural, corporate, or societal ethical frameworks, leading to inconsistent outcomes. Legacy AI models trained on outdated assumptions and historical data may continue to propagate past biases. AI may also struggle with diverse dialects, regional contexts, or cultural nuances. Cloud-based frameworks can help mitigate this by monitoring compliance with diverse regional regulations and ensuring fair AI model development through validation across diverse economic, social, and demographic groups. Cloud-based adaptive training processes can also rebalance datasets to prevent power-dynamic biases.
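Dataset rebalancing of the kind described above can be as simple as inverse-frequency reweighting, where under-represented groups receive proportionally larger training weights so each group contributes equally to the loss. A minimal illustrative sketch; the group names are hypothetical, and real fairness tooling is considerably more involved.

```python
from collections import Counter

def inverse_frequency_weights(groups: list[str]) -> dict[str, float]:
    """Weight each group inversely to its frequency so that every
    group's total weight in the training loss is equal (n / k)."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return {g: n / (k * c) for g, c in counts.items()}

# Example: a dataset where one demographic group dominates.
labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
weights = inverse_frequency_weights(labels)
print(weights)
# group_a samples get a small per-sample weight, group_c samples a large
# one; each group's weights sum to the same total.
```

These per-sample weights would then be passed to the training loss (most frameworks accept a sample-weight argument) or used to resample the dataset.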
ZDNET: What governance strategies should enterprises implement to ensure responsible AI usage?

KM: The most important thing is to have a governance framework. Some organizations may have a separate AI governance structure, while others (like ours) have embedded it within our existing governance construct. It is very important to involve every corner of the organization. AI impact assessments are useful tools to embed legal protection, privacy, and robustness in the deployment of AI from the inception stage.

What do you think about the growing emphasis on ethical and sustainable AI? Has your organization implemented any frameworks or policies to ensure responsible AI development? How are you approaching the environmental impact of AI workloads, and are you using any cloud-based tools to help measure or reduce that footprint? Do you think global regulations are keeping pace with AI innovation, or are companies being left to navigate the gray areas on their own? Let us know in the comments below.
[2]
The AI Power Play: How ChatGPT, Gemini, Claude, And Others Are Shaping The Future Of Artificial Intelligence
Artificial intelligence (AI) has seen rapid growth, transforming industries and daily life. From chatbots to advanced generative models, AI's capabilities continue to expand, driven by powerful companies investing heavily in research and development. "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," wrote Bill Gates in 2023. "It will change the way people work, learn, travel, get health care, and communicate with each other." In 2025, companies such as OpenAI, Google, Anthropic, and emerging challengers like DeepSeek have pushed the boundaries of what large language models (LLMs) can do. Moreover, corporate solutions from Microsoft and Meta are making AI tools more accessible to enterprises and developers alike. This article explores the latest AI models available to the public, their advantages and drawbacks, and how they compare in the competitive AI landscape.

The Power and Performance of AI Models

AI models rely on extensive computational resources, particularly large language models (LLMs) that require vast datasets and processing power. The leading AI models undergo complex training procedures that involve billions of parameters, consuming significant energy and infrastructure. Key AI players invest in cutting-edge hardware and optimization strategies to improve efficiency while maintaining high performance. The balance between computational power, speed, and affordability is a significant factor in differentiating these AI models.

OpenAI's ChatGPT

ChatGPT, developed by OpenAI, is one of the most recognizable and widely used AI models in the world. Built with a dialogue-driven format, ChatGPT is designed to answer follow-up questions, challenge incorrect premises, admit mistakes, and reject inappropriate requests. Its versatility has made it a leading AI tool for both casual and professional use, spanning industries such as customer service, content creation, programming, and research.
ChatGPT is ideal for a wide range of users, including writers, business professionals, educators, developers, and researchers. Its free-tier accessibility makes it an excellent starting point for casual users, while businesses, content creators, and developers can leverage its advanced models for enhanced productivity and automation. It is also among the most user-friendly AI models available, featuring a clean interface, intuitive responses, and seamless interaction across devices. However, organizations that require custom AI models or stricter data privacy controls may find its closed-source nature restrictive, particularly compared to open-source alternatives like Meta's LLaMA.

The latest version, GPT-4o, is available for free-tier users and offers a strong balance of speed, reasoning, and text generation capabilities. For users seeking enhanced performance, ChatGPT Plus provides priority access and faster response times at a monthly subscription cost. For professionals and businesses requiring more robust capabilities, ChatGPT Pro unlocks advanced reasoning features through the o1 pro mode, which includes enhanced voice functionality and improved performance on complex queries. Developers looking to integrate ChatGPT into applications can access its API, a type of software interface. Pricing starts at approximately $0.15 per million input tokens and $0.60 per million output tokens for GPT-4o mini, while the more powerful o1 models come at a higher cost. A token is a fundamental unit of data, like a word or subword, that an AI model processes to understand and generate text.

One of ChatGPT's greatest strengths is its versatility and conversational memory. It can handle a broad range of tasks, from casual conversation and creative writing to technical problem-solving, coding assistance, and business automation. When memory is enabled, ChatGPT can retain context across interactions, allowing for a more personalized user experience.
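At the per-token rates quoted above for GPT-4o mini ($0.15 per million input tokens, $0.60 per million output tokens), estimating the cost of a call is simple arithmetic; the prompt and reply sizes below are made up for illustration.

```python
# Estimate API cost from the per-million-token rates quoted above
# (GPT-4o mini: $0.15 / 1M input tokens, $0.60 / 1M output tokens).
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt producing a 500-token reply.
print(f"${request_cost(2_000, 500):.6f}")  # $0.000600
```

At these rates, even a million such calls would cost on the order of a few hundred dollars, which is why per-token pricing matters mostly at scale.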
Another key advantage is its proven user base -- with hundreds of millions of users worldwide, ChatGPT has undergone continuous refinement based on real-world feedback, improving its accuracy and usability. Additionally, GPT-4o's multimodal capabilities allow it to process text, images, audio, and video, making it a comprehensive AI tool for content creation, analysis, and customer engagement. While a free version exists, the most powerful features require paid subscriptions, which may limit accessibility for smaller businesses, independent developers, and startups. Another drawback is an occasional lag in real-time updates; even though ChatGPT has web-browsing capabilities, it may struggle with the most recent or fast-changing information. Lastly, its proprietary model means users have limited control over modifications or customization, as they must adhere to OpenAI's data policies and content restrictions.

Google's Gemini

Google's Gemini series is renowned for its multimodal capabilities and its ability to handle extensive context, making it a versatile tool for both personal and enterprise-level applications. General consumers and productivity users benefit from Gemini's deep integration with Google Search, Gmail, Docs, and Assistant, making it an excellent tool for research, email drafting, and task automation. Business and enterprise users find value in Gemini's integration with Google Workspace, enhancing collaboration across Drive, Sheets, and Meet. Developers and AI researchers can leverage its capabilities through Google Cloud and Vertex AI, making it a strong choice for building AI applications and custom models. Creative professionals can take advantage of its multimodal abilities, working with text, images, and video. Meanwhile, students and educators benefit from Gemini's ability to summarize, explain concepts, and assist with research, making it a powerful academic tool.
Google Gemini is highly accessible, especially for those already familiar with Google services. Its seamless integration across Google's ecosystem allows for effortless adoption in both personal and business applications. Casual users will find it intuitive, with real-time search enhancements and natural interactions that require little to no learning curve. Developers and AI researchers can unlock advanced customization through API access and cloud-based features, though utilizing these tools effectively may require technical expertise.

The current versions, Gemini 1.5 Flash and Pro, cater to different needs, with Flash offering a cost-efficient, distilled option and Pro providing higher performance. Meanwhile, the Gemini 2.0 series, designed primarily for enterprise use, includes experimental models like Gemini 2.0 Flash with enhanced speed and multimodal live APIs, as well as the more powerful Gemini 2.0 Pro. Basic access to Gemini is often free or available through Google Cloud's Vertex AI. Advanced usage, especially when integrated into enterprise solutions, was introduced at $19.99 to $25 per month per user, with pricing adjusted to reflect added features like a 1-million-token context window.

Gemini's main advantage over other AIs is that it excels in processing text, images, audio, and video simultaneously, making it a standout in multimodal mastery. It also integrates seamlessly with Google Workspace, Gmail, and Android devices, making it a natural fit for users already in the Google ecosystem. Additionally, it offers competitive pricing for developers and enterprises needing robust capabilities, especially in extended context handling. However, Gemini's performance can be inconsistent, particularly with rare languages or specialized queries. Some advanced versions may be limited by safety testing, delaying wider access.
Furthermore, its deep integration with Google's ecosystem can be a barrier for users outside that environment, making adoption more challenging.

Anthropic's Claude

Anthropic's Claude is known for its emphasis on safety, natural conversational flow, and long-form contextual understanding. It is particularly well-suited for users who prioritize ethical AI usage and structured collaboration in their workflows. Researchers and academics who need long-form contextual retention and minimal hallucinations, as well as writers and content creators who benefit from its structured approach and accuracy, will find Claude an essential and beneficial AI assistant. Business professionals and teams can leverage Claude's "Projects" feature for task and document management, while educators and students will find its safety guardrails and clear responses ideal for learning support. Claude is highly accessible for those seeking a structured, ethical AI with strong contextual understanding; it is moderately suitable for creative users, who may find its restrictive filters limiting, and less ideal for those needing unrestricted, fast brainstorming tools or AI-generated content with minimal moderation.

Claude 3.5 Sonnet is the flagship model, offering enhanced reasoning, speed, and contextual understanding for both individual and enterprise users. For businesses and teams, the Claude Team and Enterprise Plans start at approximately $25 per user per month (billed annually), providing advanced collaboration features. Individual users can access Claude Pro, a premium plan that costs around $20 per month, offering expanded capabilities and priority access. A limited free tier is also available, allowing general users to explore basic features and test its functionality.
Unlike most AIs, Claude excels in ethical AI safety, extended conversational memory, and structured project management, making it ideal for users who require reliable and well-moderated AI assistance. Its intuitive interface and organization tools enhance productivity for writers, researchers, educators, and business professionals. However, there are instances when availability constraints during peak hours can disrupt workflow efficiency. Claude's strict safety filters, while preventing harmful content, sometimes limit creative flexibility, making it less suitable for highly experimental or unrestricted brainstorming sessions. Additionally, enterprise costs may be high for large-scale teams with extensive AI usage.

DeepSeek AI

DeepSeek, a newcomer from China, has quickly gained attention for its cost efficiency and open-access philosophy. Unlike many established AI models, DeepSeek focuses on providing affordable AI access while maintaining strong reasoning capabilities, making it an appealing option for businesses and individual users alike. "DeepSeek R1 is one of the most amazing and impressive breakthroughs I've ever seen -- and as open source, a profound gift to the world," said Marc Andreessen, former software engineer and co-founder of Netscape.

An excellent choice for cost-conscious businesses, independent developers, and researchers who need a powerful yet affordable AI solution, DeepSeek is particularly suitable for startups, academic institutions, and enterprises that require strong reasoning and problem-solving capabilities without high operational costs. It is highly accessible for individuals due to its free web-based model, and developers and enterprises benefit from its low-cost API. However, organizations requiring politically neutral AI models or strict privacy assurances may find it less suitable, especially in industries where data security and regulatory compliance are paramount.
The latest model, DeepSeek-R1, is designed for advanced reasoning tasks and is accessible through both an API and a chat interface. An earlier version, DeepSeek-V3, serves as the architectural foundation for the current releases, offering an extended context window of up to 128,000 tokens while being optimized for efficiency. DeepSeek is free for individual users through its web interface, making it one of the most accessible AI models available. For business applications, API usage comes at a significantly lower cost than that of U.S. competitors, making it an attractive option for enterprises looking to reduce expenses. Reports indicate that DeepSeek's training costs are drastically lower, with estimates suggesting it was trained for approximately $6 million, a fraction of the training expenses of competitors, which can run into the tens or hundreds of millions.

One of DeepSeek's biggest strengths is its cost efficiency. It allows businesses and developers to access powerful AI without the financial burden associated with models like OpenAI's GPT-4 or Anthropic's Claude. Its open-source approach further enhances its appeal, as it provides model weights and technical documentation under open licenses, encouraging transparency and community-driven improvements. Additionally, its strong reasoning capabilities have been benchmarked against leading AI models, with DeepSeek-R1 rivaling OpenAI's top-tier models in specific problem-solving tasks. As Anthropic co-founder Jack Clark wrote in his "Import AI" newsletter, "R1 is significant because it broadly matches OpenAI's o1 model on a range of reasoning tasks and challenges the notion that Western AI companies hold a significant lead over Chinese ones."

A notable problem with DeepSeek is its response latency, especially during periods of high demand, which makes it less ideal for real-time applications where speed is crucial. Censorship and bias are also potential concerns.
DeepSeek aligns with local content regulations, meaning it may sanitize or avoid politically sensitive topics, which could limit its appeal in global markets. Additionally, some users have raised privacy concerns due to its Chinese ownership, questioning whether its data policies are as stringent as those of Western AI companies that comply with strict international privacy standards.

Microsoft's Copilot

Microsoft's Copilot is a productivity-focused AI assistant designed to enhance workplace efficiency through seamless integration with the Microsoft 365 suite. By embedding AI-powered automation directly into tools like Word, Excel, PowerPoint, Outlook, and Teams, Copilot serves as an intelligent assistant that streamlines workflows, automates repetitive tasks, and enhances document generation. Ideal for businesses, enterprise teams, and professionals who rely heavily on Microsoft 365 applications for their daily operations, Copilot is particularly beneficial for corporate professionals, financial analysts, project managers, and administrative staff who need AI-powered assistance to enhance productivity and reduce time spent on routine tasks. However, organizations that prefer open-source AI models or require flexible, cross-platform compatibility may find Copilot less suitable, especially if they rely on non-Microsoft software ecosystems for their workflows.

Microsoft 365 Copilot is available across Microsoft's core productivity applications, providing AI-powered assistance for document creation, email drafting, data analysis, and meeting summarization. The service costs approximately $30 per user per month and typically requires an annual subscription. However, pricing can vary based on region and enterprise agreements, with some organizations receiving customized pricing based on their licensing structure. One of Copilot's most significant advantages is its deep ecosystem integration within Microsoft 365.
For businesses and professionals already using Microsoft Office, Copilot enhances workflows by embedding AI-driven suggestions and automation directly within familiar applications. Its task automation capabilities are another significant benefit, helping users generate reports, summarize meetings, draft emails, and analyze data more efficiently. Furthermore, Copilot receives continuous updates backed by Microsoft's substantial investments in AI and cloud computing, ensuring regular improvements in performance, accuracy, and feature expansion. In contrast, one of the significant drawbacks of Microsoft's Copilot is its ecosystem lock-in -- Copilot is tightly coupled with Microsoft 365, meaning its full potential is only realized by organizations already invested in Microsoft's software ecosystem. Limited flexibility is another concern, as it lacks the extensive third-party integrations found in more open AI platforms, making customization difficult for businesses that rely on a broader range of tools. Additionally, some users report occasional response inconsistencies, where Copilot may lose context in long sessions or provide overly generic responses, requiring manual refinement.

Meta AI

Meta's suite of AI tools, built on its open-weight LLaMA models, is versatile and research-friendly, designed for both general use and specialized applications. Meta's approach prioritizes open-source development, accessibility, and integration with its social media platforms, making it a unique player in the AI landscape. It is ideal for developers, researchers, and AI enthusiasts who want free, open-source models that they can customize and fine-tune. It is also well-suited for businesses and brands leveraging Meta's social platforms, as its AI can enhance customer interactions and content creation within apps like Instagram and WhatsApp. Meta AI is highly accessible for developers and researchers due to its open-source availability and flexibility.
However, businesses and casual users may find it less intuitive compared to AI models with more refined user-facing tools. Additionally, companies needing strong content moderation and regulatory compliance may prefer more tightly controlled AI systems from competitors like Microsoft or Anthropic. Meta AI operates on a range of LLaMA models, including LLaMA 2 and LLaMA 3, which serve as the foundation for various applications. Specialized versions, such as Code Llama, are tailored for coding tasks, offering developers AI-powered assistance in programming. One of Meta AI's standout features is its open-source licensing, which makes many of its tools free for research and commercial use. However, enterprise users may encounter service-level agreements (SLAs) or indirect costs, especially when integrating Meta's AI with proprietary systems or platform partnerships. Meta AI's biggest advantage is its open-source and customizable nature, allowing developers to fine-tune models for specific use cases. This fosters greater innovation, flexibility, and transparency compared to closed AI systems. Additionally, Meta AI is embedded within popular social media platforms like Facebook, Instagram, and WhatsApp, giving it massive consumer reach and real-time interactive capabilities. Meta also provides specialized AI models, such as Code Llama for programming, catering to niche technical applications. Despite its powerful underlying technology, Meta AI's user interfaces and responsiveness can sometimes feel less polished than those of competitors like OpenAI and Microsoft. Additionally, Meta has faced controversies regarding content moderation and bias, raising concerns about AI-generated misinformation and regulatory scrutiny. Another challenge is ecosystem fragmentation; with multiple AI models and branding under Meta, navigating the differences between Meta AI, LLaMA, and other offerings can be confusing for both developers and general users.
AI's Impact on the Future of Technology

As AI adoption grows, the energy demand for training and operating these models increases. Companies are developing more efficient AI models while managing infrastructure costs. Modern AI models, particularly those known as large language models (LLMs), are powerhouses that demand vast computational resources. Training these models involves running billions of calculations across highly specialized hardware over days, weeks, or even months. The process is analogous to running an industrial factory non-stop -- a feat that requires a tremendous amount of energy. The rise of AI assistants, automation, and multimodal capabilities will further shape industries, from customer support to content creation. "The worst thing you can do is have machines wasting power by being always on," said James Coomer, senior vice president for products at DDN, a California-based software development firm, during the 2023 AI conference ai-PULSE. AI competition will likely drive further advancements, leading to smarter, more accessible, and environmentally conscious AI solutions. However, challenges related to cost, data privacy, and ethical considerations will continue to shape the development of AI.

Sustainable AI and the Future

AI companies are actively addressing concerns about energy consumption and sustainability by optimizing their models to enhance efficiency while minimizing power usage. One key approach is leveraging renewable energy sources, such as solar and wind power, to supply data centers, which significantly reduces their carbon footprint. Additionally, advancements in hardware are being developed to support more energy-efficient AI computation, enabling systems to perform complex tasks with lower energy demands. These innovations not only help reduce environmental impact but also contribute to long-term cost savings for AI companies.
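The scale of training described above can be made concrete with a standard back-of-envelope estimate: transformer training compute is commonly approximated as 6 × parameters × tokens floating-point operations. Every number in the sketch below (model size, token count, GPU throughput, power draw, datacenter overhead) is an illustrative assumption, not a figure reported in this article:

```python
def training_energy_kwh(params, tokens, gpu_flops=2e14, gpu_watts=700, pue=1.2):
    """Rough energy estimate for one LLM training run.

    Uses the common ~6*N*D FLOPs approximation for transformer
    training; gpu_flops is an assumed sustained throughput per GPU,
    and pue is an assumed datacenter power-usage-effectiveness.
    """
    total_flops = 6 * params * tokens       # approximate training compute
    gpu_seconds = total_flops / gpu_flops   # aggregate GPU time needed
    gpu_hours = gpu_seconds / 3600
    it_kwh = gpu_hours * gpu_watts / 1000   # GPU electricity alone
    return gpu_hours, it_kwh * pue          # add datacenter overhead


# Illustrative 70B-parameter model trained on 2 trillion tokens:
hours, kwh = training_energy_kwh(70e9, 2e12)
# on the order of a million GPU-hours and roughly 1 GWh of electricity
```

Even with optimistic assumptions, the estimate lands at the "industrial factory" scale the text describes, which is why efficiency gains in models, hardware, and cooling all translate directly into energy savings.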
Beyond technological improvements, regulatory policies are being introduced to ensure AI growth aligns with environmental sustainability. Governments and industry leaders need to work together to establish guidelines that encourage responsible energy consumption while promoting research into eco-friendly AI solutions. However, the fear of governmental regulation often makes technology leaders hesitant to collaborate. One voice at the forefront of global AI governance is Amandeep Singh Gill, the United Nations Secretary-General's envoy on technology, who emphasizes the importance of collaborative governance in AI development -- and sustainable development needs to be part of this cooperation and coordination. "[W]e have to find ways to engage with those who are in the know," he said in a September 2024 interview in Time. "Often, there's a gap between technology developers and regulators, particularly when the private sector is in the lead. When it comes to diplomats and civil servants and leaders and ministers, there's a further gap. How can you involve different stakeholders, the private sector in particular, in a way that influences action? You need to have a shared understanding." No matter the level of collaboration between the private and public sectors, companies need to aggressively explore mitigation methods like carbon offset programs and energy-efficient algorithms to further reduce their environmental impact. By integrating these strategies, the AI industry is making strides toward a more sustainable future without compromising innovation and progress.

Balancing Innovation and Responsibility

AI is advancing rapidly, with OpenAI, Google, Anthropic, DeepSeek, Microsoft Copilot, and Meta AI leading the way. While these models offer groundbreaking capabilities, they also come with costs, limitations, and sustainability concerns. Businesses, researchers, and policymakers must prioritize responsible AI development while maintaining accessibility and efficiency.
"The Futurist: The AI (R)evolution," a panel discussion held by the Washington Post, brought together industry leaders to explore the multifaceted impact of artificial intelligence (AI) on business, governance, and society. Martin Kon of Cohere explains that his role is securing AI for business with an emphasis on data privacy, which is essential for "critical infrastructure like banking, insurance, health care, government, energy, telco, etc." Because there's no equivalent of Google Search for enterprises, AI, Kon says, is an invaluable tool in searching for needles in haystacks, but it's complicated: "Every year, those haystacks get bigger, and every year, the needles get more valuable, but every enterprise's haystacks are different. They're data sources, and everyone cares about different needles."

He is, however, optimistic on the job front, maintaining that the new technology will create more jobs and greater value than many critics fear. "Doctors, nurses, radiologists spend three and a half hours a day on admin. If you can get that done in 20 minutes, that's three hours a day you've freed up of health care professionals. You're not going to fire a third of them. They're just going to have more time to treat patients, to train, to teach others, to sleep for the brain surgery tomorrow."

May Habib, CEO of Writer, which builds AI models, is similarly optimistic, describing AI as "democratizing." "All of these secret Einsteins in the company that didn't have access to the tools to build can now build things that can be completely trajectory-changing for the business, and that's the kind of vision that folks need to hear. And when folks hear that vision, they see a space and a part for themselves in it."

Sy Choudhury, director of business development for AI Partnerships at Meta, sees a vital role for AI on the public sector side.
"[I]t can be everything very mundane from logistics all the way to cybersecurity, all the way to your billing and making sure that you can talk to your state school when you're applying for federal student--or student loans, that kind of thing."

Rep. Jay Obernolte (R-CA), who led the House AI Task Force in 2024, acknowledges the need for "an institute to set standards for AI and to create testing and evaluation methodologies for AI" but emphasizes that "those standards should be non-compulsory..." And while agreeing that AI is "a very powerful tool," he says that it's still "just a tool," adding that "if you concentrate on outcomes, you don't have to worry as much about the tools..." But some of those outcomes, he admits, can be adverse. "[O]ne example that I use a lot is the potential malicious use of AI for cyber fraud and cyber theft," he says. "[I]n the pantheon of malicious uses of AI, that's one of the ones that we at the task force worried the most about because we say bad actors are going to bad, and they're going to bad more productively with AI than without AI because it's such a powerful tool for enhancing productivity."

Consumers can also do their part by managing AI usage wisely -- turning off unused applications, optimizing workflows, and advocating for sustainable AI practices. AI's future depends on balancing innovation with responsibility. The challenge is not just about creating smarter AI but also ensuring that its growth benefits society while minimizing its environmental impact.

Author Bio: Sharon Kumar is a technology editor at The Observatory, where he provides analysis and critical perspectives on the rapidly evolving tech landscape. As a seasoned MAANG tech professional with over a decade of experience in program management, strategic planning, and technology-driven business solutions, including AI and system performance optimization, Kumar has a deep understanding of emerging trends, digital infrastructure, and software development.
An in-depth look at the current state of AI, focusing on ethical considerations, sustainability challenges, and the competitive landscape of leading AI models like ChatGPT and Google's Gemini.
The rapid growth of Artificial Intelligence (AI) is transforming industries and daily life, with tech giants and startups alike pushing the boundaries of what large language models (LLMs) can achieve. As Bill Gates noted, "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone" [2]. This technological revolution, however, comes with significant ethical considerations and sustainability challenges that businesses must address.
Kiran Minnasandram, VP and CTO of Wipro FullStride Cloud, emphasizes the importance of ethical AI in today's business landscape. He defines ethical AI as not only complying with the law but also aligning with core values around individuals, society, and the environment [1]. Minnasandram outlines four pillars of ethical AI:
Many businesses struggle with implementing ethical AI due to a lack of common vocabulary and understanding across different organizational departments. Minnasandram suggests a three-step approach to address this challenge:
The adoption of AI has significant implications for corporate sustainability goals. On the positive side, AI can enhance operational efficiency, optimize resource management, and improve data collection for regulatory reporting. For instance, AI can help manufacturing or logistics companies optimize transportation routes, leading to reduced carbon emissions [1].
However, the rapid development and deployment of AI also present environmental challenges. Training large AI models demands significant computational power, resulting in increased energy consumption, carbon emissions, and substantial water usage for cooling data centers [1].
To strike a balance between AI innovation and environmental responsibility, enterprises should:
Wipro has been at the forefront of these efforts, working with clients to evaluate AI policies, engage stakeholders, and develop new principles around AI and the environment [1].
The AI market is dominated by powerful language models developed by tech giants and innovative startups. Some of the leading models include:
ChatGPT by OpenAI: Known for its versatility and conversational memory, ChatGPT offers a range of capabilities from casual conversation to technical problem-solving and coding assistance. It's available in free and paid tiers, with pricing based on token usage for API access [2].
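Token-based API pricing works differently from the per-seat model described earlier: each request is metered by how many tokens go in and come out, usually at separate rates. A sketch of that billing arithmetic; the per-1,000-token rates below are placeholders for illustration only, since actual OpenAI rates vary by model and change over time:

```python
def api_call_cost(input_tokens, output_tokens,
                  in_rate_per_1k=0.0005, out_rate_per_1k=0.0015):
    """Token-metered cost: input and output tokens are billed at
    separate per-1,000-token rates (the rates here are placeholders,
    not actual published pricing)."""
    return (input_tokens / 1000) * in_rate_per_1k \
         + (output_tokens / 1000) * out_rate_per_1k


# A month of 1M input and 200K output tokens at the placeholder rates:
monthly = api_call_cost(1_000_000, 200_000)  # about $0.80
```

The practical upshot is that API costs scale with usage rather than headcount, which is why prompt length and output verbosity matter for budgeting.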
Google's Gemini: Renowned for its multimodal capabilities and extensive context handling, Gemini is deeply integrated with Google's ecosystem, making it valuable for both personal and enterprise-level applications [2].
As AI continues to advance, global regulations are evolving to address ethical and sustainability concerns. Minnasandram notes that AI is subject to existing privacy, consumer protection, security, and human rights legislation. Data protection regulators play a crucial role in safeguarding individuals from potential AI-related harms [1].
In conclusion, as AI becomes increasingly integrated into business operations and daily life, companies must prioritize ethical considerations and sustainability. By doing so, they can harness the power of AI while mitigating risks and contributing to a more responsible technological future.