Curated by THEOUTPOST
On Sat, 20 Jul, 4:01 PM UTC
2 Sources
[1]
The AI boom has an unlikely early winner: Wonky consultants
After ChatGPT came out in 2022, the marketing team at Reckitt Benckiser, which makes Lysol and Mucinex, was convinced that the new artificial intelligence technology could help its business. But the team was uncertain how, so it turned to Boston Consulting Group for help.

Reckitt's request was one of hundreds that Boston Consulting Group received last year. The firm now earns a fifth of its revenue -- up from zero just two years ago -- through work related to AI. "There's a genuine thirst to figure out what are the implications for their businesses," said Vladimir Lukic, Boston Consulting Group's managing director for technology.

The next big boom in tech is a long-awaited gift for wonky consultants. From Boston Consulting Group and McKinsey & Co. to IBM and Accenture, sales are growing and hiring is on the rise because companies are in desperate need of technology sherpas who can help them figure out what generative AI means and how it can help their businesses. While the tech industry is still casting about for ways to make money off generative AI, the consultants have begun cashing in.

IBM, which has 160,000 consultants, has secured more than $1 billion in sales commitments related to generative AI for consulting work and its watsonx system, which can be used to build and maintain AI models. Accenture, which provides consulting and technology services, booked $300 million in such sales last year. About 40% of McKinsey's business this year will be generative AI related. And KPMG International, which has a global advisory division, went from making no money from generative-AI-related work a year ago to targeting more than $650 million in U.S. business opportunities tied to the technology over the past six months.

The demand for tech-related advice recalls the industry's dot-com boom, when businesses stampeded consultants with requests for counsel in the 1990s. From 1992 to 2000, sales at Sapient, a digital consulting firm, grew from $950,000 to $503 million.
Subsequent technology shifts, like the migrations to mobile and cloud computing, were less hurried, said Nigel Vaz, CEO of the firm, which is now known as Publicis Sapient.

"In the mid-'90s, CEOs would say, 'I don't know what a website is or what it could do for my business, but I need it,'" Vaz said. "This is similar. Companies are saying, 'Don't tell me what to build. Tell me what you can build.'"

Consulting firms have been scrambling to show what they can do. In May, Boston Consulting Group hosted a one-day conference at a Boston convention center, where it set up demonstration booths for OpenAI, Anthropic and other AI tech leaders. It also demonstrated some of its own AI work in robotics and programming.

Generative AI sales are helping the industry find growth after a postpandemic lull. The management consulting industry in the United States is expected to collect $392.2 billion in sales this year, up 2% from a year ago, according to IBISWorld, a research firm.

The work that consultants have been enlisted to do varies from business to business. Some consultancies are advising companies on regulatory compliance as regions like the European Union pass laws regulating artificial intelligence. Others are drawing up plans for AI customer support systems or developing guardrails to prevent AI systems from making errors.

For businesses, the results have been mixed. Generative AI is prone to giving people incorrect, irrelevant or nonsensical information, known as hallucinations, and it is difficult to ensure that it provides accurate answers. It can also be slower to respond than a person, which can leave customers unsure whether their questions will be answered.

IBM, which has a $20 billion consulting business, ran into some of those issues in its work with McDonald's. The companies developed an AI-powered voice system to take drive-through orders.
But after customers reported that the system made mistakes, like adding nine iced teas to an order instead of the one Diet Coke requested, McDonald's ended the project. McDonald's said it remained committed to a future of digital ordering and would evaluate alternative systems. IBM said it was working with McDonald's on other projects and was in discussions with other restaurant chains about using its voice-activated AI.

Other IBM programs have shown more promise. The company worked with Dun & Bradstreet, a business data provider, to develop a generative AI system that analyzes and provides advice on selecting suppliers. The tool, called Ask Procurement, will allow employees to conduct detailed searches with specific parameters. For example, it could find memory chip suppliers that are minority owned and automatically create a request for proposals for them.

Gary Kotovets, chief data and analytics officer at Dun & Bradstreet, said his team of 30 people needed IBM's help to build the system. To reassure customers that the answers Ask Procurement provides are accurate, he insisted that customers be able to trace every answer to an original source.

"Hallucinations are a real concern and in some cases a perceived concern," Kotovets said. "You have to overcome both and convince the client it's not hallucinating."

Over seven weeks this year, McKinsey's AI group, QuantumBlack, built a customer service chatbot for ING Bank, with guardrails to prevent it from offering mortgage or investment advice. Because the viability of the chatbot was uncertain and McKinsey had limited experience with the relatively new technology, the firm did the work as a "joint experiment" under its contract with ING, said Bahadir Yilmaz, chief analytics officer at ING.

The bank paid McKinsey for the work, but Yilmaz said that many consultants were willing to do speculative generative AI work without pay because they wanted to demonstrate what they could do with the new technology.
The project has been labor intensive. When ING's chatbot gave incorrect information during its development, McKinsey and ING had to identify the cause, tracing problems back to issues like outdated websites, said Rodney Zemmel, a senior partner at McKinsey working on technology.

The chatbot now handles 200 of the bank's 5,000 daily customer inquiries. ING has people review every conversation to make sure the system doesn't hallucinate or use discriminatory or harmful language.

"The difference between ChatGPT and our chatbot is our chatbot cannot be wrong," Yilmaz said. "We have to be safe with the system we're building, but we're close."

Over a four-month period this year, Reckitt worked with Boston Consulting Group to develop an AI platform that can create local advertisements in different languages and formats. With the push of a button, the system can turn a commercial about Finish dishwashing detergent from English into Spanish.

The system, which is being tested, can make developing local ads 30% faster, saving the company time and sparing it some tedious work, said Becky Verano, vice president of global creativity and capabilities at Reckitt.

Because the technology is so new, Verano said, the team is learning and adjusting its work as tech companies release updates to their image and language models. She credited Boston Consulting Group with bringing structure to that chaos.

"You're constantly having to move to the latest trends, to the newest findings, and learning each time how the tools respond," she said. "There's not an exact science to it."

This article originally appeared in The New York Times.
[2]
The push to develop generative artificial intelligence without all the lawsuits
Companies such as Google and OpenAI built their artificial intelligence chatbots and image generators by gobbling up content from the web, spurring legal fights over copyright claims. Now, some of those copyright holders are trying to get in on the AI boom.

Major stock photo suppliers such as Getty Images and Shutterstock are building AI image generators with their own data, bypassing the legal worries that have shadowed the industry. While the largest tech companies have been locked in a dizzying AI race, visual media marketplaces, content creators and artists are pushing for licensing so that they can be paid for work that helps train AI models and influences a technology they worry could one day displace them.

It's part of a larger effort to transform how AI models are developed: training them with licensed data rather than with content scraped without permission.

Although many image generators are often used by consumers for amusement, such as creating the viral image of the pope in a white puffer jacket, the tech industry has coalesced around the idea that more advertising agencies and other companies would use these tools for marketing if there were no legal uncertainty surrounding them.

That's the target market for Getty. Its partner Picsart, which is building an AI image model with stock photos from Getty's repository, is trying to appeal to small and medium-size businesses. Picsart is best known for a photo-editing app used by more than 100 million people, most of them Generation Zers.

Picsart wanted to use licensed data to build the model because, for both the company and prospective customers, lawsuits are "a drag to the business, it's a distraction," said Craig Foster, its chief financial officer. "I don't want any part of that."
After ChatGPT, the chatbot from OpenAI, and Stable Diffusion, a popular image generator from the British startup Stability AI, wowed consumers in 2022, Google, Meta and other companies rushed to release similar AI capabilities. It didn't take long for lawsuits to follow, as publishers, authors and artists said they found signs that their works had been scraped to train the AI models.

The New York Times sued OpenAI and its partner Microsoft in December for using copyrighted news articles without permission to train AI chatbots. OpenAI and Microsoft said they used the articles under "fair use."

There has also been legal wrangling over models that convert text into images. Cartoonists and a photographer sued Google in April, saying the company had trained Imagen, its image generator, with their copyrighted works. Google has said that "American law has long supported using public information in new and beneficial ways."

"With each different version of technology that comes out, copyright law is put to the test," said Alan Fisch, an intellectual property lawyer at Fisch Sigler. Without clear legal rules in place, licensing data is "one way to reduce risk," he added.

In early 2023, Getty Images, the world's largest privately owned photo archive, noticed that its famous watermark was being re-created in some AI-generated images from Stable Diffusion. It sued the tool's maker, Stability AI, in February 2023, saying the company had copied more than 12 million images from Getty's collection. Stability said it did not infringe on Getty's intellectual property rights.

Getty worked with the chipmaker Nvidia to build its own image generator, calling it "a worry-free model built for business." Through Getty's website or another interface, customers can type in a prompt for the image they want and specify its quality and style. They can then select the shape and color of the image, and the generator will present multiple options.
Getty, along with 20 other stock image companies, is also providing images for Bria AI, an Israeli startup, to build an AI model. Bria will split revenue from its generator with Getty and its other partners. Yair Adato, Bria's CEO, said that dividing revenue with all of the partners and helping to attribute work back to artists was essential to preserving the role of content creators. Without "value for creation, everything will be very average and very boring," he said.

Getty has said it will pay photographers when it uses their images to train a model and will give them a portion of the subscription revenue it receives from clients. The company told Wired it paid out about 30 cents for every dollar it made.

The rising quality of models that generate images and videos has many artists concerned for the fate of their industries. And it is not always clear whether AI companies have used artists' content to train the underlying models. The Times has reported that Sora, OpenAI's video generator, was trained partly on YouTube videos, but the company has not been transparent about its data sources.

That lack of transparency concerns filmmakers such as Joe Talbot, who directed the award-winning film "The Last Black Man in San Francisco." He said artists needed to be consulted about technology being built on the backs of their work. "I worry," he said, "about my fellow film brethren being able to eke out a decent existence."

Shutterstock, which has a massive library of images and video clips, started a contributor fund in 2022 to compensate artists when their work is licensed for AI. The amount a contributor is paid depends on how much an AI provider pays Shutterstock: royalties are a proportion of each deal, and the amount rises if the client uses more of a contributor's images. The company declined to specify exact percentages, the average value of a photo or typical payouts.
Outside of AI, regular photos on Shutterstock often sell for $14.50 each, with photographers receiving 15% to 40% of the total, depending on how many images they license in a year.

Shutterstock has taken a different route to AI than its rival Getty, selling images to major AI providers such as OpenAI since 2021 and collecting $104 million in licensing revenue last year. But it says licensing habits are changing. "We are well aware that the days of needing huge volumes of data to train models are over," said Aimee Egan, chief enterprise officer of Shutterstock. Later this year, the company will roll out two AI models: one for images, with the software maker Databricks, and another for 3D images, with Nvidia.

Companies such as Shutterstock and Adobe are now paying photographers to take pictures for AI training, but the earning potential can be modest and inconsistent. Adobe has offered photographers less than $100 to shoot as many as 1,000 photos for AI, Bloomberg News reported. And the rush toward polished generated images could erode long-term job prospects in photography.

That has left room for other companies trying to help artists get paid when their work is used for AI. The startup OpenLicense has built a marketplace where AI businesses can find data and artists can be compensated and track which models are using their work. Payments scale with how often a photo is used: if a photo is referenced 1 million times to generate images, the artist can expect as much as $12,000 in royalties, said Joshua Soto, co-founder and president of OpenLicense. The company has started working with artists on Imageshack, an image-hosting site it has partnered with. Soto said the company was "trying to bridge the benefits" of AI between developers and artists.

That relationship was recently put under more strain when Adobe, the software giant behind Photoshop, updated its terms of service with vague language in June.
Some customers believed the language to mean that Adobe would scrape their work to keep building its generative AI system, Firefly. The company denied the claim several times. But the episode highlighted artists' pervasive fears over how AI could disrupt their livelihoods -- a worry that has led some to oppose the technology.

Soto, a onetime graphic designer, said that engaging could make the best of a challenging situation. "Your content is going to get used either way," he said. "You might as well be in a position where you are part of that process and explicitly saying which content you want used."
As AI technology advances, the demand for AI consultants grows, while companies face legal hurdles in developing generative AI. This story explores the emerging field of AI consulting and the efforts to create AI systems without infringing on copyrights.
As artificial intelligence continues to reshape industries, a new breed of professionals is emerging: AI consultants. These experts are in high demand as companies scramble to integrate AI technologies into their operations. According to recent reports, AI consultants are commanding impressive salaries, with some earning up to $300,000 per year [1].
The surge in demand for AI expertise has led to a shortage of qualified professionals, driving up salaries and creating opportunities for those with the right skills. Companies are seeking consultants who can help them navigate the complex landscape of AI implementation, from selecting the right tools to developing AI strategies that align with business goals.
While the AI consulting field flourishes, companies developing generative AI technologies are facing significant legal hurdles. The push to create AI systems that can generate text, images, and other content has raised concerns about copyright infringement and intellectual property rights [2].
Many AI companies are now focusing on developing generative AI models that do not rely on copyrighted material for training. This approach aims to avoid potential lawsuits and legal complications that could arise from using protected content without permission.
To address these legal concerns, AI developers are exploring various strategies:
Using public domain works: Some companies are turning to works that are no longer protected by copyright to train their AI models.
Creating original training data: Developers are investing in generating their own datasets to train AI systems, ensuring they have full rights to the material used.
Partnering with content creators: Collaborations between AI companies and artists, writers, and other creators are emerging, allowing for the use of licensed content in AI training.
As the legal landscape surrounding AI development becomes more complex, AI consultants are increasingly being called upon to help companies navigate these challenges. Their expertise is not only valuable in implementing AI technologies but also in ensuring that companies develop and use AI systems in compliance with evolving legal and ethical standards.
The intersection of AI consulting and legal considerations in AI development highlights the rapidly evolving nature of the field. As AI technologies continue to advance, the demand for professionals who can bridge the gap between technical implementation and legal compliance is likely to grow.
Companies that successfully navigate these challenges, with the help of skilled AI consultants, will be better positioned to harness the power of AI while minimizing legal risks. This evolving landscape presents both opportunities and challenges for businesses, developers, and legal professionals alike, shaping the future of AI innovation and its integration into various industries.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved