The 2024 Olympic Games kick off tomorrow in Paris, where over 10,000 athletes will compete in 329 events. Also in attendance: a whole lot of AI.
From broadcasting to security, AI is being infused throughout the Paris Games. That's not exactly surprising at a moment when AI dominates the conversation, but it is striking how thoroughly AI's presence at the Olympics offers a snapshot of how quickly the technology is being implemented across so many different areas: live events, broadcasting, advertising, athlete performance, law enforcement, and state surveillance.
Overall, the International Olympic Committee (IOC) said it has identified over 180 potential use cases for AI in the Olympics, some of which will be on display this summer. AI tools will assist athletes with analyzing sporting performance and will also be deployed to "enhance the fairness and accuracy of judging and refereeing through the provision of precise metrics," according to the IOC. Beyond the competitions themselves, some of the ways AI is being used at the Olympics seem like nothing more than marketing gimmicks, while others show how the technology could change how we interact with events and content. Then there's the more controversial use case of surveillance, which has privacy and civil liberty advocates sounding the alarm.
Last week, it was announced that Google has been named "the official search AI partner of Team USA," marking the first time the sports organization has partnered with a tech company. Commentators with NBCUniversal -- which continues to hold exclusive American media rights to the games -- will use Google's new AI search overviews to answer questions about the sports on-air. NBC has also recruited comedian Leslie Jones to serve as "Chief Superfan Commentator," a role that seemingly mixes entertaining fans with showing off how Google's Gemini assistant works.
The U.S. Olympic & Paralympic Committee put out a statement hyping up the Google collaboration as a "powerful alliance that brings together the best of technology and sports" that will "inspire millions." Even without the grandiose language, it seems like a huge stretch. Does anyone really want to watch sports commentators Google live on the air? I would bet no, but clearly, this is serving more as an ad for Google's AI than a good use of the technology. There's a reason Google struck this first-of-its-kind deal, likely paying a pretty penny. (Financial terms of the deal were not disclosed.)
As we know, these models are still prone to making up and delivering false information. There's a high chance the answers Gemini delivers will be less accurate than if the broadcast team researched the questions themselves. The same goes for the Intel-powered chatbot the IOC is rolling out for participating athletes to ask frequently asked questions about guidelines and procedures.
Prior to this Google deal, however, NBCUniversal announced a brand-new way for fans to watch Olympic content that could actually enhance the experience and simply wouldn't be possible without generative AI. The new feature, called Your Daily Olympic Recap, will provide users with personalized recaps of their favorite events, complete with commentary delivered by an AI version of legendary sportscaster Al Michaels's voice. Users can set up their preferences and then receive a personalized recap every day of the Olympics through the Peacock app, NBC's streaming platform, which will also be airing the games.
To create the recaps, the language model behind the feature will analyze subtitles and metadata from NBC's Olympic coverage to create summaries of the events. It will then rewrite those summaries in Michaels's style before feeding them into a text-to-voice model. With so many different events happening simultaneously, the Olympics is not exactly easy to keep up with. Your Daily Olympic Recap actually solves a problem by compiling the parts you're most interested in and wrapping them up in a neat package. Using Michaels's voice is the perfect bow on top, providing that legendary commentary sports fans have grown to love and expect. Time will tell if this feature works as planned, but NBC gets credit for using AI in an interesting way that actually solves a problem, rather than AI for AI's sake.
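For the technically curious, the pipeline NBC describes -- summarize the coverage, rewrite in a broadcaster's voice, then synthesize audio -- can be sketched in a few lines. This is a minimal illustration only: the function names and stub logic below are assumptions for clarity, not NBC's actual implementation, and in a real system each stub would be a call to a language model or voice model.

```python
# A hypothetical sketch of the three-stage recap pipeline.
# Each stand-in function would be an AI model call in production.

def summarize(captions: list[str]) -> str:
    """Stand-in for an LLM that condenses event subtitles/metadata."""
    return " ".join(captions)

def restyle(summary: str, persona: str) -> str:
    """Stand-in for an LLM that rewrites text in a broadcaster's style."""
    return f"[{persona}] {summary}"

def synthesize_speech(script: str) -> bytes:
    """Stand-in for a text-to-speech model cloned from the broadcaster's voice."""
    return script.encode("utf-8")

def daily_recap(captions: list[str], persona: str = "Al Michaels") -> bytes:
    """Chain the three stages: summarize -> restyle -> synthesize audio."""
    summary = summarize(captions)
    script = restyle(summary, persona)
    return synthesize_speech(script)
```

The design point worth noting is the separation of stages: keeping summarization, stylistic rewriting, and speech synthesis as distinct steps lets each model be swapped or tuned independently.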
Event organizers and the French government are also leaning on AI to monitor potential threats. Specifically, AI-powered cameras will be used to track crowd movement and detect any activity considered unusual, from suspicious luggage to traffic violations. Authorities will also be collecting geolocation data and deploying camera-equipped drones. Earlier this year, the French government changed its laws to allow this use of the technology for the Olympics, passing Article 7 (a controversial provision that legalized AI-powered video surveillance before, during, and after the games) and Article 10 (which specifically permits the use of AI software to review video and camera feeds). While the technology technically isn't based on facial recognition, many have argued that Article 7 in particular violates GDPR, the EU's data privacy law, which restricts the processing of biometric data and gives citizens the right to have their personal data erased. It's also a fine line to walk in light of the newly enacted EU AI Act, which explicitly bans the use of live facial recognition systems by law enforcement in most circumstances.
While acknowledging that such a large-scale event requires increased security, data privacy researchers and civil liberty advocates have raised concerns about giving the private technology companies involved access to thousands of cameras around France. The vagueness about how exactly the technology will be used also raises questions about what activity counts as unusual or suspicious and is thus subject to increased surveillance, as well as how the data will be stored -- and potentially used in the future. And while the French government changed the law specifically for the Olympics, the provision allows for use of the technology far beyond the end of the games.
Anne Toomey McKenna, a law professor and attorney focused on the intersection of privacy, AI, and surveillance, argued in The Conversation that the French government and private tech sector are harnessing the legitimate need for increased security as grounds to deploy advanced surveillance and data gathering tools.
"Flagging these events seems like a logical and sensible use of technology. But the real privacy and legal questions flow from how these systems function and are being used," she wrote. "How much and what types of data have to be collected and analyzed to flag these events? What are the systems' training data, error rates and evidence of bias or inaccuracy? What is done with the data after it is collected, and who has access to it? There's little in the way of transparency to answer these questions."
Mistral unveils its next-generation flagship model. That's according to VentureBeat. The day after Meta announced Llama 3.1 to challenge the closed frontier models from companies like OpenAI and Google, French AI startup Mistral followed suit with its own partially open new release. Called Mistral Large 2, the company says it's significantly more capable in code generation, mathematics, and reasoning than its predecessor and also offers much stronger multilingual support. The model contains 123 billion parameters, and while that's far fewer than Llama 3.1's 405 billion, Mistral says the model can run on a single H100 node. As my Eye on AI coauthor Jeremy Kahn wrote on Tuesday, Llama 3.1's gargantuan size comes with some hefty challenges around the number of GPUs required to run the model and the expertise needed to manage such a distributed workload. When it comes to Mistral Large 2's openness, however, it only stretches as far as noncommercial uses. For commercial applications, companies must obtain a separate license and usage agreement from Mistral.
Microsoft launches generative AI search results for Bing. That's according to TechCrunch. Much like Google's generative AI-powered search, Bing will now share summaries in response to users' search queries, created by AI from sources across the web. Bing says it will cite and link to the pages the information stems from. The new search experience is currently only available to a small percentage of users. In its initial days, Google's generative AI search infamously suggested that users eat rocks and put glue on pizza, so we'll see how this goes.
Deepfakes of Kamala Harris are going viral. Media Matters said fake audio of Harris started spreading like wildfire following President Joe Biden's announcement that he won't seek reelection and the subsequent wave of support for Harris as the Democratic nominee. The audio falsely depicts Harris sounding incoherent and slurring and was manipulated from a speech she gave at Howard University last year. Media Matters found that right-wing accounts are pushing the audio and that users have wrongly attributed it to Harris in order to attack her. TikTok has since removed the audio, but Mashable reports it's spread on Elon Musk's X as well. While the audio appears to be manipulated rather than generated by AI, the rise of widely available AI tools has made false depictions of political figures a top-of-mind issue. In May, a Senate committee passed three bills aimed at protecting elections from AI-generated deepfakes, but they have not advanced to become law.
OpenAI may be on track to lose $5 billion this year and may need additional funding within 12 months. That's according to an analysis performed by The Information based on OpenAI financial figures it said it obtained and unnamed sources it said were familiar with the company's finances and spending. This would be a burn rate so high that the company, which received a $10 billion cash injection from Microsoft in 2023, would have to seek additional funding soon -- probably within the next year -- despite reportedly also being on course to make more than $2 billion in revenue this year. The publication's analysis, however, is based on a number of extrapolations, estimates, and assumptions about how much OpenAI is spending to rent data center capacity from Microsoft Azure, as well as how quickly the cost of training OpenAI's latest frontier models is ramping up. Those assumptions might not be correct. But if they are even in the ballpark, they point to a business that is nowhere close to becoming profitable. The publication pointed out that OpenAI may be in better shape than its rival Anthropic, which may be losing about half as much money but on a much lower revenue base.
A day after raising $500 million, AI startup Cohere told staff it was laying off about 20 employees -- by Sharon Goldman
GM-owned Cruise has lost interest in cars without steering wheels. Its competitors haven't -- by Jessica Mathews
Exclusive: Lakera snags $20 million to prevent business Gen AI apps from going haywire and revealing sensitive data -- by Sharon Goldman
Tesla shares plunged on weak earnings. But a top analyst says AI investments will lead to a $1 trillion-plus valuation -- by Sheryl Estrada
AI could become the 'new steel' as overcapacity risk goes unnoticed -- by Susan Ariel Aaronson (Commentary)
July 21-27: International Conference on Machine Learning (ICML), Vienna, Austria
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
That's how much faster Smoothie King's chief legal officer has been able to review contracts using AI, Ironclad told Eye on AI. Using Ironclad's software, the company has tagged and indexed over 4,000 contracts, generated net-new contract language, and was able to more easily cover for a colleague out on parental leave. Overall, Ironclad said 75% of its customers are now using its AI features to negotiate contracts, tag and index contracts, and surface insights.
The legal industry has been quick to pursue ways to benefit from new generative AI technologies, and tackling the tedious and repetitive nature of contracts has been of particular interest. Ironclad is just one of dozens of companies offering legal copilots to automate contracts and legal work for both law firms and corporate legal departments. Earlier this week, OpenAI-backed legal startup Harvey raised an additional $100 million at a $1.5 billion valuation.