Curated by THEOUTPOST
On Tue, 10 Dec, 8:03 AM UTC
7 Sources
[1]
US law firms prioritise jobs and safety in AI rollout
US law firms' use of generative artificial intelligence tools for training lawyers, automating workflows and tackling complex tasks highlights the technology's growing significance two years on from the launch of OpenAI's groundbreaking ChatGPT. The release of the chatbot gave the public its first real demonstration of the power of generative AI tools and their ability to produce code, text or image responses to natural language prompts.

But, since then, the legal sector has faced a similar quandary to other industries: how to capitalise on the new technology without cannibalising existing jobs or compromising quality? "We see huge potential for generative AI to make us better lawyers, and we want our people to feel confident in using the technology," says Isabel Parker, chief innovation officer at White & Case. "We also have a duty to our clients to ensure that we are using generative AI safely and responsibly."

Firms are now closer to understanding how the technology could make legal work better, faster and cheaper. In mid-2023, Crowell & Moring began using generative AI for "legal adjacent" matters that did not involve confidential information. The firm encouraged its use for legal work "on a case-by-case basis where generative AI adds value, risks are mitigated, and the client has consented", says Alma Asay, chief innovation and value officer at Crowell. The firm has gradually used AI to help with more core tasks such as drafting letters and summarising testimonies with a client's consent. That has cut the time taken to summarise a client's intake notes to under 30 minutes, compared with two to four hours previously, says Asay.

Now, many firms -- having tested the technology in relatively low-stakes environments, and allowed their clients to grow more comfortable with generative AI -- are looking at how to make a bigger difference to workflows and find a competitive advantage. "Summarising a document is helpful, but it's not a game-changer . . . It's a cost avoidance play, allowing us to ask better questions of vendors or bypass them," says Thor Alden, associate director of innovation at Dechert, which is building its own AI tools on top of models from leading developers.

More important, he says, are the custom tools Dechert has built "to take data sets and infuse them into our workflows". These tools are able to trawl huge data sets for specific information and respond to queries in the style of an expert lawyer. The next target is to develop AI agents that are capable of performing a string of legal tasks -- in effect, acting as an additional team member. "AI allows you to look at any document in any context on any day," says Alden. The tool allows you to search "in a way you couldn't otherwise, and it may come up with a response you wouldn't have thought of".

Two of the biggest barriers to adoption, to date, are technological literacy and client caution -- particularly when it comes to giving generative AI tools access to sensitive data. A number of firms are emphasising the importance of staff mastering the technology, seeing it as a competitive edge in the sector. Crowell has rolled out mandatory AI training for its staff, and 45 per cent of the firm's lawyers have used the technology in a professional capacity. Similarly, Davis Wright Tremaine has developed an AI tool to train young lawyers how to write more effectively. But using generative AI for meatier legal issues brings additional complexity.
Even the best chatbots today are prone to errors and invention, known as hallucinations. Those are serious concerns for a sector in which data privacy and accuracy are paramount. "There are a lot of reasons why a client may say no to AI," says Alden. "Sometimes it's just caution about the risks; sometimes they just want you to 'ask permission' [before using AI]."

At Crowell, legal professionals must undergo training that addresses issues including hallucinations, the use of client data, and their own ethical responsibilities. The firm emphasises the limitations of AI tools as well as their potential, says Asay.

White & Case, meanwhile, has sought to protect client data by developing its own large language model in house. It is trained on an array of legal sources but privately licensed and deployed securely on the firm's private network, says Janet Sullivan, global director of practice technology. This approach gives lawyers "flexibility to explore the full potential of this technology", and gives the firm access to powerful frontier open source models, while still protecting its data, she says.

The full potential of AI in a legal setting remains some way from being realised, as firms and their clients warm up to a technology that is still too error-prone to be used in highly sensitive settings. But it is already cutting the time spent on onerous work such as trawling through data and summarising documents. And more efficiency gains are anticipated in the short term.

"I've always been a believer that technology helps lawyers get back to lawyering," says Asay. "We didn't need these tools decades ago when the amount of information was manageable. As the volume of information grows, technology helps us keep apace and ensures that humans are able to focus on their highest and best uses."
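The document-query tools Alden describes follow a broadly retrieval-augmented pattern: find the most relevant passages in a large collection, then have a model answer in light of them. Below is a minimal Python sketch of that idea, assuming the openai client library; the document chunks, model names and prompt are illustrative stand-ins, not Dechert's implementation.

```python
# Minimal retrieval-augmented query over a document set (illustrative sketch).
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts into vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1. Index: embed every document chunk once, up front.
documents = [
    "Clause 4.2: Either party may terminate on 30 days' written notice...",
    "Deposition of J. Smith, p. 112: the witness stated she never saw...",
    # ...thousands more chunks in a real matter
]
doc_vectors = embed(documents)

def answer(question: str, k: int = 3) -> str:
    """Retrieve the k most similar chunks, then answer from them only."""
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(documents[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are an experienced litigator. "
             "Answer only from the excerpts provided; say if they are insufficient."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What notice period applies to termination?"))
```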
[2]
AI adds bespoke features to ready-made tools
The ability of large language models (LLMs) to sort through troves of documents has been seized on by law firms. As the technology develops, a number will consider building these tools in-house. Some already have.

Lawyers have been using technology for data analysis in their legal practices for years. But the rise of generative artificial intelligence has now made it possible for firms to train large language models with litigation files and other records. Large language models are AI systems capable of parsing and generating human language by processing vast amounts of data -- ushering in powerful capabilities for the legal industry. As law firms become more savvy with this technology, which underpins chatbots such as ChatGPT, a few have begun building large language model tools in-house -- potentially casting doubt on the business model of traditional legal tech providers.

One such firm is patent litigation specialist Irell & Manella. Instead of turning to a third-party legal tech provider, it tasked its in-house developer, scientific fellow Thomas Barr, with building its own AI-powered platform to analyse patents and related documentation. His Irell Programmable Patent Platform software (IP3) initially used proprietary algorithms to access a large database before the firm started experimenting with AI. As IP3 was built in house, Barr says he can ensure the security and privacy of the patent data that powers the program. It can be fine-tuned to clients' needs more nimbly than with a third-party platform, he adds.

Legal tech providers are limited in the solutions they can offer law firms, Barr argues. "They don't actually know what the problem is, because they're not the ones going to clients and solving the problem," he says. Irell's lawyers know first-hand how to solve clients' problems, Barr notes, "so we should be in the business of building the tools that help us do that".

Irell began taking data from the US Patent and Trademark Office and applying its custom code to train the platform years before the launch of ChatGPT. But technological advances in large language models have allowed the firm to expand the platform to allow free-form questions from its lawyers during litigation. The firm's lawyers use IP3 to answer "incredibly complex questions and generate reports" at levels previously unfathomable, says Amy Proctor, a partner at Irell. "We've all been blown away by what is now possible by combining this kind of database with the latest AI capabilities."

Other law firms are using generative AI but tapping into the offerings of third-party legal tech providers to apply it to their data sets. McDermott Will & Emery, a leading US healthcare law firm, has taken data from more than 750 middle-market private equity healthcare deals its lawyers negotiated and run it through an AI model licensed to the firm by legal tech provider eBrevia. As eBrevia offers market analysis, which many firms use to inform negotiations, McDermott instructed the company to custom-train the AI model to provide analysis on the healthcare market. Then, the firm trained it on around 20 additional items based on its own healthcare private equity data, says Hunter Jackson, McDermott's chief knowledge officer. It combined the "out of the box functionality" from the vendor with the lawyers' own fine-tuning. Now, McDermott uses this information, which is available on its intranet, to provide a market analysis report to its lawyers at the beginning of each new deal.
The report is a useful reference for drafting letters of intent, as it draws on past deals to determine which terms the company generally accepts. The firm also provides reports to clients, to show how the provisions of a deal were decided. The study has sharpened the firm's competitive advantage, says Hunter Sharp, partner at McDermott. "It allows us to provide clients with advice using highly relevant data that many other firms can't match because they don't do as many health private equity deals as we do -- and they don't track these provisions in the way in which we track them."

McDermott plans to continue to rely on third-party legal tech providers, but Jackson envisions large language models coming into play further, once they are more accurate. That would disrupt many legal tech providers, and even cause some to fold, argues Gabe Teninbaum, assistant dean at Suffolk University Law School in Boston, a specialist in legal tech. Many large law firms have been building big data strategies for a decade, he says. Now, instead of hiring data scientists to analyse information, lawyers can upload a spreadsheet and use generative AI to pull insights from the data.

While many legal tech companies are using the latest innovations in AI, they may still be too generic if large language models are not trained on niche legal data, Teninbaum says. "The way that they can deal with that is they can either adapt themselves by . . . changing their underlying technology stack, or pivoting the way that they do business."

Kriti Sharma, legal tech chief product officer at information group Thomson Reuters, is unconcerned, however: "It does keep us on our toes in terms of always needing to be inventive." She sees the solutions law firms build on top of Thomson Reuters' tools as more exciting than threatening. The company is exploring how to enhance its offerings to law firms, Sharma says. Last year, it purchased Casetext, developer of legal tech software CoCounsel. She wants to expand Thomson Reuters' AI solutions into niche areas of the law, seeing law firms as partners in building on AI and data, rather than competitors. "I can't imagine a world where we could go back to . . . tech providers that do the tech, and then the law firms that do legal work, and we live in our separate worlds," she says.
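As an illustration of the kind of analysis Teninbaum describes, where insight is pulled directly from a spreadsheet of past deals, here is a minimal Python sketch that computes how often each tracked provision was accepted. The columns and figures are invented for the example and do not reflect McDermott's data.

```python
# Hypothetical market-analysis sketch: given a table of past deals with one
# column per negotiated provision, report each provision's acceptance rate
# to anchor new negotiations. Data is invented for illustration.
import pandas as pd

deals = pd.DataFrame({
    "deal_id": [1, 2, 3, 4, 5],
    "year": [2021, 2021, 2022, 2023, 2023],
    "indemnity_cap_accepted": [True, False, True, True, True],
    "earnout_included": [False, False, True, True, False],
    "rwi_policy_used": [True, True, True, False, True],
})

provisions = [c for c in deals.columns if c not in ("deal_id", "year")]

# Share of past deals in which each provision was accepted.
market_report = deals[provisions].mean().sort_values(ascending=False)
print("Share of deals accepting each provision:")
print(market_report.round(2))

# Trend over time for a single provision, to show how terms shift.
print(deals.groupby("year")["earnout_included"].mean())
```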
[3]
Law firms adapt to cover expanding legal risk
Cultural and technological developments in recent years have given rise to a new array of legal risks for businesses in the US. Now, in response, law firms are expanding their products and services to cover anything from cyber crime and generative AI to cultural disagreements in the workplace.

"Historically, law firms have been organised around what the lawyers do, such as litigation, corporate or regulatory work," says Gerry Stegmaier, a partner in the emerging technologies group at Reed Smith. But this is changing, as lawyers adapt to offer clients legal advice beyond the more traditional practice areas. As existing legal risks, such as regulatory compliance, grow more complex, and new risks emerge, such as those brought on by the advent of generative AI, many firms are also creating new practices to focus on these subjects, says Stegmaier. At the same time, firms must make use of new technologies if they want to remain competitive. He adds: "Lawyers who are familiar with and expert in AI and the other skills required for excellence will be much better positioned to adapt to change now and in the future."

Since OpenAI launched ChatGPT in November 2022, chatbots and other large language models have been widely introduced in the workplace. But, for all their advantages, the adoption of these tools also creates legal risks over data privacy, copyright infringement, regulatory compliance, and discrimination -- for instance, when used for hiring.

With generative AI, "you had a tool that you could ask to do anything, and it would provide different answers every time", points out Danny Tobey, chair of DLA Piper's AI and data analytics practice for the Americas. The question is: how do you test such a tool for compliance, accuracy and vulnerabilities? Tobey's team decided to use AI to test AI. They started "legal red teaming" the language models: cross-examining a generative AI model the same way one would a trial witness. Lawyers and data scientists would work together on a particular model, he says, interrogating it with "lines of attack" for specialised industries, such as healthcare, financial services, or consumer goods. Then, a separate AI system would be set up to interrogate the AI model. "A lawyer could ask a dozen or 100 questions, but then we want the generative AI to ask 1,000 or more questions," Tobey explains. "That way, you get the benefit of human ingenuity and creativity in really pushing on the model, but you also get the scale and repetition of generative AI."

Another growing area of focus for law firms is the legal risk associated with cultural issues and diversity, equity and inclusion (DEI) in the workplace. This is particularly noticeable since the pandemic and the increased emphasis on being open and authentic at work, says Sam Schwartz-Fenwick, a partner at Seyfarth Shaw and leader of the firm's LGBT affinity group: "People were coming to work with their full identity on, and that meant that there was a lot of disagreement."

At some companies, this led to HR complaints and, sometimes, litigation. The legal framework governing free speech in US workplaces is complex, he says. It includes Title VII, the federal law that protects against employment discrimination based on protected categories. Then, there is the National Labor Relations Act and a number of state laws regarding off-duty conduct and speech, he adds. "So all of those things are in play."
In 2023, Seyfarth found it had been handling enough such client cases to warrant a dedicated unit. Schwartz-Fenwick is co-lead of the Cultural Flashpoints task force, which allows lawyers from different specialised areas to pool their expertise to help clients. Historically, clients would come to Seyfarth after a problem had already emerged. But, increasingly, companies want to nip issues in the bud. He expects workplace disputes and regulations to become more complex due to a "fractured culture" in the US in which "roughly half of the population sees the world in very different ways than the other half".

Meanwhile, regulatory risk is only growing, says Sebastian Lach, a partner at Hogan Lovells and co-CEO of the firm's in-house legal technology brand Eltemate. "It's not only more regulations, it's also more enforcement and more aggressive enforcement."

Eltemate created a specialised AI tool to sort through regulatory documents, making the process both systematic and faster for its clients. "We've trained our own AI algorithm that basically gives all the documents a relevancy score based on what the client has told us they're interested in," Lach says. "If it's higher than 60 per cent, it's relevant. If it's lower than 40 per cent, it's not relevant." The tool can reduce a "document dump" of 10,000 documents to a database of 70, including automated summaries and translations in select languages.

Rather than clients pushing back against the use of generative AI tools by law firms, they see it as proof the lawyers are working efficiently, Lach says. "We're seeing a huge shift in our business model. Because, if you think about it, this is also a shift from hourly rates to technology cost, which is massive, because you have to rethink the whole model."
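Lach's thresholds amount to a simple triage rule, sketched minimally in Python below. The relevancy scores would come from the trained model upstream; treating the 40 to 60 per cent band as needing human review is our assumption, not a detail Eltemate has disclosed.

```python
# Threshold triage per Lach's description: above 60% a document is kept,
# below 40% it is dropped. The middle band going to human review is an
# assumption for illustration.
from dataclasses import dataclass

@dataclass
class Doc:
    name: str
    score: float  # relevancy score in [0, 1], produced upstream by the model

def triage(docs: list[Doc], hi: float = 0.60, lo: float = 0.40):
    relevant = [d for d in docs if d.score > hi]
    review = [d for d in docs if lo <= d.score <= hi]
    dropped = [d for d in docs if d.score < lo]
    return relevant, review, dropped

docs = [Doc("board_minutes_2019.pdf", 0.91),
        Doc("cafeteria_menu.docx", 0.05),
        Doc("email_thread_114.msg", 0.52)]
relevant, review, dropped = triage(docs)
print(f"kept {len(relevant)}, review {len(review)}, dropped {len(dropped)}")
```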
[4]
Young lawyers build tech skills to prepare for AI impact
As artificial intelligence extends its reach into corporate offices worldwide, young lawyers are being forced to prepare for the adoption of the new tools. Younger lawyers are generally more technologically savvy, says Brendan Gutierrez McDonnell, a partner at K&L Gates -- even if concerns about the technology's accuracy in professional applications mean they have, so far, been "tepid" on its use at work. However, when used correctly, AI has the potential to transform the work of junior lawyers.

And technology companies are betting big on AI becoming an integral part of the legal profession. The legal industry's estimated $26.7bn spend on technology this year is predicted to grow to more than $46bn by 2030, according to analysts at Research and Markets. And AI is becoming a significant part of law's technological advancement. Harvey, the legal generative AI platform founded in 2022, raised $100mn in July in an investment round led by Google parent Alphabet, Microsoft-backed OpenAI, and other investors. It valued the venture at $1.5bn. Meanwhile, legal services provider LexisNexis this summer upgraded its AI assistant, and Thomson Reuters has also been busy extending its AI-powered offering.

Yet, for young lawyers, how quickly and deeply this technology will become part of their jobs is still unclear. A survey published by the American Bar Association in June found law schools are increasingly including AI in their curricula. Half of those responding said they already offered classes on the subject, while 85 per cent said they were considering changes to their curricula to respond to the increasing prevalence of AI tools.

NYU's law school is one example of where teaching in the subject is already offered, says Andrew Williams, director of the lawyering programme at the college. "But, in our curriculum, our focus remains on teaching the underlying concepts -- structure, analysis, narrative, interpersonal dynamics -- that students need before using any available technology tools," he says. "An analogy might be an art curriculum that provides a grounding in shading, colour, and perspective, rather than jumping right into a particular digital art programme."

Francisco Morales Barrón, a partner at Vinson & Elkins in New York, also teaches a "generative AI in corporate law" class at the University of Pennsylvania. His firm has been giving required continuing education credits for learning about different AI products and testing them. Younger associates are allowed to use AI, but need to take mandatory training first.

Lawyers still need to "validate everything" that an AI tool summarises, Morales Barrón says. Though AI might complete two-thirds of an assignment correctly, "that is still a significant gap that is not acceptable". Still, AI can cut some of the drudge work out of the usual tasks for junior lawyers, he believes. "[In] five to 10 years, I do think the practice of law will be different from what it is now," he says. For junior lawyers, "the best thing you can do is learn the tools" while remaining conscious of mistakes the tools can make.

US courts, however, remain extremely cautious in their acceptance of the use of generative AI and related tools in dealing with case work. The variability of acceptance is illustrated by a public website developed by law firm Ropes & Gray, which tracks the rules judges set for using AI in their courts and can be used by lawyers across the US.
Aside from its benefits, the pitfalls and potential embarrassment of deploying AI can be significant. Earlier this year, a New York judge criticised a law firm's use of ChatGPT to justify an estimate for its case fees of up to $600 an hour, saying the firm's reliance on the AI service to suggest rates was "utterly and unusually unpersuasive". The judge ultimately cut the firm's fees by more than half. And, last year, a judge in Manhattan rebuked a lawyer for submitting a brief compiled with the help of an AI tool that cited fictitious cases.

Judges elsewhere in the country also seem wary of the technology. The Fifth US Circuit Court of Appeals considered, but ultimately did not adopt, a requirement that lawyers disclose whether AI was used to generate a filing. Still, the Fifth Circuit warned: "I used AI" will not be an excuse for an otherwise sanctionable offence.

Corporate clients, too, can be nervous about how law firms are using AI. Companies have warned lawyers over the risk of breach of confidentiality if information is fed into AI systems to "train" them -- a concern law firms are keen to assuage.

At Goodwin, the firm trains its lawyers on the ethical implications and dangers of technology use, as well as the practical benefits of the tools. "We train them with the specific aim of building judgment," says Caitlin Vaughn, managing director of learning and professional development at the firm.

With the concerns about generative AI and some of its dangers, such as "hallucinations" that fabricate information, "a lot of people hit the pause button", says Gutierrez McDonnell at K&L Gates. "When using AI, you have to know when it is wrong and when it is right," he argues. But, when correctly deployed, it transforms how work can be done. With AI, "you can do pro bono [work] way more effectively," he adds. For young lawyers using AI, Gutierrez McDonnell says: "We tell them . . . new things will come out of it. Jump all in."
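One concrete way to act on Morales Barrón's "validate everything" advice is to check every citation in an AI-assisted draft against a trusted list before filing. A minimal Python sketch follows; the verified set and the regular expression are illustrative stand-ins for a real citator lookup, not any firm's actual workflow.

```python
# Flag citations in a draft that cannot be matched to a trusted list,
# guarding against fabricated cases. Illustrative only: real practice
# would query a citator or case-law database, not a hard-coded set.
import re

verified_cases = {
    "Smith v. Jones, 545 F.3d 100 (2d Cir. 2008)",
}

# Rough pattern for "Party v. Party, <reporter> (<court year>)" citations.
NAME = r"[A-Z][\w.'&\-]*(?: [A-Z][\w.'&\-]*)*"
CITATION = re.compile(rf"{NAME} v\. {NAME}, [^()]*\([^)]*\)")

def unverified_citations(draft: str) -> list[str]:
    """Return citations in the draft that are absent from the trusted list."""
    return [c for c in CITATION.findall(draft) if c not in verified_cases]

draft = ("As held in Smith v. Jones, 545 F.3d 100 (2d Cir. 2008), and in "
         "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019), "
         "the motion should be denied.")
for c in unverified_citations(draft):
    print("NOT VERIFIED - check manually:", c)
```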
[5]
Lawyers navigate novel AI legal battles
Dechert's Brenda Sharton is no stranger to litigating issues at the edge of technological innovation. In the 1990s, while on maternity leave, she read about the internet attracting millions of users and soon became an expert on its intersection with privacy law. In the past couple of years, she has had a sense of déjà vu, after winning the dismissal of two of the first lawsuits brought against a generative AI company in the US, while getting up to speed on the nascent technology and explaining it to the courts.

Sharton, managing partner of Dechert's Boston office and chair of the firm's cyber, privacy and AI practice, points out that artificial intelligence "is not something new" and has been developed over more than a decade, mostly behind the scenes. But, since the arrival of the latest wave of generative AI, led by OpenAI's ChatGPT, Sharton and a handful of specialists are having to defend companies that now face sprawling copyright and privacy claims, which could hamper the emerging industry.

Sharton's most high-profile AI case was a proposed class action against her client Prisma Labs, the maker of popular photo editing tool Lensa. As she puts it, the plaintiff had in effect alleged that "anyone in Illinois who ever uploaded a photo to the internet" had been harmed by the software allegedly being trained on images scraped from the web without their explicit consent. But a federal judge ruled in August that the plaintiff had not shown "concrete and particularised" injury and could not prove their images were in the vast data set used. "Judges have said you're going to have to explain what was inaccurate," Sharton says, as well as "what was done that violated whatever existing law".

In other instances, the limits of what AI businesses term "fair use" of copyrighted material is still to be established. Andy Gass, partner at Latham & Watkins, is defending OpenAI in cases brought by publishers including the New York Times and DeviantArt over alleged copyright violations. He is also defending rival AI venture Anthropic in lawsuits brought by music publishers alleging wrongful copyright infringement.

Gass says the slew of cases currently being heard are "both fascinating and quite important" -- although he cautions against interpreting initial decisions as predictive of future AI legal battles. "The issues that we are seeing and dealing with now are, in some sense, foundational issues," he says. "But they are going to be very different than the ones that are presented three years from now, five years from now, or ten years from now."

Gass and his team, who had been working on generative AI questions well before ChatGPT was released to great fanfare in late 2022, have embedded lawyers with the technologists at some of the companies they represent. They delve into the details of how the models are being trained so they can analyse the copyright issues that may arise. "[AI litigation] involves a very novel technology, but very well established principles of law," Gass says. "The challenge, as an advocate, is explaining that to the judges."

Sharton says exploring the details with the courts is one of the most demanding aspects of being an AI lawyer. "You have to do a tremendous amount of educating of the judges," she says. "It's a big learning curve for them as well. And they . . . don't have the luxury of specialising [in particular subject matters] like lawyers do."
Warrington Parker, managing partner of Crowell & Moring's San Francisco office, is representing defendant ROSS Intelligence, an AI-powered legal tech company, in one of the first generative AI cases to allege copyright infringement -- filed by Thomson Reuters in May 2020. Parker argued in front of Delaware's Judge Stephanos Bibas this month, in a lawsuit that is not yet settled. He is not sure the judge is "convinced yet" of his arguments, including his contention that the AI training data used by ROSS has a public benefit and should be considered fair use. "But I think he is interested."

Aside from the judges, there is the matter of the general public. While none of the existing lawsuits has yet gone to a jury trial -- and some doubt that any will, given the complexity -- certain lawyers defending AI clients fear a negative public perception of AI could taint any panel's view. For a jury, "the idea that you took someone else's work . . . is going to be an issue", Parker says -- although he does not accept that characterisation.

The question of how the incoming Trump administration will regulate the technology will be particularly pertinent to firms with AI clients. If the new government decides to give companies freer rein, plaintiffs' lawyers are not "going to be able to piggyback on, say, [Federal Trade Commission] actions, which they typically do," Sharton says.

Moreover, the outcome of existing cases, even if some are lost, may not be enough to restrict the sector's growth. "If it's a matter of damages only, I think some actors will pay those damages and continue," Parker says. "In other words, it is the cost of doing the business."

For now, there are more anecdotal signs of the judiciary paying attention to generative AI's capabilities. During a case management conference earlier this year, 90-year-old Judge Alvin Hellerstein demonstrated his personal interest in the topic. The legendary judge "took out his iPad and played a song that had been generated by [an AI] tool that was sort of about his career on the bench", Gass says. Even less adventurous judges will end up with a stronger grasp of the technology, Gass predicts. To extend the analogy to the early internet age, he says, "we are still in the dial-up modem phase of the trajectory of these tools".
[6]
In-house lawyer role broadens in response to technological change
In July this year, Bridgewater Associates launched a macro fund -- one that aims to profit from macroeconomic shifts -- that primarily uses machine learning to make its investment decisions. Precise details of the fund's operation, investors and initial performance remain undisclosed. But the launch signals a commitment by one of the world's largest hedge fund operators to move beyond its existing investment systems and embrace more fully the potential of artificial intelligence.

The fund is the result of work by Bridgewater's Artificial Investment Associate (AIA) Labs, an expanding team of some 20 investors, engineers and technologists, founded in 2023, which aims to harness the power of AI and machine learning in the investment process. The Bridgewater legal team worked closely with AIA Labs on the legal and regulatory aspects of the design and launch of the fund -- studying data science to ensure compliance with regulations and to draft new risk disclosures for prospective clients.

For the hedge fund's lawyers, it is another illustration of how their roles have broadened beyond giving pure legal advice. "The evolution of the business is creating the need for the evolution of the people within the business to be able to not just react, but anticipate what is coming next," says Tracey Yurko, chief legal officer at Bridgewater.

This is just one example of how new technologies and business models are broadening the scope of lawyers' roles at the in-house legal teams featured in the 2024 Innovative Lawyers North America report. Even traditional legal transactions, such as preparing for stock market flotations, can create opportunities for lawyers to lead and contribute in new ways.

When Reddit, the online social media and community forum, was preparing to raise money and list its shares on the New York Stock Exchange in March, the lawyers' role was unusual, argues the business's vice-president of legal, Milana McCullagh. "Because Reddit is such a distinct platform, we had this great opportunity to bring its unique, community-based nature into the [initial public offering] process," she says. Her legal team helped create a "directed share programme" that gave some of Reddit's unpaid users and moderators, who had built the platform, a chance to profit by buying shares in its flotation.

Innovation across businesses and within their internal legal teams is frequently linked. This year's winning team at enterprise technology business Salesforce is one such example. Sabastian Niles, the tech company's chief legal officer, argues: "We are at this really powerful and exciting inflection point for the technology industry as a whole." He identifies his team's three priorities as: helping to accelerate growth for the broader business; strengthening trust; and encouraging operational excellence.

The legal team of nearly 500 staff handled more than 200,000 legal requests last year. It has recently developed a range of technologies and procedures, based on Salesforce's own evolving enterprise software platforms, changing how it handles those requests. Analysis shows that, using these new systems, around half of all queries can be resolved by the business colleagues themselves, without needing a lawyer. The team is now beginning to roll out "agentic" AI tools that are increasingly designed not just to answer queries but also to independently carry out other tasks.

Collecting and connecting contract and business data was also the first step for the legal team at the global manufacturing company Flex.
It is a "bit tedious" but a "crucially important step for effective AI solutions", says Justin Schwartz, deputy general counsel. These AI tools are now assisting the team in negotiating contracts with customers and suppliers, managing risks, and integrating new staff members faster. The changes they enable are delivering measurable financial benefits, he says. Schwartz aims to change the perception of the legal team from one of narrow legal experts to "a department full of great business people and commercial leaders who happen to have law degrees". As in-house lawyers continue to broaden their responsibilities, Yurko believes company legal roles are becoming more distinct from law firm specialists. "The thing that we have been building towards for a while is having people who are comfortable doing a broad set of things -- lawyers who are like all-round athletes."
[7]
Law firms lean into the business of prediction
When research was under way for the very first FT Innovative Lawyers report two decades ago, few law firms could be described as game changers. Even when the North America report started in 2010 and clients were being disrupted, few firms could imagine being a disrupter. And even fewer thought it desirable. Until now.

A striking feature of the research for the 2024 FT North America report is the number of law firm leaders who say they are embracing existential challenges. "We are leaning into disruption," says Ira Coleman, global chair of McDermott Will & Emery. For instance, he explains, the firm is making significant investments in its push to explore how generative AI is affecting how the law is practised. It has invested $20mn so far in The LegalTech Fund, a venture capital firm that invests in legal technology start-ups, says Coleman -- making it the largest commitment from a single law firm.

The firm's innovation team is also at the forefront of this new era. Its recent presentation to the firm's management committee, featuring an avatar, both alarmed and thrilled. "We asked her some tricky questions and her answers freaked out a lot of people. The game is changing," Coleman says.

McDermott Will & Emery has a long record of doing things differently. It was one of the first US law firms to organise itself to align with industry sectors rather than around legal practice areas. The result was its current dominant position in healthcare. With revenues just shy of $2bn in 2023, according to The American Lawyer, McDermott is one of the fastest growing law firms in the US. Its compound annual revenue growth rate over the three years to 2023 is 12 per cent. "Practice-area focus makes you very inward looking," explains Coleman. "The industry-sector [approach] makes us more external looking and part of our clients' businesses."

Other law firms in the FT index of innovative North America law firms for 2024 share Coleman's appetite for disruption. Frank Ryan, global co-chair and co-chief executive at DLA Piper, says: "We come at AI saying: 'Why don't we be at the epicentre of this technology?' For more conventional law firms, this approach could be destabilising." The firm has also significantly invested in AI, data experts and technologists. One such hire was Danny Tobey, a medical doctor turned lawyer who was already writing white papers on the future of AI when he was recruited by the firm. "There is a lot of brain power in the world outside legal," observes Ryan. "If you function in the traditional ways that law firms do business, it is easy to miss them."

Tobey heads the firm's data analytics practice, where he is spearheading a trend that will see lawyers move away from advising clients based on precedents towards making more predictions about what may happen. It is part of DLA Piper's experiment with what it calls "proactive compliance as a service". The idea, here, is to use clients' data, human lawyers and AI to spot trends and patterns that can alert the client to a potential breach of their own policies or of industry regulations.

Predictive analytics is already used in the legal sector. Indeed, some corporate legal departments -- at global brewer Anheuser-Busch InBev, for example -- have been using data to spot likely incidents of corruption and fraud for nearly a decade.
If this is starting to sound a bit like the film Minority Report, in which "precogs" who can see into the future help forecast crimes before they happen, Tobey acknowledges the echo but differentiates the two ideas, too. "We are not looking to predict things that have not happened yet and punish people. But we're looking to educate and fill the gaps in compliance," he says. The aim is to stop bad behaviour "when it's embers [and not yet] a forest fire".

The DLA Piper service is one of the first to offer year-round monitoring rather than the single "big event" advice model favoured by leading law firms. It also allows the firm to study its clients' unstructured data in real time -- and thereby adds new and different meaning to knowing its clients, a top strategic aim. It also requires a different fee model and delivery approach, but Tobey predicts this will become the norm. Law firms may become automated subscription product businesses that also offer human advice at a premium. The number of data-based initiatives now being offered by other law firms that have a dominant market position suggests he is right.

Latham & Watkins, which was one of the first US law firms to be truly disruptive over the past couple of decades, has created a database of deal terms to institutionalise the firm's knowledge and to ensure clients are better prepared when entering into negotiations. The approach is different from DLA Piper's, but the aim is the same: it is trying to work with clients on what will happen in the future, rather than just offering knowledge about precedents. Similarly, McDermott Will & Emery used its healthcare expertise to create a database of details from earlier, heavily negotiated deals to help clients in the sector when finalising purchase agreements with counterparties.

What these firms agree on is that the move to more advice that involves predictions will require close co-operation between the lawyers and software developers. Tobey puts the creation of DLA Piper's AI practice, which has built up eight-figure revenues in less than two years, down to this integration: "Our law is tech informed and our tech is law informed," he says.
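In the spirit of the "proactive compliance" monitoring described above, here is a minimal Python sketch that continuously scores incoming communications against client policy rules and surfaces potential breaches for a human lawyer to review. The rules and scoring are invented for illustration; DLA Piper's service is not public code.

```python
# Hedged sketch of continuous compliance monitoring: flag messages that may
# implicate a client policy so a lawyer can review them early, catching
# "embers" before they become a forest fire. Rules are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    policy: str
    pattern: re.Pattern

RULES = [
    Rule("gifts-and-hospitality", re.compile(r"\b(gift|tickets|hospitality)\b", re.I)),
    Rule("competitor-contact", re.compile(r"\b(pricing|bid)\b.*\bcompetitor\b", re.I)),
]

def scan(message: str) -> list[str]:
    """Return the policies a message may implicate."""
    return [r.policy for r in RULES if r.pattern.search(message)]

inbox = [
    "Can we send the procurement officer tickets to the game?",
    "Attached: Q3 shipping schedule.",
]
for msg in inbox:
    hits = scan(msg)
    if hits:
        print(f"Review suggested ({', '.join(hits)}): {msg!r}")
```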
US law firms are increasingly adopting AI technologies to enhance efficiency and competitiveness, while navigating complex ethical and practical challenges. This trend is reshaping legal practices and education.
US law firms are increasingly embracing artificial intelligence (AI) technologies, recognizing their potential to revolutionize legal practices. Two years after the launch of OpenAI's ChatGPT, the legal sector is grappling with how to leverage AI without compromising jobs or quality [1].
Many firms are now closer to understanding how AI can make legal work better, faster, and cheaper. Crowell & Moring, for instance, has been using generative AI for "legal adjacent" matters and gradually expanding its use to core tasks like drafting letters and summarizing testimonies [1].
Some law firms are developing custom AI tools to gain a competitive edge. Dechert is building its own AI tools on top of models from leading developers, capable of trawling huge data sets for specific information and responding to queries in the style of an expert lawyer [1].
Irell & Manella has taken this a step further by developing its own AI-powered platform, IP3, to analyze patents and related documentation. This in-house approach allows for greater security, privacy, and customization to client needs [2].
Law schools are increasingly incorporating AI into their curricula. A survey by the American Bar Association found that 50% of responding schools already offer AI classes, while 85% are considering curriculum changes to address the growing prevalence of AI tools [4].
Law firms are also prioritizing AI training for their staff. Crowell & Moring has implemented mandatory AI training, with 45% of its lawyers already using the technology professionally [1].
The advent of generative AI has created new legal risks for businesses, including data privacy, copyright infringement, regulatory compliance, and potential discrimination in hiring processes. In response, law firms are expanding their services to cover these emerging areas [3].
DLA Piper, for example, has developed a "legal red teaming" approach, using AI to test AI models for compliance, accuracy, and vulnerabilities [3].
Despite the potential benefits, the adoption of AI in legal practice faces significant challenges. Concerns about data privacy, accuracy, and the potential for AI "hallucinations" (fabricated information) have led to caution among both lawyers and clients [1][4].
Courts remain wary of AI use in legal proceedings. Instances of AI-generated briefs citing fictitious cases have led to judicial rebukes and warnings [4].
The integration of AI is expected to transform the work of junior lawyers. While AI can reduce time spent on mundane tasks, there's an emphasis on developing skills to effectively use and validate AI-generated content [4].
As AI technology continues to evolve, its impact on the legal profession is expected to grow. Lawyers specializing in AI-related cases predict that while current legal battles focus on foundational issues like copyright and fair use, future challenges will likely be quite different [5].
The legal industry's spending on technology is projected to grow from an estimated $26.7 billion this year to over $46 billion by 2030, with AI becoming a significant part of this technological advancement [4].
As the legal sector navigates this AI revolution, balancing innovation with ethical considerations and client trust remains a key challenge. The coming years will likely see further integration of AI in legal practices, reshaping the industry and the skills required for future lawyers.
Reference
[1] US law firms prioritise jobs and safety in AI rollout
[2] AI adds bespoke features to ready-made tools
[3] Law firms adapt to cover expanding legal risk
[4] Young lawyers build tech skills to prepare for AI impact
[5] Lawyers navigate novel AI legal battles