Curated by THEOUTPOST
On Sat, 13 Jul, 12:01 AM UTC
2 Sources
[1]
What We Know About the New U.K. Government's Approach to AI
When the U.K. hosted the world's first AI Safety Summit last November, Rishi Sunak, the then Prime Minister, said the achievements at the event would "tip the balance in favor of humanity." At the two-day event, held in the cradle of modern computing, Bletchley Park, AI labs committed to sharing their models with governments before public release, and 29 countries pledged to collaborate on mitigating risks from artificial intelligence.

It was part of the Sunak-led Conservative government's effort to position the U.K. as a leader in artificial intelligence governance, which also involved establishing the world's first AI Safety Institute -- a government body tasked with evaluating models for potentially dangerous capabilities. While the U.S. and other allied nations subsequently set up similar institutes of their own, the U.K. institute boasts 10 times the funding of its American counterpart.

Eight months later, on July 5, after a landslide loss to the Labour Party, Sunak left office and the newly elected Prime Minister Keir Starmer began forming his government. Starmer's approach to AI has been described as potentially tougher than Sunak's. He appointed Peter Kyle as science and technology minister, giving the lawmaker oversight of the U.K.'s AI policy at a crucial moment, as governments around the world grapple with how to foster innovation while regulating the rapidly developing technology.

Following the election result, Kyle told the BBC that "unlocking the benefits of artificial intelligence is personal," saying the advanced medical scans now being developed could have helped detect his late mother's lung cancer before it became fatal.

Alongside the potential benefits of AI, the Labour government will need to weigh concerns from the public. An August poll of over 4,000 members of the British public, conducted by the Centre for Data Ethics and Innovation, found that 45% of respondents believed AI taking people's jobs was one of the biggest risks posed by the technology; 34% saw the loss of human creativity and problem solving as one of the greatest risks.

Here's what we know so far about Labour's approach to artificial intelligence.

One of the key issues for the Labour government to tackle will likely be how to regulate AI companies and AI-generated content. Under the previous Conservative-led administration, the Department for Science, Innovation and Technology (DSIT) held off on implementing rules, saying in a 2024 policy paper on AI regulation that "introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI." Labour has signaled a different approach, promising in its manifesto to introduce "binding regulation on the handful of companies developing the most powerful AI models," suggesting a greater willingness to intervene in the rapidly evolving technology's development.

Read More: U.S., U.K. Announce Partnership to Safety Test AI Models

Labour has also pledged to ban sexually explicit deepfakes. Unlike proposed legislation in the U.S., which would allow victims to sue those who create non-consensual deepfakes, Labour has considered a proposal by Labour Together, a think tank with close ties to the current Labour Party, to impose restrictions on developers by outlawing so-called nudification tools.
While AI developers have agreed to share information with the AI Safety Institute on a voluntary basis, Kyle said in a February interview with the BBC that Labour would make that information-sharing agreement a "statutory code."

Read More: To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

"We would compel by law those test data results to be released to the government," Kyle said in the interview.

Timing regulation is a careful balancing act, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute. "The art form is to be right on time with law. That means not too early, not too late," she says. "The last thing that you want is a hastily thrown together policy that stifles innovation and does not protect human rights." Wachter says that striking the right balance on regulation will require the government to be in "constant conversation" with stakeholders, such as those within the tech industry, to ensure it has an inside view of what is happening at the cutting edge of AI development when formulating policy.

Kirsty Innes, director of technology policy at Labour Together, points to the U.K. Online Safety Act, which was signed into law last October, as a cautionary tale of regulation failing to keep pace with technology. The law, which aims to protect children from harmful content online, took six years to go from initial proposal to final signing. "During [those six years] people's experiences online transformed radically. It doesn't make sense for that to be your main way of responding to changes in society brought by technology," she says. "You've got to be much quicker about it now."

Read More: The 3 Most Important AI Policy Milestones of 2023

There may be lessons for the U.K. to learn from the E.U. AI Act, Europe's comprehensive regulatory framework passed in March, which will come into force on August 1 and become fully applicable to AI developers in 2026. Still, Innes says that mimicking the E.U. is not Labour's endgame. The European law outlines a tiered risk classification for AI use cases, banning systems deemed to pose unacceptable risks, such as social scoring systems, while placing obligations on providers of high-risk applications, like those used for critical infrastructure. Systems said to pose limited or minimal risk face fewer requirements. Additionally, it sets out rules for "general-purpose AI" -- systems with a wide range of uses, like those underpinning chatbots such as OpenAI's ChatGPT. General-purpose systems trained on large amounts of computing power -- such as GPT-4 -- are said to pose "systemic risk," and their developers will be required to perform risk assessments as well as track and report serious incidents.

"I think there is an opportunity for the U.K. to tread a nuanced middle ground somewhere between a very hands-off U.S. approach and a very regulatory-heavy E.U. approach," says Innes.

Read More: There's an AI Lobbying Frenzy in Washington. Big Tech Is Dominating

In a bid to occupy that middle ground, Labour has pledged to create what it calls the Regulatory Innovation Office, a new government body that will aim to accelerate regulatory decisions. In addition to helping the government respond more quickly to the fast-moving technology, Labour says the "pro-innovation" regulatory body will speed up approvals to help new technologies get licensed faster.
The party said in its manifesto that it would use AI in healthcare to "transform the speed and accuracy of diagnostic services, saving potentially thousands of lives."

Healthcare is just one area where Kyle hopes to use AI. On July 8, he announced a revamp of DSIT, which will bring on AI experts to explore ways to improve public services. Meanwhile, former Labour Prime Minister Tony Blair has encouraged the new government to embrace AI to improve the country's welfare system. A July 9 report by his think tank, the Tony Blair Institute for Global Change, concluded that AI could save the U.K. Department for Work and Pensions more than $1 billion annually.

Blair has emphasized AI's importance. "Leave aside the geopolitics, and war, and America and China, and all the rest of it. This revolution is going to change everything about our society, our economy, the way we live, the way we interact with each other," Blair said, speaking on the Dwarkesh Podcast in June.

Read More: How a New U.N. Advisory Group Wants to Transform AI Governance

Modernizing public services is part of Labour's wider strategy to leverage AI to grow the U.K. tech sector. Other measures include making it easier to set up data centers in the U.K., creating a national data library to bring existing research programs together, and offering decade-long research and development funding cycles to support universities and start-ups. Speaking to business and tech leaders in London last March, Kyle said he wanted to support "the next 10 DeepMinds to start up and scale up here within the U.K."

Artificial intelligence-powered tools can be used to monitor worker performance, such as grading call-center employees on how closely they stick to the script. Labour has committed to ensuring that new surveillance technologies won't find their way into the workplace without consultation with workers. The party has also promised to "protect good jobs" but, beyond committing to engage with workers, has offered few details on how.

Read More: As Employers Embrace AI, Workers Fret -- and Seek Input

"That might sound broad brush, but actually a big failure of the last government's approach was that the voice of the workforce was excluded from discussions," says Nicola Smith, head of rights at the Trades Union Congress, a federation of trade unions.

While Starmer's new government has a number of urgent matters to prioritize, from setting out its legislative plan for year one to dealing with overcrowded prisons, the way it handles AI could have far-reaching implications. "I'm constantly saying to my own party, the Labour Party, that you've got to focus on this technology revolution. It's not an afterthought," Blair said on the Dwarkesh Podcast in June. "It's the single biggest thing that's happening in the world today."
[2]
Humans will be the arbiters of value - UiPath collaborates with academia on AI innovation
While tech companies have been eager to see what generative AI can do, we're in the early days of adoption - and according to Dr Ed Challis, Head of AI Strategy at UiPath, there is still a significant journey ahead. When he spoke to my colleague Chris Middleton last year about interactive AI, Challis was realistic about the limitations of gen AI. Since then, the needle has shifted, but are we any closer to determining the best ways to productize this technology and identifying optimal use cases?

During a recent conversation at UiPath on Tour in London, Challis shared his insights into the rapid developments in AI and what they mean for enterprise customers. One aspect of keen interest for Challis is the inherent value derived from the technology, networks, and proprietary data unique to each company. This is due to "the ability to create products and services at UiPath that we can then give to our clients, allowing them to leverage their internal data assets." All very well, but then that's part of UiPath's raison d'etre.

Challis returned to the theme of collaboration, which was strongly in evidence throughout the London event, and explained why this collaborative approach between research and customer interaction is pivotal, saying:

The AI products we make have these principles of being open, flexible, and responsible. But flexibility to me really means, can the model learn and adapt to the client's environment? And do we make that process quick and easy?

UiPath is continuing to examine the gap between innovation and execution, integrating generative AI into this collaborative process - moving from a mixture of logic-based software and humans to a combination of AI, logic-based systems, and humans working collaboratively. Challis emphasized that the key is ensuring that AI solutions can learn the unique contexts of different organizations. He pitched this by drawing an analogy to human adaptability - a new hire may have an amazing CV, but if they can't learn and adapt to new ways of working, they fundamentally won't provide a lot of value. However, the orchestration of these elements is critical, determining which tasks are best suited for AI and which require human oversight, he argued:

Humans will be the arbiters of value - setting designs and thinking about trade-offs between values such as quality, price, and the level of personalization. Decisions have to be made in designing, creating, and delivering something. I think we're moving to a world where creating something will become much cheaper. But then the question is, what do we want to create?

Reflecting on the nascent stage of AI deployment, Challis acknowledged the early phases of integration and the complexities enterprises face, given that it can take time for people to even choose to adopt AI, before navigating InfoSec implications, copyright issues, and legal questions, and then moving to limited trials and evaluations in low-risk departments. He said:

Now is the time where we're starting to see the beginning of high-value use cases, but it's still super early in the 'Crossing the Chasm' narrative. I think we're in the upper quartile of the very smallest, the beginning set of people doing the big implementations.

Trust remains a cornerstone in the relationship between AI solutions and their users. Challis drew parallels between trusting AI and trusting new employees, with closer review when new activities or higher-risk work are undertaken.
UiPath aims to solve trust issues with AI by implementing "human in the loop" control points, allowing humans to review, correct, and approve AI-generated outcomes, as Challis explained:

You can get an AI model to propose the answer, and then give it to a human to review. If a human has reviewed and approved it 99 times out of 100, but changed something minor once, you can start to build trust in the AI's decisions.

This approach mirrors practices in financial services, such as "4-eyes" tasks, where one person chooses what to buy and another validates those choices. Challis referred to the self-healing robot function, which he says can save "eternities for both developers and for people who are less familiar with code", similar to the way that software developers do pair programming.

Looking forward, Challis highlighted the potential of AI to revolutionize business automation through the development of large action models, also referred to as agentic systems:

We've seen these large models increase the number of modalities of data they process. The next big evolution is models that can take actions. They have the potential to solve a lot of problems in the world of computation and pose interesting questions as well.

While these future visions suggest the sky's the limit, UiPath is keeping itself grounded by partnering with academia. I spoke with Professor David Barber, esteemed academic and Director of the AI Centre at University College London (UCL), about how this partnership came about. The Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI), is investing £80 million in nine AI research hubs. The AI hub in Generative Models, based at the UCL AI Centre, is led by Barber and brings together researchers from Cambridge, Oxford, Edinburgh, Cardiff, Manchester, Imperial, Surrey and UCL.

The partnership between UCL and UiPath was initiated after UiPath acquired Re:infer, a company co-founded by Barber and his PhD students, in 2022 - Barber joined UiPath part-time, motivated by its vision and the opportunity to bring more research and academic capabilities to its work. UiPath is an official partner of this new AI hub, and Barber is excited about the opportunity to expand these research efforts to learn how AI capabilities can be used to improve the world of work.

A notable point in Barber's discussion was the potential of generative AI beyond content creation, by enabling a more dynamic interaction between humans and machines. He stressed that while the pace of change in AI has grown rapidly, this evolution brings its own challenges, and understanding this interplay is crucial for different sectors to be able to see benefits, particularly in fields where the margin for error is slim, such as government and healthcare. Building trust in an environment where constraints are needed goes beyond a system that can help you choose the best flight for a business trip - this is a challenge that can play to UiPath's strengths, as Barber explained:

In those cases where you need constraints, you need the well-defined process going forwards. Previously, a lot of the value that a company would think about is in the execution of things - such as creating a design or a message. But now in principle a lot of that could be done by an AI system. So where is the value? The value is no longer in the static - the physical drawing of the diagram - it's about the important content that diagram should have.
There needs to be somebody who understands what the message should be - which can then be executed by an AI system. Similarly, that's what's happening across businesses - it's no longer your technical competence that matters, but your knowledge of a process and what you need to do to get the job done. What are the individual steps of that process? Can it then be carried out by a system like UiPath, and done reliably? The backbone of a well-designed process is very, very important - in a way that people can trust.

Learning is constant. It can be intimidating to sit down with distinguished experts and need them to put it in layman's terms (resisting the temptation to say "Explain it to me like I'm five years old"), especially at a time when media articles can be - and are (though never here!) - generated by AI at a rate of knots. Challis and Barber, along with UiPath CEO Daniel Dines in my previous interview, made a case that alleviated some of those worries for me.

C-suite executives are under pressure to implement an AI strategy - but AI itself hasn't removed the work of learning. As Barber explained to me, we're still showing computers that they don't know everything. You can show a new form or a new piece of software to a machine, but it won't be able to process it without iterative guidance. Then, once the machine learns the process, it can put that intelligence in the bank and continue to improve as it carries out the work at greater speed.

Returning to Dines' keynote message, the symbiotic relationship between humans and automation is going to keep developing over the next few months/years/decades - and the term 'agentic AI' will undoubtedly become the new phrase on everyone's lips. But one key point that came through in these conversations was the need to do a better job of communicating about the marriage of AI and automation - with evidence-based use cases that can influence training and skills development roadmaps. We're all still learning as we go along.
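Challis's review-and-approve description earlier in the piece suggests a simple gating pattern: route every AI proposal to a human at first, track the approval rate, and only relax oversight once that rate has stayed high over enough reviews. Below is a minimal Python sketch of that idea; the class name, thresholds, and reviewer callback are illustrative assumptions for this article, not UiPath's actual product or API.

```python
import random

class HumanInTheLoopGate:
    """Route AI proposals to a human reviewer and track the approval rate.

    Hypothetical sketch of the review pattern Challis describes; names,
    thresholds, and the reviewer callback are assumptions, not UiPath's API.
    """

    def __init__(self, min_reviews=100, trust_threshold=0.99):
        self.min_reviews = min_reviews          # reviews required before trust is considered
        self.trust_threshold = trust_threshold  # e.g. approved 99 times out of 100
        self.reviews = 0
        self.approvals = 0

    def approval_rate(self):
        return self.approvals / self.reviews if self.reviews else 0.0

    def is_trusted(self):
        # Relax oversight only after enough reviews at a consistently high approval rate.
        return self.reviews >= self.min_reviews and self.approval_rate() >= self.trust_threshold

    def handle(self, proposal, human_review):
        """Return the final output: auto-approve once trusted, otherwise ask a human."""
        if self.is_trusted():
            return proposal  # in practice, periodic spot checks would still sample outputs
        approved, final = human_review(proposal)
        self.reviews += 1
        if approved:
            self.approvals += 1
        return final

# Example usage: a stubbed reviewer that approves ~99% of proposals unchanged.
def demo_reviewer(proposal):
    if random.random() < 0.99:
        return True, proposal
    return False, proposal + " (corrected by reviewer)"

gate = HumanInTheLoopGate()
for i in range(150):
    gate.handle(f"AI-drafted reply #{i}", demo_reviewer)
print(f"reviews={gate.reviews} approval_rate={gate.approval_rate():.2f} trusted={gate.is_trusted()}")
```

The 0.99 threshold mirrors Challis's "99 times out of 100" example; a production system would presumably also segment trust by task type and risk level, echoing the "4-eyes" practice he mentions for higher-risk work.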
UK Labour Party unveils AI regulation plans while UiPath partners with academia for AI innovation. The stories highlight the balance between AI advancement and responsible development.
The UK Labour Party, led by Keir Starmer, has set out its approach to artificial intelligence (AI) regulation after winning the July general election, emphasizing the need to balance innovation with safety. Labour's manifesto promises "binding regulation on the handful of companies developing the most powerful AI models," and science and technology minister Peter Kyle has said the government would put AI labs' voluntary information-sharing agreements with the AI Safety Institute on a statutory footing 1.
The proposed measures aim to ensure AI safety while fostering innovation in the rapidly evolving field: Labour has pledged to create a Regulatory Innovation Office, a "pro-innovation" body intended to accelerate regulatory decisions, alongside a ban on sexually explicit deepfakes.
In parallel developments, UiPath, a leading automation software company, is collaborating with academia to drive AI innovation 2. The company is an official partner of the AI Hub in Generative Models at the UCL AI Centre, one of nine AI research hubs backed by £80 million of EPSRC funding. This partnership aims to bridge the gap between theoretical research and practical applications in the AI and automation sectors.
The hub, led by Professor David Barber, brings together researchers from Cambridge, Oxford, Edinburgh, Cardiff, Manchester, Imperial, Surrey and UCL, with a focus on generative models. UiPath's involvement underscores the growing trend of industry-academia partnerships in advancing AI technologies.
Both stories highlight the ongoing debate surrounding AI development and its impact on society. While the UK Labour Party emphasizes the need for regulatory oversight, UiPath's academic collaboration demonstrates the industry's commitment to pushing the boundaries of AI capabilities.
Labour's proposed binding regulation would likely scrutinize developments similar to those emerging from UiPath's academic partnerships, ensuring that innovations align with safety and ethical standards. This approach reflects a growing global consensus on the need for responsible AI development.
The juxtaposition of these two developments illustrates the complex landscape of AI advancement. On one hand, political entities are recognizing the need for regulatory frameworks to guide AI development. On the other, private sector companies are actively collaborating with academic institutions to accelerate innovation.
This dual approach suggests a future where AI development is characterized by a delicate balance between rapid technological progress and careful oversight. The success of this model will depend on effective communication and cooperation between government bodies, private enterprises, and academic institutions.
Labour's AI regulation plans and UiPath's academic partnerships both reflect a broader global trend of nations and companies vying for leadership in the AI space. The UK's approach, as outlined by Starmer, aims to position the country as a leader in responsible AI development, potentially influencing international standards.
Similarly, UiPath's collaboration with academia demonstrates the company's commitment to staying at the forefront of AI innovation. This strategy not only enhances UiPath's competitive edge but also contributes to the broader ecosystem of AI research and development.