7 Sources
[1]
Revealed: How the UK tech secretary uses ChatGPT for policy advice
New Scientist has used freedom of information laws to obtain the ChatGPT records of Peter Kyle, the UK's technology secretary, in what is believed to be a world-first use of such legislation.

The UK's technology secretary, Peter Kyle, has asked ChatGPT for advice on why the adoption of artificial intelligence is so slow in the UK business community - and which podcasts he should appear on.

This week, Prime Minister Keir Starmer said that the UK government should be making far more use of AI in an effort to increase efficiency. "No person's substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard," he said.

Now, New Scientist has obtained records of Kyle's ChatGPT use under the Freedom of Information (FOI) Act, in what is believed to be a world-first test of whether chatbot interactions are subject to such laws. These records show that Kyle asked ChatGPT to explain why the UK's small and medium business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of problems hindering adoption, including sections on "Limited Awareness and Understanding", "Regulatory and Ethical Concerns" and "Lack of Government or Institutional Support".

The chatbot advised Kyle: "While the UK government has launched initiatives to encourage AI adoption, many SMBs are unaware of these programs or find them difficult to navigate. Limited access to funding or incentives to de-risk AI investment can also deter adoption." It also said, concerning regulatory and ethical concerns: "Compliance with data protection laws, such as GDPR [a data privacy law], can be a significant hurdle. SMBs may worry about legal and ethical issues associated with using AI."

"As the Cabinet Minister responsible for AI, the Secretary of State does make use of this technology. This does not substitute comprehensive advice he routinely receives from officials," says a spokesperson for the Department for Science, Innovation and Technology (DSIT), which Kyle leads. "The Government is using AI as a labour-saving tool - supported by clear guidance on how to quickly and safely make use of the technology."

Kyle also used the chatbot to canvass ideas for media appearances, asking: "I'm Secretary of State for science, innovation and technology in the United Kingdom. What would be the best podcasts for me to appear on to reach a wide audience that's appropriate for my ministerial responsibilities?" ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists, based on their number of listeners.

As well as seeking this advice, Kyle asked ChatGPT to define various terms relevant to his department: antimatter, quantum and digital inclusion. Two experts New Scientist spoke to said they were surprised by the quality of ChatGPT's definitions of quantum. "This is surprisingly good, in my opinion," says Peter Knight at Imperial College London. "I think it's not bad at all," says Cristian Bonato at Heriot-Watt University in Edinburgh, UK.

New Scientist made the request for Kyle's data following his recent interview with PoliticsHome, in which the politician was described as "often" using ChatGPT. He said that he used it "to try and understand the broader context where an innovation came from, the people who developed it, the organisations behind them" and that "ChatGPT is fantastically good, and where there are things that you really struggle to understand in depth, ChatGPT can be a very good tutor for it".

DSIT initially refused New Scientist's FOI request, stating: "Peter Kyle's ChatGPT history includes prompts and responses made in both a personal capacity, and in an official capacity". A refined request, for only the prompts and responses made in an official capacity, was granted.
The fact the data was provided at all is a shock, says Tim Turner, a data protection expert based in Manchester, UK, who thinks it may be the first case of chatbot interactions being released under FOI. "I'm surprised that you got them," he says. "I would have thought they'd be keen to avoid a precedent."

This, in turn, poses questions for governments with similar FOI laws, such as the US. For example, is ChatGPT more like an email or WhatsApp conversation - both of which have historically been covered by FOI - or the results of a search engine query, which organisations have traditionally found easier to reject? Experts disagree on the answer.

"In principle, provided they could be extracted from the department's systems, a minister's Google search history would also be covered," says Jon Baines at UK law firm Mishcon de Reya.

"Personally, I wouldn't see ChatGPT as being the same as a Google search," says John Slater, an FOI expert. That is because Google searches don't create new information, he says. "ChatGPT, on the other hand, does 'create' something based on the input from the user."

With this uncertainty, politicians might want to avoid using privately developed commercial AI tools like ChatGPT, says Turner. "It's a real can of worms," he says. "To cover their own backs, politicians should definitely use public tools, provided by their own departments, as if the public might end up being the audience."
[2]
Should governments really be using AI to remake the state?
What is artificial intelligence? It is a question that scientists have been wrestling with since the dawn of computing in the 1950s, when Alan Turing asked: "Can machines think?" Now that large language models (LLMs) like ChatGPT have been unleashed on the world, finding an answer has never been more pressing.

While their use has already become widespread, the social norms around these new AI tools are still rapidly evolving. Should students use them to write essays? Will they replace your therapist? And can they turbocharge government?

That last question is being asked in both the US and UK. Under the new Trump administration, Elon Musk's Department of Government Efficiency (DOGE) taskforce is eliminating federal workers and rolling out a chatbot, GSAi, to those that remain. Meanwhile, the UK prime minister, Keir Starmer, has called AI a "golden opportunity" that could help reshape the state. Certainly, there is government work that could benefit from automation, but are LLMs the right tool for the job? Part of the problem is that we still can't agree on what they actually are.

This was aptly demonstrated this week, when New Scientist used freedom of information (FOI) laws to obtain the ChatGPT interactions of Peter Kyle, the UK's secretary of state for science, innovation and technology. Politicians, data privacy experts and journalists - not least us - were stunned that this request was granted, given that similar requests for a minister's Google search history, say, would generally be rejected. That the records were released suggests that the UK government sees using ChatGPT as more akin to a ministerial conversation with civil servants via email or WhatsApp, both of which are subject to FOI laws.

Kyle's interactions with ChatGPT don't indicate any strong reliance on the AI for forming serious policy - one of his questions was about which podcasts he should appear on. Yet the fact that the FOI request was granted suggests that some in government seem to believe the AI can be conversed with like a human, which is concerning. As New Scientist has extensively reported, current LLMs aren't intelligent in any meaningful sense and are just as liable to spew convincing-sounding inaccuracies as they are to offer useful advice. What's more, their answers will also reflect the inherent biases of the information they have ingested.

Indeed, many AI scientists are increasingly of the view that LLMs aren't a route to the lofty goal of artificial general intelligence (AGI), capable of matching or exceeding anything a human can do - a machine that can think, as Turing would have put it. In a recent survey of AI researchers, about 76 per cent of respondents said it was "unlikely" or "very unlikely" that current approaches will succeed in achieving AGI.

Instead, perhaps we need to think of these AIs in a new way. Writing in the journal Science this week, a team of AI researchers says they "should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated". The researchers compare LLMs to "such past technologies as writing, print, markets, bureaucracies, and representative democracies" that have transformed the way we access and process information.

Framed in this way, the answers to many questions become clearer. Can governments use LLMs to increase efficiency? Almost certainly, but only when used by people who understand their strengths and limitations. Should interactions with chatbots be subject to freedom of information laws? Possibly, but existing carve-outs designed to give ministers a "safe space" for internal deliberation should apply. And can, as Turing asked, machines think? No. Not yet.
[3]
The UK government embracing AI? I'm sorry, that's nonsense and I can prove it | Chris Stokel-Walker
My freedom of information request revealed the inane use of ChatGPT by the tech secretary. Is this the future? I hope not.

Two tech-related things made me laugh this week. One was Donald Trump's childlike exuberance at seeing the dash panel of a Tesla on the White House lawn, and his wondrous exclamation that "everything is computer". The other was equally hilarious, also tied to politics. Keir Starmer stood up yesterday in Hull and said waste would be thrown by the wayside and the civil service would lose its bloat ... thanks to the transformative effects of AI.

What I knew, and no one else did until my story for New Scientist was published shortly afterwards, was that the prime minister was talking nonsense. I had seen the government's AI-powered revolution first hand. And it was all hooey.

In what is believed to be a world-first use of freedom of information (FoI) laws for this purpose, I had obtained the interactions Peter Kyle, the UK technology secretary, had with ChatGPT for his job. (The department Kyle heads had rejected an earlier FoI request because granting it ran the risk of unveiling his own personal conversations with ChatGPT.) If Kyle's chats are any indication of what the AI revolution promises in government, then we're in for a tough time.

My knowledge of the UK's podcast scene is what you would expect of a middle-aged white man who spends too much time on the internet - so even I could answer the question Kyle first posed to the chatbot: which podcasts he should appear on to reach a wide audience. What Kyle asked next - which of the four podcasts ChatGPT had recommended had the most listeners - might have seemed like a sensible use of the generative AI tool, but it was also something that a quick Google could have solved in much the same time, and with fewer concerns that it was making up numbers like an overconfident Oxbridge graduate.

AI "hallucinations" - the generation of false, incorrect or misleading results - remain a real and present fear with generative AI tools, and are alarmingly common. So too is the tendency to mix things up: recent BBC research showed AI tools such as ChatGPT got major things wrong when summarising its reporting.

As for the tech secretary's query about why AI adoption is so slow in British small- and medium-sized businesses (SMBs), that was equally unenlightening. It isn't a novel or original take to say that data protection laws, such as the general data protection regulation (GDPR), may stymie the development of a technology that is subject to multiple lawsuits about data breaches and copyright risks. Nor - you would hope - is it beyond the civil service's ken to understand that "many SMBs are unaware of these programs or find them difficult to navigate", as ChatGPT solemnly told Kyle in point seven of a plodding 10-point treatise. "Not reassuring that a minister with a department of experts is using AI for these sorts of questions," as one critic put it.

ChatGPT was stronger at answering Kyle's questions seeking simplified analogies for complicated concepts such as "quantum" or "antimatter" - though, as some social media commentators have pointed out, it's a little worrying that Kyle felt the need to ask an AI chatbot for a definition of "digital inclusion". Presumably he should know, as the person responsible for overseeing it?

I was generally surprised both by the comparative lack of interaction Kyle had had with ChatGPT, given he's spent the past several months telling anyone with an audio recorder about his hankering for the technology, and by the sorts of questions he was asking. In fact, my response on seeing the interaction between AI and the minister at the sharp end of the government's use and knowledge of it was: is this it? Is this the generative AI revolution we're banking our future and our economy on?
There are ways to harness AI in government for the public good. Churning through complicated data to unpick patterns that the human eye misses, for example; AI-powered drug discovery; deep research of the type that can add insights long forgotten or never previously thought about - these are all attainable wins. Quite how the government's talk of AI's promise and potential chimes with the reality of how it is used now is something the tech secretary will have to answer for himself. Perhaps he can ask ChatGPT for some help there, too.
[4]
Technology secretary Peter Kyle asks ChatGPT for science and media advice
Strong advocate of AI use in government asks chat tool which podcasts to appear on and to define 'quantum'.

Peter Kyle, the science and technology secretary, has asked ChatGPT for advice on a range of work-related issues, including why British businesses are not adopting artificial intelligence and what podcasts he should appear on.

Information provided to the New Scientist magazine in response to a freedom of information request showed that Kyle, an advocate for AI within the government, makes frequent use of OpenAI's chat tool in his professional life. The responses show Kyle asked for media and policy advice, and to define scientific terms relevant to his department, including "antimatter", "quantum" and "digital inclusion". Experts say the fact the information was provided could open the door to similar information having to be disclosed across Whitehall.

A spokesperson for the Department for Science, Innovation and Technology said: "As the cabinet minister responsible for AI, the secretary of state does make use of this technology. This does not substitute comprehensive advice he routinely receives from officials. The government is using AI as a labour-saving tool - supported by clear guidance on how to quickly and safely make use of the technology."

Keir Starmer gave a speech on Thursday promising widespread changes to the civil service, including greater use of AI. "If we push forward with digital reform of government - and we are going to do that - we can make massive savings, £45bn savings in efficiency," the prime minister said. "AI is a golden opportunity."

Kyle has championed AI initiatives within government, including spearheading controversial plans to exempt AI companies from copyright rules so they can access creative content for free. Some critics have accused him of being too close to the industry, pointing out that while Kyle was in opposition, a staff member from the AI company Faculty AI was seconded to his office.

In January, the technology secretary told PoliticsHome he would often use ChatGPT "to try and understand the broader context where an innovation came from, the people who developed it, the organisations behind them". He added: "ChatGPT is fantastically good, and where there are things that you really struggle to understand in depth, ChatGPT can be a very good tutor for it." He previously told the Times: "AI can tutor you. So for example, I can go into a chatbot and say 'What is quantum mechanics and what are its applications?', and it can come up with a description, it will tutor you."

When New Scientist asked for his ChatGPT prompts and answers, the department initially refused, saying the information would include conversations made in both a personal and an official capacity. When the magazine explained it wanted to see only those made in an official capacity, the department supplied the information.

According to the exchanges, Kyle asked why small- and medium-sized businesses had been so slow to take up AI. The chatbot replied: "While the UK government has launched initiatives to encourage AI adoption, many SMBs are unaware of these programs or find them difficult to navigate. Limited access to funding or incentives to de-risk AI investment can also deter adoption."

On another occasion, Kyle asked: "I'm secretary of state for science, innovation and technology in the United Kingdom. What would be the best podcasts for me to appear on to reach a wide audience that's appropriate for my ministerial responsibilities?" ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists.

Some have said using ChatGPT in this way poses policy risks. Beeban Kidron, the film director and member of the House of Lords who is leading the opposition to the government's AI copyright plans, said: "I am a bit worried that the science and innovation department is bedazzled with technical developments and not doing enough to protect UK democratic and economic interests."
[5]
Release of technology secretary's use of ChatGPT will have Whitehall sweating
When Tony Blair looked back on his time in power, he had a simple assessment of his decision to introduce the Freedom of Information Act: "You idiot." While the technology secretary, Peter Kyle, is a fan of the former prime minister, he may be inclined to agree with that verdict after the act was used to reveal that he had been asking ChatGPT which podcasts he should appear on.

The disclosure has already caused frustration among ministers, given its possible repercussions. Blair's gripe was that the act risked stopping the frank discussions needed among ministers and officials. Ever since, it has become notoriously difficult to have a freedom of information (FoI) request granted, as officials exploit various legal exemptions to refuse them. The successful use of the legislation to probe Kyle's AI chatbot use has led some to conclude that a new precedent has been set, one that will have officials across Whitehall sweating over their recent chatbot interactions.

"It's the first time I've come across that information being released, but it's completely in line with the basic principles of the act," said Martin Rosenbaum, a former BBC journalist and FoI consultant. "If ministers or officials are doing stuff on their phone or computers which is done for work purposes on behalf of a public authority, that is subject to FoI - whatever device they're using. The same logic would apply to whatever prompts you type into ChatGPT, or any other AI."

WhatsApp messages and texts - even on personal phones - can already be subject to FoI requests, thanks to a series of fiercely fought cases. In practice, however, officials have become adept at finding ways to knock back or heavily redact responses. Given that attitude, the granting of access to Kyle's ChatGPT queries has shocked experts. "I'm surprised the department didn't fight it harder," said Rosenbaum. "Some departments would have tried to resist it all the way. Obviously, it's going to prompt a lot of other requests."

Chris Stokel-Walker, the journalist behind the request for Kyle's ChatGPT use, already has plans to ask for further disclosures. "My jaw dropped - I thought there was no way they're going to give over this data," he said. "In the time since I got the response, I have put in other requests for other generative AI interactions."

Given this success, could the act now be used to reveal ministerial Google search requests? Rosenbaum said there was no reason in law stopping such a move. "As journalists, that is a theory we should now test," said Stokel-Walker.

There is still plenty of scope to limit what is revealed, however. Access to Kyle's ChatGPT queries was initially denied on the basis that some of his prompts were made in a personal capacity. It was later granted when the request was limited to prompts he made in an official capacity. But the difference between personal and official use is a grey area. Heather Brooke, whose use of an FoI request helped break the MPs' expenses scandal, said the distinction could be used to keep information secret. "It does give power of interpretation to people who have an interest in keeping things hidden," she said. "With MPs' expenses, some of the most egregious abuses were the exact things that they tried to claim were private."

After this rare victory, Rosenbaum envisages a future in which AI could play an even bigger role in the legislation - becoming both the creator of requests as well as the target of them: "I'm sure FoI requesters themselves will be typing into ChatGPT - what should I request next?"
[6]
Science Secretary uses ChatGPT to come up with policy advice
The Science Secretary has used ChatGPT to come up with policy advice and tips for which podcasts he should appear on, records have shown.

Peter Kyle, the Cabinet minister with responsibility for artificial intelligence, asked the chatbot why small businesses in the UK have been slow to adopt the technology. He also requested suggestions for the "best podcasts" he could appear on to reach a "wide audience", as well as definitions for terms such as "digital inclusion".

The records were obtained by New Scientist magazine under the Freedom of Information Act, in what is thought to be the first time the legislation has been applied to ministers' use of chatbots.

Sir Keir Starmer has actively encouraged the use of AI in Whitehall, saying he is "determined to seize" the "golden opportunity" offered by the technology. Civil servants have been told to abide by the mantra that "no person's substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard".

The records show how the technology is being used at ministerial level, with Mr Kyle drawing on ChatGPT for advice in his role as Science Secretary. He has previously said he uses the chatbot to learn on the job, telling PoliticsHome it can be a "very good tutor" when "there are things that you really struggle to understand in depth".
[7]
Freedom of Information Request Reveals UK Science Minister Used ChatGPT for AI Policy Guidance
The U.K. Science and Technology Secretary Peter Kyle (pictured with Labour leader Sir Keir Starmer) consulted ChatGPT on policy issues, a freedom of information request has revealed. Credit: Charles McQuillan, Getty Images.

Since its debut in 2022, ChatGPT has found fans in diverse sectors and settings, even among the highest echelons of government. In the U.K., Science and Technology Secretary Peter Kyle used the chatbot to brainstorm policy issues, among other things, a freedom of information request by the New Scientist has revealed.

How Peter Kyle Uses ChatGPT
According to the New Scientist, Kyle prompted ChatGPT for advice on why the adoption of artificial intelligence is so slow in the U.K. business community. ChatGPT then offered a list of factors that slowed adoption, including "limited awareness and understanding," "regulatory and ethical concerns," and "lack of government or institutional support." Kyle also asked the chatbot which podcasts he should appear on, writing: "What would be the best podcasts for me to appear on to reach a wide audience that's appropriate for my ministerial responsibilities?" In response, ChatGPT recommended The Infinite Monkey Cage and The Naked Scientists.

A New Frontier for Freedom of Information
Freedom of information laws in the U.K. have long been a tool for journalists and citizens to uncover what goes on behind closed doors in government. In the past, freedom of information requests have revealed government emails, meeting minutes, and internal memos. But the New Scientist's latest request represents the first time they have been used to gain access to a minister's interactions with a chatbot.

U.K. Government Encourages AI Adoption
While there isn't a clear connection between ChatGPT's advice and government policy, encouraging AI adoption is certainly on the government's agenda. On Thursday, March 13, Prime Minister Keir Starmer announced plans to use AI across the public sector to boost efficiency. Calling the technology a "golden opportunity," he said: "I'm going to send teams into every government department with a clear mission from me to make the state more innovative and efficient."
The UK's Technology Secretary, Peter Kyle, has been using ChatGPT for policy advice and media guidance, raising questions about AI's role in government and potential implications for freedom of information laws.
In a groundbreaking development, New Scientist has obtained records of UK Technology Secretary Peter Kyle's ChatGPT usage through a Freedom of Information (FOI) request [1]. This unprecedented disclosure has sparked discussions about the role of AI in government and its implications for transparency and policy-making.
Peter Kyle, an advocate for AI within the government, has been using ChatGPT for various work-related purposes. His queries included:
- why the adoption of AI has been so slow among UK small and medium-sized businesses
- which podcasts he should appear on to reach a wide audience appropriate to his ministerial responsibilities
- definitions of terms relevant to his department, such as "antimatter", "quantum" and "digital inclusion"
The chatbot provided responses on issues such as limited awareness, regulatory concerns, and lack of government support hindering AI adoption in small and medium-sized businesses [1].
The release of Kyle's ChatGPT interactions is considered a potential precedent-setter for FOI requests. Experts are surprised by the disclosure, as similar requests for ministers' Google search histories have typically been rejected [3]. This development raises questions about how AI interactions should be classified under FOI laws.
Prime Minister Keir Starmer has called AI a "golden opportunity" for reshaping the state and increasing efficiency [4]. The government claims to be using AI as a labour-saving tool, supported by guidelines for its safe and quick implementation [2].
Some critics have expressed worry about the government's approach to AI. Beeban Kidron, the film director and member of the House of Lords leading opposition to the government's AI copyright plans, said she was "a bit worried that the science and innovation department is bedazzled with technical developments and not doing enough to protect UK democratic and economic interests" [4].
The disclosure has sparked a debate on the appropriate use of AI in government: commentators note that large language models remain prone to confident-sounding inaccuracies, and some argue that ministers should rely on tools provided by their own departments rather than privately developed commercial chatbots [1].
As the UK government pushes for greater AI adoption, questions remain about its implementation and regulation, including whether chatbot interactions should routinely be subject to freedom of information laws, how personal use is to be distinguished from official use, and whether existing "safe space" exemptions for ministerial deliberation should apply [5].
This revelation about the Technology Secretary's ChatGPT usage marks a significant moment in the ongoing discussion about AI's role in government and its implications for policy-making and transparency in the digital age.