2 Sources
[1]
UK government must show that its AI plan can be trusted to deal with serious risks when it comes to health data
The UK government's new plan to foster innovation through artificial intelligence (AI) is ambitious. Its goals rely on the better use of public data, including renewed efforts to maximize the value of health data held by the NHS. Yet this could involve the use of real data from patients using the NHS. This has been highly controversial in the past, and previous attempts to use this health data have at times come close to disaster.
Patient data would be anonymized, but concerns remain about potential threats to this anonymity. For example, the use of health data has been accompanied by worries about access to data for commercial gain.
The care.data program, which collapsed in 2014, had a similar underlying idea: sharing health data across the country with both publicly funded research bodies and private companies. Poor communication about the more controversial elements of the project and a failure to listen to concerns led to the program being shelved. More recently, the involvement of the US tech company Palantir in the new NHS data platform raised questions about who can and should access data.
The new effort to use health data to train (or improve) AI models similarly relies on public support for success. Yet perhaps unsurprisingly, within hours of the announcement, media outlets and social media users attacked the plan as a way of monetizing health data. "Ministers mull allowing private firms to make profit from NHS data in AI push," read one published headline.
These responses, and those to care.data and Palantir, reflect just how important public trust is in the design of policy. This is true no matter how complicated technology becomes -- and crucially, trust becomes more important as societies increase in scale and we are less able to see or understand every part of the system.
It can be difficult, if not impossible, to judge where we should place trust and how to do that well. This holds true whether we are talking about governments, companies, or even just acquaintances -- to trust (or not) is a decision each of us must make every day.
The challenge of trust motivates what we call the "trustworthiness recognition problem": working out who is worthy of our trust is a challenge that stems from the origins of human social behavior. The problem comes from a simple issue: anyone can claim to be trustworthy, and we can lack sure ways to tell whether they genuinely are. If someone moves into a new home and sees ads for different internet providers online, there is no sure way to tell which will be cheaper or more reliable.
Presentation doesn't need to -- and often doesn't -- reflect anything about a person's or group's underlying qualities. Carrying a designer handbag or wearing an expensive watch doesn't guarantee the wearer is wealthy.
Luckily, work in anthropology, psychology and economics shows how people -- and, by extension, institutions such as political bodies -- can overcome this problem. This work is known as signaling theory, and it explains how and why communication, the passing of information from a signaler to a receiver, evolves even when the individuals communicating are in conflict. For example, people moving between groups may have reasons to lie about their identities. They might want to hide something unpleasant about their own past. Or they might claim to be a relative of someone wealthy or powerful in a community.
Zadie Smith's recent book, "The Fraud," is a fictionalized take on this popular theme, exploring aristocratic life in Victorian England. Yet it's just not possible to fake some qualities. A fraud can claim to be an aristocrat, a doctor, or an AI expert. The signals such frauds unintentionally give off will, however, give them away over time. A false aristocrat will probably not fake his demeanor or accent convincingly enough (accents, among other signals, are difficult to fake for those familiar with them).
The structure of society is obviously different from that of two centuries ago, but the problem, at its core, is the same -- as, we think, is the solution. Much as there are ways for a truly wealthy person to prove their wealth, a trustworthy person or group must be able to show they are worth trusting. The way or ways this is possible will undoubtedly vary from context to context, but we believe that political bodies such as governments must demonstrate a willingness to listen and respond to the public about their concerns.
The care.data project was criticized because it was publicized via leaflets dropped at people's doors that did not include an opt-out. This failed to signal to the public a real desire to alleviate people's concerns that information about them would be misused or sold for profit. The current plan around the use of data to develop AI algorithms must be different.
Our political and scientific institutions have a duty to signal their commitment to the public by listening to them, and through doing so to develop cohesive policies that minimize the risks to individuals while maximizing the potential benefits for all. The key is to put sufficient funding and effort into signaling -- into demonstrating -- an honest motivation to engage with the public about their concerns.
The government and scientific bodies have a duty to listen to the public, and further to explain how people's data will be protected. Saying "trust me" is never enough. You have to show you are worth it.
[2]
Government needs to show that its AI plan can be trusted to deal with serious risks when it comes to health data
The UK government's new plan to foster innovation through artificial intelligence (AI) is ambitious. Its goals rely on the better use of public data, including renewed efforts to maximise the value of health data held by the NHS. Yet this could involve the use of real data from patients using the NHS. This has been highly controversial in the past, and previous attempts to use this health data have at times come close to disaster.
Patient data would be anonymised, but concerns remain about potential threats to this anonymity. For example, the use of health data has been accompanied by worries about access to data for commercial gain.
The care.data programme, which collapsed in 2014, had a similar underlying idea: sharing health data across the country with both publicly funded research bodies and private companies. Poor communication about the more controversial elements of the project and a failure to listen to concerns led to the programme being shelved. More recently, the involvement of the US tech company Palantir in the new NHS data platform raised questions about who can and should access data.
The new effort to use health data to train (or improve) AI models similarly relies on public support for success. Yet perhaps unsurprisingly, within hours of the announcement, media outlets and social media users attacked the plan as a way of monetising health data. "Ministers mull allowing private firms to make profit from NHS data in AI push," read one published headline.
These responses, and those to care.data and Palantir, reflect just how important public trust is in the design of policy. This is true no matter how complicated technology becomes - and crucially, trust becomes more important as societies increase in scale and we're less able to see or understand every part of the system.
It can, though, be difficult, if not impossible, to judge where we should place trust and how to do that well. This holds true whether we are talking about governments, companies, or even just acquaintances - to trust (or not) is a decision each of us must make every day.
The challenge of trust motivates what we call the "trustworthiness recognition problem": working out who is worthy of our trust is a challenge that stems from the origins of human social behaviour. The problem comes from a simple issue: anyone can claim to be trustworthy, and we can lack sure ways to tell whether they genuinely are. If someone moves into a new home and sees ads for different internet providers online, there isn't a sure way to tell which will be cheaper or more reliable.
Presentation doesn't need to - and often doesn't - reflect anything about a person's or group's underlying qualities. Carrying a designer handbag or wearing an expensive watch doesn't guarantee the wearer is wealthy.
Luckily, work in anthropology, psychology and economics shows how people - and, by extension, institutions such as political bodies - can overcome this problem. This work is known as signalling theory, and it explains how and why communication, the passing of information from a signaller to a receiver, evolves even when the individuals communicating are in conflict. For example, people moving between groups may have reasons to lie about their identities. They might want to hide something unpleasant about their own past. Or they might claim to be a relative of someone wealthy or powerful in a community.
Zadie Smith's recent book, The Fraud, is a fictionalised take on this popular theme, exploring aristocratic life in Victorian England. Yet it's just not possible to fake some qualities. A fraud can claim to be an aristocrat, a doctor, or an AI expert. The signals such frauds unintentionally give off will, however, give them away over time. A false aristocrat will probably not fake his demeanour or accent convincingly enough (accents, among other signals, are difficult to fake for those familiar with them).
The structure of society is obviously different from that of two centuries ago, but the problem, at its core, is the same - as, we think, is the solution. Much as there are ways for a truly wealthy person to prove their wealth, a trustworthy person or group must be able to show they are worth trusting. The way or ways this is possible will undoubtedly vary from context to context, but we believe that political bodies such as governments must demonstrate a willingness to listen and respond to the public about their concerns.
The care.data project was criticised because it was publicised via leaflets dropped at people's doors that did not include an opt-out. This failed to signal to the public a real desire to alleviate people's concerns that information about them would be misused or sold for profit. The current plan around the use of data to develop AI algorithms needs to be different.
Our political and scientific institutions have a duty to signal their commitment to the public by listening to them, and through doing so to develop cohesive policies that minimise the risks to individuals while maximising the potential benefits for all. The key is to put sufficient funding and effort into signalling - into demonstrating - an honest motivation to engage with the public about their concerns.
The government and scientific bodies have a duty to listen to the public, and further to explain how people's data will be protected. Saying "trust me" is never enough: you have to show you are worth it.
The UK government's ambitious AI innovation plan, which aims to leverage NHS health data, faces public skepticism and trust issues due to past controversies and concerns about data privacy and commercialization.
The UK government's ambitious plan to foster innovation through artificial intelligence (AI) has ignited a heated debate over the use of NHS health data. The initiative aims to maximize the value of public health information, but it faces significant challenges in gaining public trust and addressing concerns about data privacy and commercialization [1].
The current proposal bears similarities to previous attempts at leveraging NHS data, which have been met with public resistance. The care.data program, which collapsed in 2014, sought to share health data with both public research bodies and private companies. Its failure was attributed to poor communication and a lack of responsiveness to public concerns [2].
More recently, the involvement of US tech company Palantir in the NHS data platform raised questions about data access and control. These past controversies have set a precedent of skepticism that the government must now overcome [1].
Within hours of the announcement, media outlets and social media users criticized the plan, framing it as a potential monetization of health data. Headlines such as "Ministers mull allowing private firms to make profit from NHS data in AI push" reflect the immediate public concern and distrust surrounding the initiative [2].
The government's challenge in implementing this AI plan highlights what experts call the "trustworthiness recognition problem." This concept underscores the difficulty in determining who is worthy of trust, especially in complex societal structures where individuals cannot fully comprehend every aspect of a system [1].
To address this issue, experts point to signaling theory, which explains how communication evolves even when parties have conflicting interests. For the government to gain public trust, it must demonstrate its trustworthiness through actions rather than mere claims [2].
To succeed where previous initiatives failed, the government and scientific institutions must:
- demonstrate a genuine willingness to listen and respond to public concerns about how health data will be used;
- communicate clearly about the more controversial elements of the plan, including any commercial access to data and how people can opt out;
- explain how patient data will be protected, minimizing the risks to individuals while maximizing the potential benefits for all; and
- commit sufficient funding and effort to public engagement, demonstrating trustworthiness through action rather than claims.
As the UK government moves forward with its AI innovation plan, the success of the initiative will largely depend on its ability to address these trust issues and demonstrate a genuine commitment to protecting public interests while advancing technological innovation.