Curated by THEOUTPOST
On Wed, 9 Oct, 8:02 AM UTC
4 Sources
[1]
Will AI tools revolutionize public health? Not if they continue following old patterns, researchers argue
by Allison Arteaga Soergel, University of California - Santa Cruz

As tools powered by artificial intelligence increasingly make their way into health care, the latest research from UC Santa Cruz Politics Department doctoral candidate Lucia Vitale takes stock of the current landscape of promises and anxieties.

Proponents of AI envision the technology helping to manage health care supply chains, monitor disease outbreaks, make diagnoses, interpret medical images, and even reduce equity gaps in access to care by compensating for health care worker shortages. But others are sounding the alarm about issues like privacy rights, racial and gender biases in models, lack of transparency in AI decision-making processes that could lead to patient care mistakes, and even the potential for insurance companies to use AI to discriminate against people with poor health. Which types of impacts these tools ultimately have will depend upon the manner in which they are developed and deployed.

In a paper for the journal Social Science & Medicine, Vitale and her co-author, University of British Columbia doctoral candidate Leah Shipton, conducted an extensive literature analysis of AI's current trajectory in health care. They argue that AI is positioned to become the latest in a long line of technological advances that ultimately have limited impact because they engage in a "politics of avoidance" that diverts attention away from, or even worsens, more fundamental structural problems in global public health.

For example, like many technological interventions of the past, most AI being developed for health focuses on treating disease while ignoring the underlying determinants of health. Vitale and Shipton fear that the hype over unproven AI tools could distract from the urgent need to implement low-tech but evidence-based holistic interventions, like community health workers and harm reduction programs.

"We have seen this pattern before," Vitale said.
"We keep investing in these tech silver bullets that fail to actually change public health because they're not dealing with the deeply rooted political and social determinants of health, which can range from things like health policy priorities to access to healthy foods and a safe place to live." AI is also likely to continue or exacerbate patterns of harm and exploitation that have historically been common in the biopharmaceutical industry. One example discussed in the paper is that the ownership of and profit from AI is currently concentrated in high-income countries, while low- to middle-income countries with weak regulations may be targeted for data extraction or experimentation with the deployment of potentially risky new technologies. The paper also predicts that lax regulatory approaches to AI will continue the prioritization of intellectual property rights and industry incentives over equitable and affordable public access to new treatments and tools. And since corporate profit motives will continue to drive product development, AI companies are also likely to follow the health technology sector's long-term trend of overlooking the needs of the world's poorest people when deciding which issues to target for investment in research and development. However, Vitale and Shipton did identify a bright spot. AI could potentially break the mold and create a deeper impact by focusing on improving the health care system itself. AI could be used to allocate resources more efficiently across hospitals and for more effective patient triage. Diagnostic tools could improve the efficiency and expand the capabilities of general practitioners in small rural hospitals without specialists. AI could even provide some basic yet essential health services to fill labor and specialization gaps, like providing prenatal check-ups in areas with growing maternity care deserts. All of these applications could potentially result in more equitable access to care. 
But that result is far from guaranteed. Depending on how and where these technologies are deployed, they could either successfully backfill gaps in care where there are genuine health worker shortages or lead to unemployment or precarious gig work for existing health care workers. And unless the underlying causes of health care worker shortages are addressed -- including burnout and "brain drain" to high-income countries -- AI tools could end up providing diagnosis or outbreak detection that is ultimately not useful because communities still lack the capacity to respond.

To maximize benefits and minimize harms, Vitale and Shipton argue that regulation must be put in place before AI expands further into the health sector. The right safeguards could help to divert AI from following harmful patterns of the past and instead chart a new path that ensures future projects will align with the public interest.

"With AI, we have an opportunity to correct our way of governing new technologies," Shipton said. "But we need a clear agenda and framework for the ethical governance of AI health technologies through the World Health Organization, major public-private partnerships that fund and deliver health interventions, and countries like the United States, India, and China that host tech companies. Getting that implemented is going to require continued civil society advocacy."
[2]
Will AI tools revolutionize public health? Not if they continue following old patterns, researchers argue
[3]
Will AI tools revolutionize public health? Not if they continue following old patterns, researchers argue | Newswise
[4]
AI's potential to improve healthcare access could be offset by exploitation
University of California - Santa Cruz, Oct 9 2024

Journal reference: Shipton, L., & Vitale, L. (2024). Artificial Intelligence and the Politics of Avoidance in Global Health. Social Science & Medicine. doi.org/10.1016/j.socscimed.2024.117274
A new study by UC Santa Cruz and University of British Columbia researchers highlights the potential of AI in healthcare while warning about its limitations in addressing fundamental public health issues.
Artificial Intelligence (AI) is rapidly making inroads into healthcare, promising revolutionary changes in various aspects of the industry. However, a recent study by researchers from the University of California, Santa Cruz and the University of British Columbia raises important questions about the technology's potential impact on public health [1][2][3][4].
Proponents of AI envision numerous applications that could transform healthcare, from managing supply chains and monitoring disease outbreaks to making diagnoses and interpreting medical images.
AI could potentially improve resource allocation across hospitals, enhance patient triage, and expand the capabilities of general practitioners in rural areas lacking specialists [1][2][3].
Despite these promising applications, the researchers, Lucia Vitale and Leah Shipton, highlight several concerns.
The study argues that AI might become another technological advance with limited impact due to its engagement in a "politics of avoidance" [1][2][3][4].
Vitale and Shipton contend that AI in healthcare often focuses on treating diseases while ignoring underlying health determinants. This approach could divert attention from more fundamental structural problems in global public health [1][2][3][4].
"We keep investing in these tech silver bullets that fail to actually change public health because they're not dealing with the deeply rooted political and social determinants of health," says Vitale [1][2][3][4].
The researchers warn that AI could perpetuate or worsen existing patterns of harm and exploitation in the healthcare industry, such as the concentration of ownership and profit in high-income countries while low- to middle-income countries are targeted for data extraction and experimentation.
To maximize benefits and minimize harm, Vitale and Shipton argue for implementing regulations before AI further expands in the health sector, including a clear agenda and framework for the ethical governance of AI health technologies through the World Health Organization, major public-private partnerships, and the countries that host tech companies.
As Shipton notes, "With AI, we have an opportunity to correct our way of governing new technologies" [1][2][3][4]. The challenge lies in ensuring that AI's potential in healthcare is realized while avoiding the pitfalls that have limited the impact of previous technological advances in public health.
Reference
[1] Medical Xpress - Medical and Health News: Will AI tools revolutionize public health? Not if they continue following old patterns, researchers argue