3 Sources
[1]
How artificial intelligence controls your health insurance coverage
Over the past decade, health insurance companies have increasingly embraced the use of artificial intelligence algorithms. Unlike doctors and hospitals, which use AI to help diagnose and treat patients, health insurers use these algorithms to decide whether to pay for health care treatments and services that are recommended by a given patient's physicians.

One of the most common examples is prior authorization, which is when your doctor needs to receive payment approval from your insurance company before providing you care. Many insurers use an algorithm to decide whether the requested care is "medically necessary" and should be covered. These AI systems also help insurers decide how much care a patient is entitled to - for example, how many days of hospital care a patient can receive after surgery.

If an insurer declines to pay for a treatment your doctor recommends, you usually have three options. You can try to appeal the decision, but that process can take a lot of time, money and expert help. Only 1 in 500 claim denials is appealed. You can agree to a different treatment that your insurer will cover. Or you can pay for the recommended treatment yourself, which is often not realistic because of high health care costs.

As a legal scholar who studies health law and policy, I'm concerned about how insurance algorithms affect people's health. As with the AI algorithms used by doctors and hospitals, these tools can potentially improve care and reduce costs. Insurers say that AI helps them make quick, safe decisions about what care is necessary and avoid wasteful or harmful treatments. But there's strong evidence that the opposite can be true. These systems are sometimes used to delay or deny care that should be covered, all in the name of saving money.

A pattern of withholding care

Presumably, companies feed a patient's health care records and other relevant information into health care coverage algorithms and compare that information with current medical standards of care to decide whether to cover the patient's claim. However, insurers have refused to disclose how these algorithms work in making such decisions, so it is impossible to say exactly how they operate in practice.

Using AI to review coverage saves insurers time and resources, especially because it means fewer medical professionals are needed to review each case. But the financial benefit to insurers doesn't stop there. If an AI system quickly denies a valid claim and the patient appeals, that appeal process can take years. If the patient is seriously ill and expected to die soon, the insurance company might save money simply by dragging out the process in the hope that the patient dies before the case is resolved.

This creates the disturbing possibility that insurers might use algorithms to withhold care for expensive, long-term or terminal health problems, such as chronic conditions or other debilitating disabilities. One reporter put it bluntly: "Many older adults who spent their lives paying into Medicare now face amputation or cancer and are forced to either pay for care themselves or go without." Research supports this concern: patients with chronic illnesses are more likely to be denied coverage and to suffer as a result. In addition, Black and Hispanic people and those of other nonwhite ethnicities, as well as people who identify as lesbian, gay, bisexual or transgender, are more likely to experience claims denials. Some evidence also suggests that prior authorization may increase rather than decrease health care system costs.
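Since insurers do not disclose how these systems work, the following is only a minimal, hypothetical sketch of the data flow the paragraph above describes: patient records and a requested service go in, they are checked against generic coverage criteria, and an approve-or-deny decision comes out. Every name here (Claim, COVERAGE_RULES, review_claim, the diagnosis code and day limit) is invented for illustration and does not describe any real insurer's system.

```python
# Hypothetical sketch of an algorithmic prior-authorization check.
# All rules, codes and thresholds are invented; real systems are undisclosed.
from dataclasses import dataclass

@dataclass
class Claim:
    diagnosis_code: str     # e.g., an ICD-10 code pulled from patient records
    requested_service: str  # service the physician is requesting
    requested_days: int     # length of care requested, e.g., hospital days

# A generic "standard of care" lookup an insurer might maintain.
COVERAGE_RULES = {
    ("S72.001A", "inpatient_rehab"): {"max_days": 14},  # example entry only
}

def review_claim(claim: Claim) -> str:
    """Return 'approve', 'deny', or 'human_review' for a claim."""
    rule = COVERAGE_RULES.get((claim.diagnosis_code, claim.requested_service))
    if rule is None:
        # No matching criterion: route to a human reviewer.
        return "human_review"
    if claim.requested_days <= rule["max_days"]:
        return "approve"
    # Requests beyond the generic cap are denied. The article's concern is
    # precisely that generic caps like this can override individual needs.
    return "deny"

print(review_claim(Claim("S72.001A", "inpatient_rehab", requested_days=21)))  # deny
```

Even in this toy version, the key policy question is visible: the decision turns entirely on who writes the rule table and whether anyone outside the insurer ever gets to inspect it.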
Insurers argue that patients can always pay for any treatment themselves, so they're not really being denied care. But this argument ignores reality. These decisions have serious health consequences, especially when people can't afford the care they need.

Moving toward regulation

Unlike medical algorithms, insurance AI tools are largely unregulated. They don't have to go through Food and Drug Administration review, and insurance companies often say their algorithms are trade secrets. That means there's no public information about how these tools make decisions, and there's no outside testing to see whether they're safe, fair or effective. No peer-reviewed studies exist to show how well they actually work in the real world.

There does seem to be some momentum for change. The Centers for Medicare & Medicaid Services, or CMS, which is the federal agency in charge of Medicare and Medicaid, recently announced that insurers in Medicare Advantage plans must base decisions on the needs of individual patients - not just on generic criteria. But these rules still let insurers create their own decision-making standards, and they still don't require any outside testing to prove their systems work before use. Plus, federal rules can regulate only federal health programs like Medicare. They do not apply to private insurers that do not provide federal health program coverage.

Some states, including Colorado, Georgia, Florida, Maine and Texas, have proposed laws to rein in insurance AI. A few have passed new laws, including a 2024 California statute that requires a licensed physician to supervise the use of insurance coverage algorithms. But most state laws suffer from the same weaknesses as the new CMS rule. They leave too much control in the hands of insurers to decide how to define "medical necessity" and in what contexts to use algorithms for coverage decisions. They also don't require those algorithms to be reviewed by neutral experts before use. And even strong state laws wouldn't be enough, because states generally can't regulate Medicare or insurers that operate outside their borders.

A role for the FDA

In the view of many health law experts, the gap between insurers' actions and patient needs has become so wide that regulating health care coverage algorithms is now imperative. As I argue in an essay to be published in the Indiana Law Journal, the FDA is well positioned to do so. The agency is staffed with medical experts capable of evaluating insurance algorithms before they are used to make coverage decisions, and it already reviews many medical AI tools for safety and effectiveness. FDA oversight would also provide a uniform, national regulatory scheme instead of a patchwork of rules across the country.

Some people argue that the FDA's power here is limited. For the purposes of FDA regulation, a medical device is defined as an instrument "intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease." Because health insurance algorithms are used to determine coverage rather than to diagnose, treat or prevent disease, the FDA's current authority may not reach them; Congress could amend the definition of a medical device to give the agency that power. Meanwhile, CMS and state governments could require independent testing of these algorithms for safety, accuracy and fairness.
That might also push insurers to support a single national standard - like FDA regulation - instead of facing a patchwork of rules across the country. The move toward regulating how health insurers use AI in determining coverage has clearly begun, but it is still awaiting a robust push. Patients' lives are literally on the line.
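As a concrete illustration of the independent testing for safety, accuracy and fairness proposed above, here is a hedged sketch of two checks an outside auditor could run on a log of coverage decisions. The table below is synthetic and its column names (patient_group, denied, appealed, overturned) are assumptions for illustration; no real reporting format or insurer data is referenced.

```python
# Sketch of an independent audit of coverage decisions; data is synthetic.
import pandas as pd

# Hypothetical audit export: one row per coverage decision.
decisions = pd.DataFrame({
    "patient_group": ["A", "A", "A", "B", "B", "B"],
    "denied":        [0,   1,   0,   1,   1,   0],
    "appealed":      [0,   1,   0,   0,   1,   0],
    "overturned":    [0,   1,   0,   0,   0,   0],
})

# Check 1 - fairness: denial rate by patient group. Large, unexplained gaps
# would flag the kind of disparate denials the article describes.
print(decisions.groupby("patient_group")["denied"].mean())

# Check 2 - accuracy: how often appealed denials are overturned. A high
# overturn rate suggests the algorithm denies care that should be covered.
appealed = decisions[(decisions["denied"] == 1) & (decisions["appealed"] == 1)]
print("overturn rate on appeal:", appealed["overturned"].mean())
```

The point of the sketch is that neither check requires access to the insurer's trade-secret model, only to its decisions, which is what makes outside testing feasible even for opaque systems.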
[2]
Algorithms Are Deciding Who Gets Health Care. Patients Are Paying the Price.
Jennifer D. Oliva is a professor of law at Indiana University. This commentary was produced in partnership with The Conversation, a nonprofit, independent news organization dedicated to bringing the knowledge of academic experts to the public.
[3]
How AI is deciding your health insurance claims
June 20 (UPI) -- Jennifer D. Oliva is a professor of law at Indiana University. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.
Health insurance companies are increasingly using AI algorithms to determine coverage and care decisions, raising concerns about patient care quality, fairness, and regulation.
Health insurance companies have increasingly adopted artificial intelligence (AI) algorithms over the past decade to make crucial decisions about patient care and coverage [1][2][3]. Unlike AI used by healthcare providers for diagnosis and treatment, insurers employ these algorithms to determine whether to pay for treatments recommended by physicians and how much care a patient is entitled to receive.
One of the most common applications of AI in health insurance is in the prior authorization process. In this scenario, an algorithm decides whether requested care is "medically necessary" and should be covered [1][2][3]. These AI systems also help insurers determine the extent of care a patient can receive, such as the number of hospital days allowed after surgery.
When insurers decline coverage for a recommended treatment, patients typically face three options:
Appeal the decision, a process that can demand significant time, money and expert help; only about 1 in 500 claim denials is appealed.
Agree to an alternative treatment that the insurer will cover.
Pay for the recommended treatment out of pocket, which is often unrealistic given high health care costs.
Source: The Conversation
While insurers argue that AI helps them make quick, safe decisions and avoid wasteful treatments, there is growing concern about the potential negative impacts of these algorithms [1][2][3]:
Delayed or Denied Care: Evidence suggests that these systems may be used to delay or deny care that should be covered, prioritizing cost savings over patient health.
Lack of Transparency: Insurers have refused to disclose how these algorithms operate, citing trade secrets, which makes it impossible to evaluate their fairness and effectiveness [1][2][3].
Disproportionate Impact: Research indicates that patients with chronic illnesses, as well as Black, Hispanic, and LGBTQ+ individuals, are more likely to experience claim denials [1][2][3].
Financial Incentives: The use of AI for coverage decisions creates a disturbing possibility that insurers might withhold care for expensive, long-term, or terminal health problems to save money [1][2][3].
Unlike medical algorithms, insurance AI tools are largely unregulated [1][2][3]:
No FDA Review: These tools do not have to go through Food and Drug Administration review before use.
Trade-Secret Opacity: Insurers treat the algorithms as proprietary, so there is no public information about how they make decisions.
No Independent Evidence: No outside testing or peer-reviewed studies show whether they are safe, fair or effective in the real world.
Recent developments in regulation include:
The Centers for Medicare & Medicaid Services (CMS) announced that insurers in Medicare Advantage plans must base decisions on individual patient needs [1][2][3].
Some states, including California, have passed laws requiring physician supervision for insurance coverage algorithms [1][2][3].
However, these regulations have limitations:
Insurers still set their own decision-making standards, including how "medical necessity" is defined.
No outside testing by neutral experts is required before the algorithms are used.
Federal rules reach only federal programs such as Medicare, and states generally cannot regulate Medicare or insurers operating outside their borders.
Many health law experts argue that the gap between insurers' actions and patient needs necessitates stronger regulation of health care coverage algorithms [1][2][3]. Suggestions include:
FDA review of coverage algorithms before they are used, which may require Congress to amend the legal definition of a medical device.
Independent testing of the algorithms for safety, accuracy and fairness, mandated by CMS and state governments.
As the debate continues, the impact of AI on health insurance decisions remains a critical issue at the intersection of technology, healthcare, and policy.
Summarized by Navi
[1] The Conversation: How artificial intelligence controls your health insurance coverage
[2] U.S. News & World Report: Algorithms Are Deciding Who Gets Health Care. Patients Are Paying the Price.
[3] UPI: How AI is deciding your health insurance claims