2 Sources
[1]
New AI model helps choose doctors based on diagnoses and decision-making
University of Texas at Austin, Jan 13 2025

Years ago, as she sat in waiting rooms, Maytal Saar-Tsechansky began to wonder how people chose a good doctor when they had no way of knowing a doctor's track record on accurate diagnoses. Talking to other patients, she found they sometimes based choices on a physician's personality or even the quality of their office furniture.

"I realized all these signals people are using are just not the right ones. We were operating in complete darkness, like there's no transparency on these things," says Maytal Saar-Tsechansky, professor of information, risk, and operations management at Texas McCombs.

In new research, she uses artificial intelligence to judge the judges: to evaluate the rates at which experts make successful decisions. Her machine learning algorithm can appraise both doctors and other kinds of experts - such as engineers who diagnose mechanical problems - when their success rates are not publicly available or not scrutinized beyond small groups of peers.

Prior research has studied how accurate doctors' diagnoses are, but not in ways that can be scaled up or monitored on an ongoing basis, Saar-Tsechansky says. More effective methods are vital today, she adds, when medical systems are deploying AI to help with diagnoses. It will be difficult to determine whether AI is helping or hurting successful diagnoses if observers can't tell how successful a doctor was without the AI assist.

Evaluating the experts

With McCombs doctoral student Wanxue Dong and Tomer Geva of Tel Aviv University in Israel, Saar-Tsechansky created an algorithm they call MDE-HYB. It integrates two forms of information: overall data about the quality of an expert's past decisions and more detailed evaluations of specific cases.

They then compared MDE-HYB's results with other kinds of evaluators: three alternative algorithms and 40 human reviewers. To test the flexibility of MDE-HYB's ratings, three very different kinds of data were analyzed: sales tax audits, spam, and online movie reviews on IMDb. In each case, evaluators judged prior decisions made by experts about the data, such as whether they accurately classified movie reviews as positive or negative.

For all three sets, MDE-HYB equaled or bested all challengers. Against other algorithms, its error rates were up to 95% lower. Against humans, they were up to 72% lower.

The researchers also tested MDE-HYB on Saar-Tsechansky's original concern: selecting a doctor based on the doctor's history of correct diagnoses. Compared with doctors chosen by another algorithm, MDE-HYB dropped the average misdiagnosis rate by 41%. In real-world use, such a difference could translate to better patient outcomes and lower costs, she says.

She cautions that MDE-HYB needs more work before putting it to such practical uses. "The main purpose of this paper was to get this idea out there, to get people to think about it, and hopefully people will improve this method," she says. But she hopes it can one day help managers and regulators monitor expert workers' accuracy and decide when to intervene, if improvement is needed. Also, it might help consumers choose service providers such as doctors.

"In every profession where people make these types of decisions, it would be valuable to assess the quality of decision-making," Saar-Tsechansky says. "I don't think that any of us should be off the hook, especially if we make consequential decisions."

Source: University of Texas at Austin. Journal reference: Dong, W., et al. (2024). A Machine Learning Framework for Assessing Experts' Decision Quality. Management Science. doi.org/10.1287/mnsc.2021.03357
[2]
AI tool aims to improve expert decision-making accuracy
Researchers at the University of Texas at Austin have developed an AI algorithm called MDE-HYB that evaluates the accuracy of experts' decisions, with the potential to change how patients choose doctors and how professional performance is assessed.
Researchers at the University of Texas at Austin have introduced a groundbreaking AI model designed to evaluate the accuracy of expert decision-making, with potential applications ranging from healthcare to engineering. The machine learning algorithm, named MDE-HYB, aims to address the longstanding challenge of assessing professional performance, particularly in fields where success rates are not publicly available or scrutinized beyond small peer groups [1][2].
The research was inspired by Professor Maytal Saar-Tsechansky's personal experiences in medical waiting rooms. She observed that patients often based their choice of doctors on superficial factors such as personality or office decor, rather than on the physician's diagnostic accuracy [1].
"I realized all these signals people are using are just not the right ones. We were operating in complete darkness, like there's no transparency on these things," Saar-Tsechansky explained [1].
Developed by Saar-Tsechansky, doctoral student Wanxue Dong, and Tomer Geva from Tel Aviv University, the MDE-HYB algorithm integrates two key components:
- Overall data about the quality of an expert's past decisions
- More detailed evaluations of the expert's decisions on specific cases
This approach allows for a more comprehensive assessment of expert performance [1][2].
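The articles do not spell out how MDE-HYB combines these two signals, so the following Python sketch only illustrates the general idea: an expert's aggregate track record is blended with a handful of individually re-checked cases. The ExpertRecord fields, the Beta-Bernoulli-style weighting, and all the numbers are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class ExpertRecord:
    # Aggregate signal: historical share of decisions judged correct,
    # and how many past decisions that estimate is based on.
    historical_accuracy: float
    historical_n: int
    # Case-level signal: a small sample of recent decisions that were
    # re-checked against ground truth (True = correct, False = wrong).
    reviewed_cases: list[bool]

def estimate_accuracy(rec: ExpertRecord, prior_strength: float = 0.1) -> float:
    """Blend the aggregate track record with case-level reviews.

    A simple Beta-Bernoulli shrinkage estimate (an illustrative stand-in,
    not the MDE-HYB estimator): the aggregate history acts as a prior,
    and each individually reviewed case updates it. prior_strength controls
    how much weight the history gets relative to the audited cases.
    """
    prior_correct = rec.historical_accuracy * rec.historical_n * prior_strength
    prior_wrong = (1 - rec.historical_accuracy) * rec.historical_n * prior_strength
    correct = sum(rec.reviewed_cases)
    wrong = len(rec.reviewed_cases) - correct
    return (prior_correct + correct) / (prior_correct + prior_wrong + correct + wrong)

if __name__ == "__main__":
    # Hypothetical expert: 85% accuracy over 200 past cases, but 3 of 10
    # recently audited decisions were wrong, so the blended estimate drops.
    expert = ExpertRecord(historical_accuracy=0.85, historical_n=200,
                          reviewed_cases=[True] * 7 + [False] * 3)
    print(f"Blended accuracy estimate: {estimate_accuracy(expert):.3f}")
```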
The researchers rigorously tested MDE-HYB against other evaluation methods:
- Three alternative algorithms, against which its error rates were up to 95% lower
- 40 human reviewers, against whom its error rates were up to 72% lower
The algorithm's flexibility was validated using diverse datasets, including sales tax audits, spam detection, and online movie reviews on IMDb; MDE-HYB equaled or bested all challengers on all three [1][2].
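The articles give no protocol details for this head-to-head comparison, but the reported error rates suggest that each evaluator's judgments of expert decisions were scored against known ground truth. The sketch below, with made-up evaluator names and data, shows one way such an error rate could be computed.

```python
def evaluator_error_rate(predicted_correct: list[bool],
                         actually_correct: list[bool]) -> float:
    """Fraction of expert decisions whose correctness the evaluator misjudged."""
    assert len(predicted_correct) == len(actually_correct)
    misjudged = sum(p != a for p, a in zip(predicted_correct, actually_correct))
    return misjudged / len(actually_correct)

# Hypothetical benchmark: for 6 expert decisions we know which were truly
# correct, and we have two evaluators' judgments of those same decisions.
ground_truth = [True, True, False, True, False, True]
evaluators = {
    "hybrid_model":   [True, True, False, True, True, True],   # one misjudgment
    "human_reviewer": [True, False, False, True, True, True],  # two misjudgments
}
for name, judgments in evaluators.items():
    print(f"{name}: error rate = {evaluator_error_rate(judgments, ground_truth):.2f}")
```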
While the research originated from concerns about choosing doctors, the potential applications of MDE-HYB extend far beyond healthcare. The algorithm could be used to:
- Help managers and regulators monitor expert workers' accuracy and decide when to intervene
- Help consumers choose service providers, such as doctors
- Assess the decision quality of other experts, such as engineers who diagnose mechanical problems
Saar-Tsechansky emphasizes the broad applicability of the tool: "In every profession where people make these types of decisions, it would be valuable to assess the quality of decision-making" [1].
The researchers acknowledge that MDE-HYB requires further refinement before practical implementation. Saar-Tsechansky stated, "The main purpose of this paper was to get this idea out there, to get people to think about it, and hopefully people will improve this method" [1][2].
As AI continues to play an increasingly significant role in various professional fields, tools like MDE-HYB may become crucial in ensuring accountability and improving decision-making processes across industries.