Curated by THEOUTPOST
On Tue, 31 Dec, 12:01 AM UTC
7 Sources
[1]
Johns Hopkins and Stanford robots learn surgery by watching videos - SiliconANGLE
Johns Hopkins and Stanford robots learn surgery by watching videos Researchers from Johns Hopkins University and Stanford University have revealed details of how they are training robots with videos to perform surgical tasks with the skill of human doctors, in what could be a significant step forward in medical robotics. Robotics in surgery is not new, with various use cases over a number of years. But where the new technology from Johns Hopkins and Stanford gets interesting is how it leverages imitation learning to train robots through observation rather than explicit programming. The researchers equipped their existing da Vinci Surgical System with a machine-learning model capable of analyzing surgical procedures recorded by cameras mounted on the robot's instruments. The videos, captured during real surgeries, provide a detailed visual and kinematic representation of the tasks performed by human surgeons. To train the robots, the team used a deep learning architecture similar to those found in advanced artificial intelligence language models but adapted it to process surgical data. The adapted system analyzes video inputs alongside motion data to learn the precise movements required to complete tasks such as needle manipulation, tissue handling and suturing. The idea here is that by focusing on relative movements -- adjusting based on the robot's current position rather than following rigid, predefined paths -- the model overcomes limitations in the accuracy of the da Vinci system's kinematics. Mimicry is one thing, but the model goes further with the inclusion of a feedback mechanism that allows the robot to evaluate its own performance.
Using simulated environments, the system can compare its actions against the ideal trajectories demonstrated in the training videos, allowing the robot to refine its techniques and achieve levels of precision and dexterity comparable to highly experienced surgeons, all without the need for constant human oversight during training. To ensure that the robots could generalize their skills, the model was also exposed to a diverse range of surgical styles, environments and tasks. According to the researchers, the approach enhances the system's adaptability by allowing it to handle the nuances and unpredictability of real-world surgical procedures, which can be highly variable depending on the patient and surgeon. "In our work, we're not trying to replace the surgeon. We just want to make things easier for the surgeon," Axel Krieger, an associate professor at Johns Hopkins Whiting School of Engineering who supervised the research, told the Washington Post. "Imagine, do you want a tired surgeon, where you're the last patient of the day and the surgeon is super-exhausted? Or do you want a robot that is doing a part of that surgery and really helping out the surgeon?"
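The relative-movement idea described above can be illustrated with a short sketch. Everything here is a hypothetical simplification (one-dimensional poses, invented function names) of the general principle, not the system's actual action format: each predicted action is a movement relative to wherever the gripper currently is, so a small calibration error in the absolute pose estimate does not distort the shape of the demonstrated motion.

```python
# Illustrative sketch of relative (delta) actions versus rigid absolute
# waypoints. Names and the 1-D "pose" are hypothetical simplifications.

def follow_absolute(waypoints):
    """Rigid, predefined targets: a kinematic calibration error shifts
    every executed waypoint away from the demonstrated path."""
    return list(waypoints)

def follow_relative(current_pose, deltas):
    """Each predicted action is a movement relative to the current pose,
    so the motion keeps its demonstrated shape even when the absolute
    pose estimate is slightly off."""
    poses = []
    for delta in deltas:
        current_pose += delta   # next target = current pose + predicted delta
        poses.append(current_pose)
    return poses

# The same demonstrated motion, executed from a starting pose that is
# offset by a small calibration error, still traces the intended shape.
print(follow_relative(0.5, [1.0, 1.0, -0.5]))  # -> [1.5, 2.5, 2.0]
```

Under absolute control, the same 0.5 offset would put every waypoint in the wrong place relative to the tissue; under relative control it only shifts the starting point.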
[2]
Johns Hopkins and Stanford robots learn surgery by watching videos - SiliconANGLE
Johns Hopkins and Stanford robots learn surgery by watching videos Researchers from Johns Hopkins University and Stanford University have revealed details of how they are training robots with videos to perform surgical tasks with the skill of human doctors, in what could be a significant step forward in medical robotics. Robotics in surgery is not new, with various use cases over a number of years. But where the new technology from Johns Hopkins and Stanford gets interesting is how it leverages imitation learning to train robots through observation rather than explicit programming. The researchers equipped their existing da Vinci Surgical System with a machine-learning model capable of analyzing surgical procedures recorded by cameras mounted on the robot's instruments. The videos, captured during real surgeries, provide a detailed visual and kinematic representation of the tasks performed by human surgeons. To train the robots, the team used a deep learning architecture similar to those found in advanced artificial intelligence language models but adapted it to process surgical data. The adapted system analyzes video inputs alongside motion data to learn the precise movements required to complete tasks such as needle manipulation, tissue handling and suturing. The idea here is that by focusing on relative movements -- adjusting based on the robot's current position rather than following rigid, predefined paths -- the model overcomes limitations in the accuracy of the da Vinci system's kinematics. Mimicry is one thing, but the model goes further with the inclusion of a feedback mechanism that allows the robot to evaluate its own performance. 
Using simulated environments, the system can compare its actions against the ideal trajectories demonstrated in the training videos, allowing the robot to refine its techniques and achieve levels of precision and dexterity comparable to highly experienced surgeons, all without the need for constant human oversight during training. To ensure that the robots could generalize their skills, the model was also exposed to a diverse range of surgical styles, environments and tasks. According to the researchers, the approach enhances the system's adaptability by allowing it to handle the nuances and unpredictability of real-world surgical procedures, which can be highly variable depending on the patient and surgeon. "In our work, we're not trying to replace the surgeon. We just want to make things easier for the surgeon," Axel Krieger, an associate professor at Johns Hopkins Whiting School of Engineering who supervised the research, told the Washington Post. "Imagine, do you want a tired surgeon, where you're the last patient of the day and the surgeon is super-exhausted? Or do you want a robot that is doing a part of that surgery and really helping out the surgeon?"
[3]
Johns Hopkins and Stanford robots learn surgery by watching videos in breakthrough research - SiliconANGLE
Johns Hopkins and Stanford robots learn surgery by watching videos in breakthrough research Researchers from Johns Hopkins University and Stanford University have revealed details of how they are training robots with videos to perform surgical tasks with the skill of human doctors, in what could be a significant step forward in medical robotics. Robotics in surgery is not new, with various use cases over a number of years, but where the new technology from Johns Hopkins and Stanford gets interesting is how it leverages imitation learning to train robots through observation rather than explicit programming. The researchers equipped their existing da Vinci Surgical System with a machine-learning model capable of analyzing surgical procedures recorded by cameras mounted on the robot's instruments. The videos, captured during real surgeries, provide a detailed visual and kinematic representation of the tasks performed by human surgeons. To train the robots, the team used a deep learning architecture similar to those found in advanced artificial intelligence language models but adapted it to process surgical data. The adapted system analyzes video inputs alongside motion data to learn the precise movements required to complete tasks such as needle manipulation, tissue handling and suturing. The idea here is that by focusing on relative movements - adjusting based on the robot's current position rather than following rigid, predefined paths - the model overcomes limitations in the accuracy of the da Vinci system's kinematics. Mimicry is one thing, but the model goes further with the inclusion of a feedback mechanism that allows the robot to evaluate its own performance.
Using simulated environments, the system can compare its actions against the ideal trajectories demonstrated in the training videos, allowing the robot to refine its techniques and achieve levels of precision and dexterity comparable to highly experienced surgeons, all without the need for constant human oversight during training. To ensure that the robots could generalize their skills, the model was also exposed to a diverse range of surgical styles, environments and tasks. According to the researchers, the approach enhances the system's adaptability by allowing it to handle the nuances and unpredictability of real-world surgical procedures, which can be highly variable depending on the patient and surgeon. "In our work, we're not trying to replace the surgeon. We just want to make things easier for the surgeon," Axel Krieger, an associate professor at Johns Hopkins Whiting School of Engineering who supervised the research, told the Washington Post. "Imagine, do you want a tired surgeon, where you're the last patient of the day and the surgeon is super-exhausted? Or do you want a robot that is doing a part of that surgery and really helping out the surgeon?"
[4]
Researchers Use Videos to Teach Robot Surgeons Human-Like Moves
But that all might change now, the Post says, since researchers have successfully trained state-of-the-art robot surgeons with next-generation technology, using videos of procedures, so that machines now have the ability to "perform surgical tasks with the skill of human doctors." Researchers from Johns Hopkins University and Stanford University were able to teach the machines to manipulate needles and tie knots so that they could suture wounds autonomously. And, the Post notes, the robots went beyond simply imitating their training material, and were able to correct mistakes without being commanded to fix their work. The training, and the ability of these systems to go slightly beyond the data they've already incorporated, echoes how current-generation AI chatbots are trained through exposure to vast amounts of real-world data, often in text form. Similarly, in the manner that some AI supporters spin the technology as a way to improve workers' efficiency instead of outright replacing people in other businesses, the new robot surgeons aren't being touted as out-and-out replacements for fallible human doctors.
[5]
Researchers successfully train robots to perform surgery by watching videos
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust. What just happened? In a significant leap forward in medical technology, researchers have developed autonomous surgical robots. However, there are many issues that need to be addressed before these machines are actually used on humans. Researchers from Johns Hopkins University and Stanford University have successfully trained robots to perform surgical tasks with the precision of human doctors by watching videos. This advancement, presented at the recent Conference on Robot Learning in Munich, marks a significant step towards more autonomous surgical robots and could be a partial solution to the looming shortage of surgeons in the US. Robotic assistance in surgery is not new. Since 1985, when the PUMA 560 first assisted in a brain biopsy, robots have been helping doctors perform various procedures, including gallbladder removals, hysterectomies, and prostate surgeries. These robots, guided by doctors using joystick-like controllers, have been instrumental in minimizing human hand tremors during delicate procedures. However, the recent breakthrough takes this technology to a new level. The research team has developed robots capable of performing complex surgical tasks autonomously, including manipulating needles, tying knots, and suturing wounds. What sets these robots apart is their ability to learn from videos and correct their mistakes without human intervention. The team's approach to training these robots is similar to that used in developing language models like ChatGPT. However, instead of working with words, the system employs a language that describes the position and direction of the robot's gripper. "We built our training model using videotapes of robots performing surgical tasks on practice suture pads," Dr. Axel Krieger, an associate professor at Johns Hopkins Whiting School of Engineering who supervised the research, explained to The Washington Post. 
"Each image in the video sequence is converted into numerical data, which the model then translates into robot actions." This method significantly reduces the need for programming each individual movement required for a medical procedure. The trained robots demonstrated their skills in a different environment, successfully performing tasks on pork and chicken samples. "We've developed a system where you can talk to the robot like you would to a surgical resident," Ji Woong "Brian" Kim, a postdoctoral researcher on the team, said. "You can say things like, 'Do this task,' or 'Move left' and 'Move right.'" The development of more autonomous surgical robots could help address the projected shortage of 10,000 to 20,000 surgeons in the United States by 2036, according to the Association of American Medical Colleges. "We're not trying to replace the surgeon. We just want to make things easier for the surgeon," Dr. Krieger said. While the progress is impressive, experts say numerous challenges remain before fully autonomous surgical robots become a reality. "The stakes are so high because this is a life and death issue," Dr. Dipen J. Parekh, director of robotic surgery at the University of Miami Miller School of Medicine, said. "The anatomy of every patient differs, as does the way a disease behaves in patients." Furthermore, as the technology advances, it raises important questions about responsibility, privacy, and access. Dr. Amer Zureikat, director of robotic surgery at the University of Pittsburgh Medical Center, noted several concerns about accountability in the event of surgical errors. Determining liability when multiple parties are involved in the development and use of autonomous surgical robots would be complex, to say the least, with potential culpability extending to various stakeholders, including the supervising physician, the AI developers, the hospital administration, or even the robot manufacturers themselves.
Privacy concerns also loom large, particularly regarding the use of real surgical videos for training these systems. Additionally, there are questions about equal access to the technology and the potential for surgeons to become overly reliant on robotic assistance.
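The "numbers in, numbers out" training described above -- image pixels expressed as numbers, translated into numbers that describe the gripper's position and direction -- can be sketched as a toy imitation-learning loop. The real system uses a transformer-style architecture; this linear policy trained by gradient descent on a mean-squared imitation loss is only a minimal illustration of the idea, and every shape, name, and value here is an assumption.

```python
import numpy as np

# Toy imitation learning: fit a policy that maps per-frame image
# features (numbers) to demonstrated gripper actions (other numbers).
# A linear model stands in for the transformer; all shapes are invented.

rng = np.random.default_rng(0)
frames = rng.normal(size=(256, 32))        # flattened features of 256 video frames
true_W = rng.normal(size=(32, 7))          # hidden "expert" mapping (for the demo)
actions = frames @ true_W                  # demonstrated 7-number gripper actions

W = np.zeros((32, 7))                      # learned policy parameters
for _ in range(500):                       # gradient descent on the imitation (MSE) loss
    residual = frames @ W - actions
    W -= 0.05 * frames.T @ residual / len(frames)

loss = float(np.mean((frames @ W - actions) ** 2))
print(f"imitation loss: {loss:.6f}")       # shrinks toward 0 as the policy imitates
```

The point of the sketch is the data flow, not the model class: video frames become arrays of numbers, and training drives the policy's predicted action numbers toward the demonstrated ones, with no per-movement programming.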
[6]
In a first, surgical robots learned tasks by watching videos
Robots have been trained to perform surgical tasks with the skill of human doctors, even learning to correct their own mistakes during surgeries. They don't get fruitcakes or Christmas cards from grateful patients, but for decades robots have been helping doctors perform gallbladder removals, hysterectomies, hernia repairs, prostate surgeries and more. While patients lie unconscious on the operating table, robotic arms and grippers work on their bodies at certain stages in these procedures -- all guided by doctors using joystick-like controllers, a process that minimizes human hand tremor. Now, a team of Johns Hopkins University and Stanford University researchers has reported a significant advance, training robots with videos to perform surgical tasks with the skill of human doctors. The robots learned to manipulate needles, tie knots and suture wounds on their own. Moreover, the trained robots went beyond mere imitation, correcting their own slip-ups without being told -- for example, picking up a dropped needle. Scientists have already begun the next stage of work: combining all of the different skills in full surgeries performed on animal cadavers. A new generation of more autonomous robots holds the potential to help address a serious shortage of surgeons in the United States, the researchers said. Presented at the recent Conference on Robot Learning in Munich, the research comes almost four decades after the PUMA 560 became the first robot to assist in the operating room, helping with a brain biopsy in 1985. The new work is currently undergoing review for publication in a journal. And the next-generation surgical robots will need to demonstrate safety and effectiveness in clinical trials, and receive approval from the Food and Drug Administration before they can become a fixture in hospitals. 
While some studies have shown that robotic surgery can be more expensive to the overall health-care system without performing significantly better than traditional surgery, a 2023 paper in the AMA Journal of Ethics concluded that surgeons are becoming more experienced with the robots, resulting in improvements. Nonetheless, scientists and doctors are already touting the reliability, skill and increasing autonomy of surgical robots as an important step toward addressing a potential crisis. An aging population that will require more surgery, combined with a stagnant number of medical students, has the United States on pace to experience a shortage of 10,000 to 20,000 surgeons by 2036, according to a report this year by the Association of American Medical Colleges. "In our work, we're not trying to replace the surgeon. We just want to make things easier for the surgeon," said Axel Krieger, an associate professor at Johns Hopkins Whiting School of Engineering who supervised the research. "Imagine, do you want a tired surgeon, where you're the last patient of the day, and the surgeon is super-exhausted? Or do you want a robot that is doing a part of that surgery and really helping out the surgeon?" In 2020, the U.S. reported about 876,000 robot-assisted surgeries. The robots used by Krieger and his colleagues were made from research kits supplied by the medical technology firm Intuitive. Ji Woong "Brian" Kim, a postdoctoral researcher working with Krieger, said the team has already developed a system "where you can talk to the robot like you would to a surgical resident. You can say things like, 'Do this task.' You can also say things like, 'Move left' and 'Move right.'" "In my mind, I thought they were still a couple of years behind what they have demonstrated here," said Dipen J. Parekh, director of robotic surgery at the University of Miami Miller School of Medicine, who was not involved in the research.
But he stressed that many steps remain before robots are able to perform surgical procedures on their own. "The stakes are so high," he said, "because this is a life and death issue." The anatomy of every patient differs, as does the way a disease behaves in patients. "I look at [the images from] CT scans and MRIs and then do surgery," by controlling robotic arms, Parekh said. "If you want the robot to do the surgery itself, it will have to understand all of the imaging, how to read the CT scans and MRIs." In addition, robots will need to learn how to perform keyhole, or laparoscopic, surgery that uses very small incisions. Teaching robots to learn by imitating actions on a video should reduce the need to program them to perform each individual movement required for a medical procedure, according to the researchers. The team's training method resembled the approach used in ChatGPT, except that instead of working with words, it employs a language that describes the position of the robot gripper and the direction it is pointing. Researchers built their training model using videotapes of robots performing surgical tasks on practice suture pads. Each image in the video sequence is an arrangement of pixels that can be expressed in numbers. In simple terms, the model takes numbers that represent images and converts them into another set of numbers that represent different robot actions. After training the robots, researchers produced a separate set of videos demonstrating that the robots could perform the surgical tasks in a different environment -- in pork and chicken. "I thought it was very exciting. It's the dawn of a new era," said Amer Zureikat, who was not involved in the study but serves as director of robotic surgery at University of Pittsburgh Medical Center. Zureikat too cautioned that the work, though "a significant first step," must still overcome numerous hurdles. 
"The majority are logistical issues that should be rectified over time as artificial intelligence improves." Scientists and doctors will have to figure out how to handle common challenges of surgery, such as bleeding and improperly placed sutures. "If a blunder occurs, who holds responsibility?" Zureikat asked. "Is it the doctor? Is it the AI developer? Is it the hospital facility? Is it the robot manufacturer?" Privacy is also likely to emerge as a major issue. The robots discussed at the Munich conference were not trained using videotape of actual surgeries. However, they will need to train on videotapes of real surgeries if robots are to advance to the point where they can operate safely on their own. That will mean gaining permission from patients to have their surgical videos used to develop robot systems. Zureikat said advances in the use of robot surgical equipment are likely to raise additional questions: "Are patients going to get equal access to the technology?" and "Will surgeons rely so much on robots that they become less adept at performing surgery without them?"
[7]
Robots Are Learning to Conduct Surgery on Their Own by Watching Videos
The artificial intelligence boom is already starting to creep into the medical field through the form of AI-based visit summaries and analysis of patient conditions. Now, new research demonstrates how AI training techniques similar to those used for ChatGPT could be used to train surgical robots to operate on their own. Researchers from Johns Hopkins University and Stanford University built a training model using video recordings of human-controlled robotic arms performing surgical tasks. By learning to imitate actions on a video, the researchers believe they can reduce the need to program each individual movement required for a procedure. From the Washington Post: The robots learned to manipulate needles, tie knots and suture wounds on their own. Moreover, the trained robots went beyond mere imitation, correcting their own slip-ups without being told -- for example, picking up a dropped needle. Scientists have already begun the next stage of work: combining all of the different skills in full surgeries performed on animal cadavers. To be sure, robots have been used in the operating room for years now -- back in 2018, the "surgery on a grape" meme highlighted how robotic arms can assist with surgeries by providing a heightened level of precision. Approximately 876,000 robot-assisted surgeries were conducted in 2020. Robotic instruments can reach places and perform tasks in the body where a surgeon's hand will never fit, and they do not suffer from tremors. Slim, precise instruments can spare nerve damage. But robots are typically guided manually by a surgeon with a controller. The surgeon is always in charge. The concern by skeptics of more autonomous robots is that AI models like ChatGPT are not "intelligent," but rather simply mimic what they have already seen before, and do not understand the underlying concepts they are dealing with.
The infinite variety of pathologies in an incalculable variety of human hosts poses a challenge, then -- what if the AI model has not seen a specific scenario before? Something can go wrong during surgery in a split second, and what if the AI has not been trained to respond? At the very least, autonomous robots used in surgeries would need to be approved by the Food and Drug Administration. In other cases where doctors are using AI to summarize their patient visits and make recommendations, FDA approval is not required because the doctor is technically supposed to review and endorse any information they produce. That is concerning because there is already evidence that AI bots will make bad recommendations, or hallucinate and include information in meeting transcripts that was never uttered. How often will a tired, overworked doctor rubber-stamp whatever an AI produces without scrutinizing it closely? It feels reminiscent of recent reports regarding how soldiers in Israel are relying on AI to identify attack targets without scrutinizing the information very closely. "Soldiers who were poorly trained in using the technology attacked human targets without corroborating [the AI] predictions at all," a Washington Post story reads. "At certain times the only corroboration required was that the target was a male." Things can go awry when humans become complacent and are not sufficiently in the loop. Healthcare is another field with high stakes -- certainly higher than the consumer market. If Gmail summarizes an email incorrectly, it is not the end of the world. AI systems incorrectly diagnosing a health problem, or making a mistake during surgery, is a much more serious problem. Who in that case is liable? The Post interviewed the director of robotic surgery at the University of Miami, and this is what he had to say:
Researchers from Johns Hopkins and Stanford have developed AI-powered surgical robots that learn from videos, performing complex tasks with human-like precision. This breakthrough could address surgeon shortages and enhance surgical efficiency.
Researchers from Johns Hopkins University and Stanford University have made a significant breakthrough in medical robotics by developing AI-powered surgical robots capable of learning complex procedures by watching videos. This innovative approach could revolutionize surgical practices and address the looming shortage of surgeons in the United States [1][2].
The research team equipped the existing da Vinci Surgical System with a machine-learning model that analyzes surgical procedures recorded by cameras mounted on the robot's instruments. This model, inspired by advanced AI language models, processes both visual and kinematic data to learn precise movements for tasks such as needle manipulation, tissue handling, and suturing [1][3].
What sets this technology apart is its ability to focus on relative movements, adjusting based on the robot's current position rather than following rigid, predefined paths. This approach overcomes limitations in the accuracy of the da Vinci system's kinematics [1].
The AI model incorporates a feedback mechanism that allows the robot to evaluate and improve its performance. In simulated environments, the system compares its actions against ideal trajectories from training videos, refining its techniques to achieve precision comparable to experienced surgeons [1][4].
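A minimal sketch of that self-evaluation step, under stated assumptions: the articles do not specify the comparison metric, so this example uses a simple mean waypoint deviation between a simulated rollout and the demonstrated trajectory, with an invented tolerance threshold, to show how a robot could flag its own rollouts for refinement without human oversight.

```python
# Sketch of the self-evaluation idea: score a simulated rollout against
# the demonstrated ("ideal") trajectory and flag it for further
# refinement when the deviation is too large. The metric, threshold,
# and function names are illustrative assumptions, not the published
# system's actual mechanism.

def trajectory_error(executed, ideal):
    """Mean Euclidean deviation between corresponding waypoints."""
    assert len(executed) == len(ideal)
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p, q in zip(executed, ideal)
    ) / len(ideal)

def needs_refinement(executed, ideal, tolerance=0.05):
    """True when the rollout strays too far from the demonstration."""
    return trajectory_error(executed, ideal) > tolerance

ideal    = [(0.0, 0.0), (0.1, 0.0),  (0.2, 0.1)]
executed = [(0.0, 0.0), (0.1, 0.02), (0.2, 0.1)]
print(needs_refinement(executed, ideal))  # small deviation -> False
```

In a training loop, rollouts that exceed the tolerance would be repeated or re-weighted, letting the policy refine itself against the demonstrations without a human grading each attempt.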
To ensure versatility, the model was exposed to diverse surgical styles, environments, and tasks. This approach enhances the system's adaptability, allowing it to handle the nuances and unpredictability of real-world surgical procedures [1][2].
The development of autonomous surgical robots could help address the projected shortage of 10,000 to 20,000 surgeons in the United States by 2036, according to the Association of American Medical Colleges [5]. Dr. Axel Krieger, who supervised the research, emphasizes that the goal is not to replace surgeons but to assist them, potentially reducing fatigue-related errors [1][5].
Despite the promising advancements, experts highlight several challenges that need to be addressed before fully autonomous surgical robots become a reality:
Patient variability: The unique anatomy and disease behavior of each patient pose significant challenges [5].
Liability and accountability: Determining responsibility in case of surgical errors involving autonomous robots is complex [5].
Privacy concerns: The use of real surgical videos for training raises questions about patient privacy [5].
Equal access: Ensuring fair distribution of this technology across healthcare systems is crucial [5].
Over-reliance: There are concerns about surgeons becoming too dependent on robotic assistance [5].
As this technology continues to evolve, it promises to enhance surgical precision and efficiency while raising important questions about the future of healthcare and the role of AI in medicine.
© 2025 TheOutpost.AI All rights reserved