Curated by THEOUTPOST
On Tue, 12 Nov, 4:01 PM UTC
7 Sources
[1]
This AI taught itself to do surgery by watching videos -- and it's ready to operate on humans
"Imagine that you need to get surgery within a few minutes or you may not survive," John Hopkins University postdoc student Brian Kim tells me over email. "There happen to be no surgeons around but there is an autonomous surgical robot available that can perform this procedure with a very high probability of success -- would you take the chance?" It sounds like a B-movie scenario, but it's now a tangible reality that you may encounter sooner than you think. For the first time in history, Kim and his colleagues managed to teach an artificial intelligence to use a robotic surgery machine to perform precise surgical tasks by making it watch thousands of hours of actual procedures happening in real surgical theaters. The research team says it's a breakthrough development that crosses a definitive medical frontier and opens the path to a new era in healthcare. According to their recently published paper, the researchers say the AI managed to achieve a performance level comparable to human surgeons without prior explicit programming. Rather than trying to painstakingly program a robot to operate -- which the research paper says has always failed in the past -- they trained this AI through something called imitation learning, a branch of artificial intelligence where the machine observes and replicates human actions. This allowed the AI to learn the complex sequences of actions required to complete surgical tasks by breaking them down into kinematic components. These components translate into simpler actions -- like joint angles, positions, and paths -- which are easier to understand, replicate, and adapt during surgery. Kim and his colleagues used a da Vinci Surgical System as the hands and eyes for this AI. But before using the established robotic platform (currently used by surgeons to conduct precise operations locally and remotely) to prove the new AI works, they also ran virtual simulations. This allowed for faster iteration and safety validation before the learned procedures were applied on actual hardware.
[2]
Robot trained on surgery videos performs as well as human docs
A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors. The successful use of imitation learning to train surgical robots eliminates the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help. The findings, led by Johns Hopkins University researchers, are being spotlighted this week at the Conference on Robot Learning in Munich.

"It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery," says senior author Axel Krieger, an assistant professor in Johns Hopkins University's mechanical engineering department. "We believe this marks a significant step forward toward a new frontier in medical robotics."

The researchers used imitation learning to train the da Vinci Surgical System robot to perform three fundamental tasks required in surgical procedures: manipulating a needle, lifting body tissue, and suturing. In each case, the robot trained on the team's model performed as skillfully as human doctors.

The model combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, where ChatGPT works with words and text, this model speaks "robot" with kinematics, a language that breaks down the angles of robotic motion into math.

The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures. These videos, recorded by surgeons all over the world, are used for post-operative analysis and then archived. Nearly 7,000 da Vinci robots are used worldwide, and more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to "imitate."

While the da Vinci system is widely used, researchers say it's notoriously imprecise. But the team found a way to make the flawed input work: the key was training the model to perform relative movements rather than absolute actions, which are inaccurate.

"All we need is image input and then this AI system finds the right action," says lead author Ji Woong "Brian" Kim, a postdoctoral researcher at Johns Hopkins. "We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn't encountered."

"The model is so good at learning things we haven't taught it," adds Krieger. "Like if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do."

The model could be used to quickly train a robot to perform any type of surgical procedure, the researchers say. The team is now using imitation learning to train a robot to perform not just small surgical tasks but a full surgery. Before this advancement, programming a robot to perform even a simple aspect of a surgery required hand-coding every step. Someone might spend a decade trying to model suturing, Krieger says. And that's suturing for just one type of surgery.

"It's very limiting," Krieger says. "What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery."

Additional authors are from Johns Hopkins and Stanford University.
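The relative-versus-absolute detail above is worth unpacking with a small sketch. This is a generic illustration of the technique, not the paper's exact formulation: the point is that a constant calibration offset in the robot's reported pose drops out when the policy predicts deltas rather than absolute targets.

```python
# Generic illustration of relative vs. absolute actions, not the paper's
# exact formulation. A constant calibration offset in the robot's reported
# pose cancels out when the policy predicts deltas instead of targets.
import numpy as np

def to_relative_actions(poses: np.ndarray) -> np.ndarray:
    """Convert a demonstrated trajectory of absolute [x, y, z] positions
    into the per-step deltas the model learns to predict."""
    return np.diff(poses, axis=0)

def execute(current_pose: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Apply a predicted relative motion. Any constant offset baked into
    current_pose shifts the start point but not the motion itself, which
    is why relative actions tolerate an imprecise platform."""
    return current_pose + delta

demo = np.array([[0.00, 0.00, 0.10],
                 [0.01, 0.00, 0.09],
                 [0.02, 0.01, 0.08]])
print(to_relative_actions(demo))  # [[ 0.01 0. -0.01] [ 0.01 0.01 -0.01]]
```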
[3]
Robotic AI performs successful surgery after watching videos for training
Watching old episodes of ER won't make you a doctor, but watching videos may be all the training a robotic surgeon's AI brain needs to sew you up after a procedure. Researchers at Johns Hopkins University and Stanford University have published a new paper showing off a surgical robot as capable as a human at carrying out some procedures after simply watching humans do them. The research team tested the idea on the popular da Vinci Surgical System, which is often used for minimally invasive surgery.

Programming robots usually requires manually inputting every movement you want them to make. The researchers bypassed this using imitation learning, a technique that instills human-level surgical skills in robots by letting them observe how humans perform. The researchers assembled hundreds of videos recorded from wrist-mounted cameras demonstrating how human doctors do three particular tasks: needle manipulation, tissue lifting, and suturing.

The researchers essentially used the kind of training ChatGPT and other AI models use, but instead of text, the model absorbed information about the way human hands, and the tools they hold, move. This kinematic data essentially turns movement into math the model can apply to carry out the procedures on request. After watching the videos, the AI model could use the da Vinci platform to mimic the same techniques. It's not too dissimilar from how Google is experimenting with teaching AI-powered robots to navigate spaces and complete tasks by showing them videos.

"It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery. We believe this marks a significant step forward toward a new frontier in medical robotics," senior author and JHU assistant professor Axel Krieger said in a release. "The model is so good at learning things we haven't taught it. Like if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do."

The idea of an AI-controlled robot holding blades and needles around your body might sound scary, but the precision of machines can make them better in some cases than human doctors. Robotic surgery is already increasingly common. A robot performing complex procedures independently might actually be safer, with fewer medical errors. Human doctors could have more time and energy to focus on unexpected complications and the more difficult parts of a surgery that machines aren't yet up to handling.

The researchers plan to test using the same techniques to teach an AI how to do a complete surgery. They're not alone in pursuing the idea of AI-assisted robotic healthcare. Earlier this year, AI dental technology developer Perceptive showcased the success of an AI-guided robot performing a dental procedure on a human without supervision.
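As a concrete example of "turning movement into math," here is textbook forward kinematics for a planar two-link arm. This is a generic teaching example, far simpler than the da Vinci's instrument arms, but it shows the angles-to-positions translation that kinematic representations are built on.

```python
# Textbook forward kinematics for a planar two-link arm: joint angles in,
# tool-tip position out. A generic teaching example, far simpler than the
# da Vinci's instrument arms, but the same angles-to-positions math.
import math

def forward_kinematics(theta1: float, theta2: float,
                       l1: float = 0.3, l2: float = 0.2) -> tuple:
    """Map joint angles (radians) to the (x, y) tool-tip position for
    links of length l1 and l2 metres."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(forward_kinematics(math.pi / 4, math.pi / 6))  # ~(0.264, 0.405)
```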
[4]
Robot that watched surgery videos performs with skill of human doctor, researchers report
A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors. The successful use of imitation learning to train surgical robots eliminates the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help.

"It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery," said senior author Axel Krieger. "We believe this marks a significant step forward toward a new frontier in medical robotics."

The findings, led by Johns Hopkins University researchers, are being spotlighted this week at the Conference on Robot Learning in Munich, a top event for robotics and machine learning. The team, which included Stanford University researchers, used imitation learning to train the da Vinci Surgical System robot to perform three fundamental surgical tasks: manipulating a needle, lifting body tissue, and suturing. In each case, the robot trained on the team's model performed as skillfully as human doctors.

The model combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, where ChatGPT works with words and text, this model speaks "robot" with kinematics, a language that breaks down the angles of robotic motion into math.

The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures. These videos, recorded by surgeons all over the world, are used for post-operative analysis and then archived. Nearly 7,000 da Vinci robots are used worldwide, and more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to "imitate."

While the da Vinci system is widely used, researchers say it's notoriously imprecise. But the team found a way to make the flawed input work: the key was training the model to perform relative movements rather than absolute actions, which are inaccurate.

"All we need is image input and then this AI system finds the right action," said lead author Ji Woong "Brian" Kim. "We find that even with a few hundred demos the model is able to learn the procedure and generalize to new environments it hasn't encountered."

"The model is so good at learning things we haven't taught it," Krieger said. "Like if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do."

The model could be used to quickly train a robot to perform any type of surgical procedure, the researchers said. The team is now using imitation learning to train a robot to perform not just small surgical tasks but a full surgery. Before this advancement, programming a robot to perform even a simple aspect of a surgery required hand-coding every step. Someone might spend a decade trying to model suturing, Krieger said. And that's suturing for just one type of surgery.

"It's very limiting," Krieger said. "What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery."
Authors from Johns Hopkins include PhD student Samuel Schmidgall, Associate Research Engineer Anton Deguet, and Associate Professor of Mechanical Engineering Marin Kobilarov. The Stanford University author is PhD student Tony Z. Zhao.
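To illustrate the "ChatGPT architecture, but speaking kinematics" description above, here is a hedged sketch of a small transformer that consumes image patches and decodes a short chunk of future kinematic actions instead of words. The dimensions, patch scheme, and chunked output are assumptions for illustration, not the published model.

```python
# A hedged sketch of a small transformer that consumes image patches and
# decodes a chunk of future kinematic actions instead of words. Dimensions,
# patch scheme, and the chunked output are illustrative assumptions.
import torch
import torch.nn as nn

class KinematicsTransformer(nn.Module):
    def __init__(self, d_model: int = 128, chunk: int = 10, action_dim: int = 7):
        super().__init__()
        self.patch_embed = nn.Linear(16 * 16 * 3, d_model)  # flattened 16x16 RGB patches
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Decode a short chunk of future actions at once, not one token.
        self.action_head = nn.Linear(d_model, chunk * action_dim)
        self.chunk, self.action_dim = chunk, action_dim

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        z = self.encoder(self.patch_embed(patches))  # (B, N, d_model)
        out = self.action_head(z.mean(dim=1))        # pool tokens, then decode
        return out.view(-1, self.chunk, self.action_dim)

model = KinematicsTransformer()
patches = torch.randn(2, 196, 16 * 16 * 3)  # a 14x14 grid of patches per frame
print(model(patches).shape)                  # torch.Size([2, 10, 7])
```

Predicting a short chunk of actions at a time, rather than a single step, is a common choice in recent imitation-learning policies because it produces smoother executed motion; whether this particular scheme matches the published model is an assumption here.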
[5]
Robot learns to perform surgical tasks expertly just by watching videos
'Wrist' cameras attached to the arms of the robotic surgical system capture the footage used to train the AI model.

It takes years of intense study and a steady hand for humans to perform surgery, but robots might have an easier time picking it up with today's AI technology. Researchers at Johns Hopkins University (JHU) and Stanford University have taught a robotic surgical system to perform a set of surgical tasks as capably as human doctors, simply by training it on videos of those procedures.

The team leveraged a da Vinci Surgical System for this study. It's a robotic system typically remote-controlled by a surgeon, with arms that manipulate instruments for tasks like dissection, suction, and cutting and sealing vessels. Systems like these give surgeons much greater control and precision, and a closer look at patients on the operating table. The latest version is estimated to cost over US$2 million, and that doesn't include accessories, sterilizing equipment, or training.

Using a machine learning method known as imitation learning, the team trained a da Vinci Surgical System to perform three tasks involved in surgical procedures on its own: manipulating a needle, lifting body tissue, and suturing. The surgical system not only executed these as well as a human could, it also learned to correct its own mistakes. "Like if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do," said Axel Krieger, an assistant professor at JHU who co-authored a paper on the team's findings, presented at this week's Conference on Robot Learning.

The researchers trained an AI model by combining imitation learning with the machine learning architecture that popular chatbots like ChatGPT are built on. However, while those chatbots are designed to work with text, this model outputs kinematics, a language that describes motion with mathematical elements like numbers and equations, to direct the surgical system's arms. The model was trained on hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures.

The team believes its model could train a robot to perform any type of surgical procedure quickly, and far more easily than the traditional method of hand-coding every step required to direct a surgical robot's actions. According to Krieger, this could help make automated surgery a reality sooner than we could previously have conceived. "What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days," he said. "It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery."

That could be one of the biggest breakthroughs in robot-assisted surgery in recent years. There are some automated devices around for use in complex operations, like Corindus's CorPath system for cardiovascular procedures, but their capabilities are typically limited to certain steps of the surgeries they assist with. Further, Krieger pointed out that coding each step for a robotic system can be awfully slow. "Someone might spend a decade trying to model suturing," he said. "And that's suturing for just one type of surgery."

Krieger also previously worked on a different approach to automating surgical tasks. In 2022, his team at JHU developed the Smart Tissue Autonomous Robot, or STAR. Guided by a structured light-based three-dimensional endoscope and a machine learning-based tracking algorithm, the robot intricately sutured together two ends of a pig's intestine without human intervention.

The JHU researchers are now working on training a robot with their imitation learning method to carry out a full surgery. It'll likely be years before we see robots fully take over for surgeons, but innovations like this one could make complex treatments safer and more accessible for patients around the globe.
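Below is a sketch of how archived demonstration videos might be paired with recorded kinematics to build a training set. The directory layout, file names, and JSON fields are hypothetical, invented for illustration; none of these articles describe the real archive format.

```python
# Hypothetical sketch of pairing archived demo videos with recorded
# kinematics to build a training set. The directory layout, file names,
# and JSON fields are invented for illustration; the real archive format
# is not described in these articles.
import json
from pathlib import Path

def load_demonstrations(root: str):
    """Yield (frame_path, joint_angles) pairs from a directory of demos,
    each holding wrist-camera frames plus a kinematics log."""
    for demo_dir in sorted(Path(root).glob("demo_*")):
        log = json.loads((demo_dir / "kinematics.json").read_text())
        for step in log["steps"]:
            # Each step pairs one camera frame with the commanded joint state.
            yield demo_dir / step["frame"], step["joint_angles"]

# Example usage (assuming the hypothetical layout exists):
# for frame_path, action in load_demonstrations("surgery_demos"):
#     ...  # feed into the imitation-learning objective
```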
[6]
Robots Trained by Video: A Leap Toward Autonomous Surgery - Neuroscience News
Summary: For the first time, a robot has been trained to perform surgical procedures by watching videos of expert surgeons, marking a leap forward in robotic surgery. This breakthrough in "imitation learning" means that robots can learn complex tasks without being programmed for every individual movement. By training on surgical footage, the robot replicated procedures with skill comparable to human surgeons, demonstrating its ability to adapt and even correct its actions autonomously. Researchers believe this approach could enable faster and more accurate surgical training for robots, opening the door to fully autonomous surgeries in the future. The technology uses the same foundational AI principles as language models but adapts them to control robotic motion. The study could transform the field of surgery, reducing medical errors and enhancing precision.

A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors. The successful use of imitation learning to train surgical robots eliminates the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help.

"It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery," said senior author Axel Krieger. "We believe this marks a significant step forward toward a new frontier in medical robotics."

The findings, led by Johns Hopkins University researchers, are being spotlighted this week at the Conference on Robot Learning in Munich, a top event for robotics and machine learning. The team, which included Stanford University researchers, used imitation learning to train the da Vinci Surgical System robot to perform three fundamental surgical tasks: manipulating a needle, lifting body tissue, and suturing. In each case, the robot trained on the team's model performed as skillfully as human doctors.

The model combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, where ChatGPT works with words and text, this model speaks "robot" with kinematics, a language that breaks down the angles of robotic motion into math.

The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures. These videos, recorded by surgeons all over the world, are used for post-operative analysis and then archived. Nearly 7,000 da Vinci robots are used worldwide, and more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to "imitate."

While the da Vinci system is widely used, researchers say it's notoriously imprecise. But the team found a way to make the flawed input work: the key was training the model to perform relative movements rather than absolute actions, which are inaccurate.

"All we need is image input and then this AI system finds the right action," said lead author Ji Woong "Brian" Kim. "We find that even with a few hundred demos the model is able to learn the procedure and generalize to new environments it hasn't encountered."

"The model is so good at learning things we haven't taught it," Krieger said. "Like if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do."

The model could be used to quickly train a robot to perform any type of surgical procedure, the researchers said. The team is now using imitation learning to train a robot to perform not just small surgical tasks but a full surgery. Before this advancement, programming a robot to perform even a simple aspect of a surgery required hand-coding every step. Someone might spend a decade trying to model suturing, Krieger said. And that's suturing for just one type of surgery.

"It's very limiting," Krieger said. "What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery."

Authors from Johns Hopkins include PhD student Samuel Schmidgall, Associate Research Engineer Anton Deguet, and Associate Professor of Mechanical Engineering Marin Kobilarov. The Stanford University author is PhD student Tony Z. Zhao.

Author: Jill Rosen
Source: Johns Hopkins University
Contact: Jill Rosen, Johns Hopkins University
Image: Credited to Neuroscience News
Original Research: The findings will be presented at the Conference on Robot Learning.
[7]
Watch a robot operate on a pork loin
Robots can already mimic surgeons to a certain degree, but training them to do so often involves complex programming and time-consuming trial and error. Now, for the first time, a machine has successfully learned to replicate fundamental operating tasks simply by analyzing video footage of medical experts. But before it gets to work on human patients, the tiny robotic arms practiced on a pork loin.

Doctors have increasingly integrated the da Vinci Surgical System into an array of procedures since the device's debut in 2000. The small pair of robotic arms ending in tweezer-like graspers are already used in prostatectomies, cardiac valve repairs, and renal and gynecologic operations. But the device has its limitations, particularly when it comes to teaching it new tasks. "It [was] very limiting," Johns Hopkins University assistant professor of mechanical engineering Axel Krieger explained in a November 11th profile. Krieger added that programming previously required every step of a surgery to be hand-coded by experts, meaning that a single form of surgical suturing could take as much as a decade to perfect.

As Krieger and colleagues explained at this year's Conference on Robot Learning in Munich, Germany, that painstaking era may be nearing its end. Using machine learning principles similar to those behind large language models such as ChatGPT, Krieger's team recently developed a new training program for the da Vinci Surgical System. Instead of a large language model's word-based datasets, it relies on kinematics, which translates robotic motions and angles into mathematical computations. After amassing hundreds of videos depicting thousands of human surgeons overseeing da Vinci robots, the researchers tasked the system with analyzing the archival trove to imitate the correct movements. The results surprised even the programmers.

"All we need is image input and then this AI system finds the right action," postdoctoral researcher Ji Woong Kim said. "We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn't encountered." Krieger added that their model is also great at learning things no human actually demonstrated in the videos. "Like, if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do."

To test their system, Krieger's team instructed a newly trained da Vinci robot to complete various tasks on a pork loin, chosen for its biological similarity to human tissue. The small grippers demonstrated their ability to pick up dropped needles, tie knots, and complete surgical sutures almost exactly like their human trainers. What's more, the robot did so after initially being trained on silicone skin stand-ins, meaning it transferred its skills to biological tissue without additional work.

Instead of waiting years for robots to learn new surgical strategies, Krieger believes the new learning model will allow da Vinci systems to perfect procedures "in a couple days." Although the autonomous robot system currently operates between 14 and 18 times slower than a human, the researchers believe it won't be long before their machines pick up the pace as well. "It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery," Krieger said.
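The recovery behavior described above (re-grasping a dropped needle) is easiest to picture as a closed perceive-act loop. The sketch below uses hypothetical `camera.read()` and `robot.move_relative()` interfaces as stand-ins for the real da Vinci APIs, which none of these articles document.

```python
# A minimal closed-loop rollout sketch. `camera.read()` and
# `robot.move_relative()` are hypothetical stand-ins for the real da Vinci
# interfaces, which these articles do not document.
def rollout(policy, camera, robot, steps: int = 500) -> None:
    """Run the trained policy in a perceive-act loop. Recovery behaviors
    (such as re-grasping a dropped needle) come from the policy reacting
    to what the camera currently shows, not from explicit rules."""
    for _ in range(steps):
        frame = camera.read()        # current wrist-camera image
        delta = policy(frame)        # predicted relative kinematic action
        robot.move_relative(delta)   # execute one small motion
```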
Researchers at Johns Hopkins University and Stanford University have successfully trained a surgical robot to perform complex tasks with human-level skill using imitation learning, marking a significant advancement in autonomous robotic surgery.
The team trained a surgical robot to perform complex tasks at the skill level of human doctors, using an approach called imitation learning [1].
The team utilized the da Vinci Surgical System, a widely used robotic platform for precise operations. Instead of painstakingly programming each movement, they employed imitation learning, a branch of artificial intelligence in which machines observe and replicate human actions [2].
The AI model was trained on hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during actual surgical procedures. This vast archive of data, collected from the nearly 7,000 da Vinci robots used worldwide, provided a rich source for the AI to "imitate" [3].
The researchers combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, while ChatGPT works with text, this model speaks "robot" with kinematics, translating complex surgical movements into mathematical language that the robot can understand and execute [4].
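One common way to let a language-model architecture emit motions, offered here purely as an illustration of the analogy rather than as this paper's method, is to discretize each kinematic dimension into bins so that actions become token sequences, just as text becomes word tokens.

```python
# Illustration of the language-model analogy only -- not necessarily this
# paper's method: discretize each kinematic dimension into bins so actions
# become integer token sequences, just as text becomes word tokens.
import numpy as np

def action_to_tokens(delta: np.ndarray, n_bins: int = 256,
                     lo: float = -0.01, hi: float = 0.01) -> np.ndarray:
    """Quantize a relative motion (metres) into integer tokens."""
    clipped = np.clip(delta, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * (n_bins - 1)).astype(int)

def tokens_to_action(tokens: np.ndarray, n_bins: int = 256,
                     lo: float = -0.01, hi: float = 0.01) -> np.ndarray:
    """Invert the quantization when executing predicted tokens."""
    return tokens / (n_bins - 1) * (hi - lo) + lo

delta = np.array([0.003, -0.002, 0.0015])
tokens = action_to_tokens(delta)
print(tokens, tokens_to_action(tokens))  # round-trips to ~the same motion
```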
The AI-powered robot demonstrated proficiency in three fundamental surgical tasks:
Manipulating a needle
Lifting body tissue
Suturing
In each case, the robot performed these procedures with skill comparable to human surgeons. Remarkably, the AI also showed the ability to adapt and correct mistakes, such as automatically picking up a dropped needle and continuing the procedure [5].
This breakthrough has significant implications for the future of robotic surgery:
Accelerated Training: The model can quickly train robots to perform various surgical procedures, potentially reducing the time and resources required for robot programming [1].
Improved Accuracy: By leveraging machine precision, this technology could potentially reduce medical errors and achieve more accurate surgeries [2].
Autonomous Surgery: This development brings the field closer to true autonomy, where robots could perform complex surgeries with minimal human intervention [3].
Accessibility: In the long term, this technology could make complex surgical procedures more accessible globally, especially in areas with limited access to skilled surgeons [5].
The research team is now working on training a robot to perform a full surgery using this imitation learning method. While it may be years before we see fully autonomous surgical robots in operating rooms, this innovation represents a significant step towards that goal [4].
As the technology continues to evolve, it raises important questions about the future role of human surgeons, patient trust in AI-powered medical procedures, and the ethical implications of autonomous surgical systems. These aspects will likely be subjects of ongoing debate and research in the medical and AI communities.