Life, death and software

With no margin for mistakes, control systems for surgical robots must be as error-proof as possible

André Platzer (CMU photo)

In the field of medical devices, the da Vinci Surgical System has been a breakthrough. It allows a surgeon at a remote console, viewing a 3D image of a surgical area, to operate on a patient using four small robotic limbs.

It’s similar to surgery performed with the aid of a laparoscope—a thin, lighted tube, connected to a camera, which can be inserted into a patient. But it’s also much, much more.

“The depth perception is so much better than it is in a traditional laparoscopy,” says Dr. Michael J. Bonidie, who uses the da Vinci for hysterectomies at Magee-Women’s Hospital of UPMC in Pittsburgh. Conventional laparoscopes can only move forward and backward through an incision; the da Vinci allows a view from several different angles. “It essentially gives you a neck,” Bonidie says.

Other surgical robots preceded the da Vinci, yet none has been as heralded or widely used. Its maker, Intuitive Surgical, has sold more than 2,500 units since 2000, and the robot participated in 367,000 operations in the United States last year, up 34 percent from 2011. Intuitive claims use of the da Vinci decreases scarring, shortens hospital stays, and lowers the risk of complications versus conventional laparoscopy. The da Vinci’s success has made it a kind of status symbol, with hospitals touting their use of the high-tech tool in their advertising.

Yet complications can arise in any kind of surgery, and surgery performed by the da Vinci is no exception. There have been more than 20 lawsuits filed against Intuitive by patients who claim they were harmed by the robot, and so far this year, the U.S. Food and Drug Administration has logged 455 incident reports involving da Vinci surgical robots. (For its part, Intuitive says the company has won every suit that’s been heard in court.)

Many robots have mission-critical applications, but in the case of surgical robots, software must not just be reliable—it has to be practically flawless.

That’s why the developers of the next generation of game-changing surgical robots have enlisted the expertise of André Platzer. A 34-year-old assistant professor of computer science at CMU, Platzer has developed a reputation as the man to turn to when you need a piece of software that can never, ever, fail when it’s put into everyday use. He and his team have created error-proofing techniques for air-traffic control software, as well as for the embedded systems that could control the much-discussed self-driving cars of the future.

“It’s the question I ask most days: How do we create a system that we can bet lives on?” Platzer says.

Platzer’s work isn’t very dramatic—at least not visually. Peek in on him and his team on any given day and you’d see people feeding information into a set of computers. In fact, they’re coding scenarios to test different systems’ reactions to a variety of potentially life-threatening dilemmas, and trying to root out errors before they can hurt anyone.

For his first foray into surgical robots, Platzer analyzed a Johns Hopkins University-developed system—a cousin of the da Vinci—that would literally guide a surgeon’s scalpel during brain surgery to remove a cancerous tumor. The instrument would be wired to a computer containing a magnetic resonance imaging (MRI) scan that would act as a map of the patient’s brain. The job of the robot would be to guide the scalpel to the cancerous area in the least destructive way, then keep the blade within the confines of the tumor zone, which may be no larger than a dime. The robot would provide feedback to the surgeon’s hand in the form of force, to keep the scalpel from going astray.
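
For readers who want a feel for how such force feedback might work, here is a minimal sketch in Python. It is purely illustrative and is not the Johns Hopkins control software; the spherical tumor zone, the linear force ramp, and every name and constant are assumptions made for the example.

    # Illustrative sketch only: a simple "virtual fixture" that pushes back on the
    # surgeon's hand as the tool tip nears the edge of an assumed spherical tumor zone.
    # Not the Johns Hopkins algorithm; names, ramp, and constants are invented.
    import numpy as np

    MARGIN_MM = 1.0      # start resisting this far inside the boundary (assumed value)
    MAX_FORCE_N = 5.0    # maximum resistive force (assumed value)

    def distance_to_boundary(tip_mm, center_mm, radius_mm):
        """Distance from the tool tip to the edge of the zone; positive means still inside."""
        return radius_mm - np.linalg.norm(tip_mm - center_mm)

    def feedback_force(tip_mm, center_mm, radius_mm):
        """Force vector pushing the tip back toward the zone's center, ramping from
        zero at MARGIN_MM to MAX_FORCE_N at the boundary itself."""
        d = distance_to_boundary(tip_mm, center_mm, radius_mm)
        if d >= MARGIN_MM:
            return np.zeros(3)                              # well inside: no resistance
        inward = center_mm - tip_mm
        inward = inward / (np.linalg.norm(inward) + 1e-9)   # unit vector toward center
        return MAX_FORCE_N * min(1.0, (MARGIN_MM - d) / MARGIN_MM) * inward

    # Example: a tip 0.5 mm from the edge of a 10 mm zone feels about half of MAX_FORCE_N.
    print(feedback_force(np.array([9.5, 0.0, 0.0]), np.zeros(3), 10.0))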

Checking a computer program or electrical circuit for errors using formal logic relies on mathematical proofs. Classical error-checking techniques handle simpler computer programs by smartly enumerating all possible cases, which works because those programs have clearly defined boundaries—a finite number of states. Air-traffic control systems, self-driving cars, and surgical robots are all hybrids: systems that combine computational components with physical parts that move through space and time, giving them a practically infinite number of states. A traditional enumeration of all states would be impossible to complete.
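
To see why the finite-state approach breaks down here, it helps to sketch what classical exhaustive checking actually does: walk through every reachable state and confirm that none of them is unsafe. The Python sketch below is only an illustration of that general technique, not Platzer’s tool, and the toy system it checks is invented for the example.

    # Minimal sketch of classical exhaustive verification: breadth-first search over
    # every reachable state of a finite system, flagging any unsafe state. This only
    # terminates because the state space is finite; once a state includes continuous
    # quantities such as a scalpel's position and velocity, there are uncountably
    # many states and the enumeration can never finish.
    from collections import deque

    def verify(initial_states, successors, is_unsafe):
        """Return an unsafe reachable state, or None if every reachable state is safe."""
        seen = set(initial_states)
        queue = deque(initial_states)
        while queue:
            state = queue.popleft()
            if is_unsafe(state):
                return state                  # counterexample found
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None                           # exhaustively checked: safe

    # Toy example (hypothetical): a counter that cycles 0-3 and must never reach 4.
    print(verify({0}, lambda s: {(s + 1) % 4}, lambda s: s == 4))  # prints None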

To cope with these hybrid systems, Platzer and former doctoral student David Renshaw (CS’12) developed an automated theorem prover called KeYmaeraD, a formal verification tool that relies on quantified differential dynamic logic. To account for what happens in the physical world, it calculates a range of safe parameters for a given system’s operations, rather than focusing on just one parameter at a time.
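
For a flavor of what such a statement looks like, here is a textbook-style safety formula in differential dynamic logic, of the same general shape as the properties KeYmaeraD proves. It is a standard braking example rather than the surgical-robot model itself, and it reads: if a vehicle at position x with speed v and braking power b still has enough stopping distance before the limit m, then following the braking dynamics it never passes m.

    % A textbook-style differential dynamic logic (dL) safety statement, not the
    % actual JHU robot model: if the stopping distance still fits before the
    % limit m, the braking dynamics keep the position below m.
    \[
      v \ge 0 \;\wedge\; b > 0 \;\wedge\; x + \frac{v^2}{2b} \le m
      \;\rightarrow\;
      \big[\, \{ x' = v,\ v' = -b \ \&\ v \ge 0 \} \,\big]\; x \le m
    \]

A proof of such a formula covers every speed and distance satisfying the assumptions at once, symbolically, rather than checking states one at a time.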

Using KeYmaeraD, Platzer and his colleagues evaluated the control algorithm of the JHU robot, testing for systemic errors in the feedback mechanism. They found a few ways in which the robot could go wrong. In some cases, the system might not warn a surgeon he is veering off course until he has made a motion his wrist can’t take back. In other situations, it might warn a surgeon in good time that she’s exiting the cancerous zone, but not prevent her from jerking her hand backwards into the no-go zone. Such precision matters when the difference between life-saving surgery and debilitating brain damage might be a few millimeters.
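
The first of those failure modes, a warning that arrives only after the motion can no longer be taken back, comes down to checking position alone instead of position plus the distance the hand will still travel before it can stop. The contrast can be sketched roughly as follows; the names, reaction time, and deceleration figures are invented for illustration and are not the verified JHU algorithm.

    # Illustrative contrast between two warning rules, not the verified JHU code.
    # The naive rule looks at position only and can fire too late when the hand is
    # moving fast; the second also budgets for reaction time and stopping distance.

    def naive_warning(dist_to_boundary_mm, margin_mm=1.0):
        # Warn once the tip crosses a fixed margin inside the allowed zone.
        return dist_to_boundary_mm <= margin_mm

    def velocity_aware_warning(dist_to_boundary_mm, speed_mm_s,
                               reaction_time_s=0.15,    # assumed
                               max_decel_mm_s2=200.0):  # assumed
        # Warn while there is still room to react and stop inside the zone.
        stopping_mm = (speed_mm_s * reaction_time_s
                       + speed_mm_s ** 2 / (2 * max_decel_mm_s2))
        return dist_to_boundary_mm <= stopping_mm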

Other robots are under development for a variety of medical circumstances, both ordinary and extraordinary. IBM is working on a version of Watson—the supercomputer that competed on “Jeopardy!”—which could diagnose rare diseases by analyzing a vast inventory of symptoms that doctors might otherwise overlook. NASA is offering a grant for a lab to develop a remote-control system in which an Earth-based surgeon could operate on someone in space using robotic arms.

But none of these systems can be wholeheartedly endorsed until they are absolutely error-proofed—and that’s the role played by teams such as Platzer’s.

This may seem like new territory for Platzer, whose work on planes and cars had him measuring miles and feet as opposed to millimeters, but he says all of his work comes down to the same principles of logic, math and physics.

“If you move an object towards a boundary, you have to calculate how fast you are moving and when to angle it and where,” he says, “and it’s the same kind of thinking if it’s a ship moving in a harbor, or a scalpel moving into some skin.”

—Nick Keppler is a Pittsburgh-based freelance writer who has contributed to Pittmed, the magazine of the University of Pittsburgh School of Medicine, as well as Pittsburgh City Paper and Nerve.com. This is his first Link byline.

For More Information: 

Nick Keppler | nickkeppler@yahoo.com