What I've Learned: Russ Taylor

'Father of medical robotics'

Often regarded as the "father of medical robotics," Russell Taylor has helped create breakthroughs in robotic and other computer-assisted interventions for brain, spinal, eye, ear, nose, throat, and other surgeries.

Image credit: Illustration by John S. Dykes

He is the John C. Malone Professor of Computer Science in Johns Hopkins' Whiting School of Engineering, with joint appointments in Mechanical Engineering, Radiology, and Surgery; director of the Laboratory for Computational Sensing and Robotics; and director of the Engineering Research Center for Computer-Integrated Surgical Systems and Technology. He is a fellow of several professional societies and has won numerous awards.

Do you see this watch I'm wearing? When I graduated from JHU, I received it for winning the Hamilton Award, which back then went to the engineering student who'd also done outstanding work in humanities and social sciences. I was active in the debate team, and I think that's why I got it.

My mentor at JHU was Mandell Bellmore, a professor in the Operations Research Department. Dr. Bellmore allowed me to work with him on coding mathematical optimization algorithms that he had developed with his former graduate students, including one whose name you might know—John Malone [who went on to become the head of TCI Communications and, later, Liberty Media]. So I guess it's a nice coincidence that in 2011 I became the John C. Malone Professor of Computer Science.

Back to Dr. Bellmore: I was really thrilled that he seemed to enjoy spending time with an undergraduate like me. One of the things that Hopkins has done better than almost any school for a long time now is to involve undergraduates in research in a serious way. So one thing I learned through Dr. Bellmore is how that's just a terrific way to teach people.

There was no computer science major available then, so between my interests in computing and engineering I ended up doing a sort of roll-your-own major. Dr. Bellmore was very interdisciplinary.

I learned the relationship between analysis and experiments. I learned how certain principles carry over from one field to another and then a third. I also learned the relationship between a basic mathematical algorithm and a real problem in the real world, meaning that I understood how that problem needed to be formulated so that I could put the algorithm to use.

I went into the PhD program at Stanford, and I was there for six years. I could have finished quicker, but I played an awful lot of volleyball.

At Stanford, I joined the Stanford Artificial Intelligence Lab. It was an amazing place, where much of what we take for granted today was first invented. I learned a lot from my thesis adviser, Jerry Feldman, and from the lab director, John McCarthy. John McCarthy was one of the pioneers of computer science. Even back then, he had a vision of computing eventually becoming the ubiquitous thing it is today.

For example, he got the notion of creating a news service for the ARPANET [a progenitor of the Internet]. So he arranged for the Associated Press newswire to be connected to the lab computer and got a grad student to stitch the pieces together into stories that could be read from a database.

I got interested in robotics at Stanford. My PhD thesis was called "The Synthesis of Manipulator Control Programs from Task-Level Specifications." It had to do with whether a computer can help a human write the programs that will enable a robot to do a very precise task such as assembling mechanical parts.

Two key properties of robots are that they are programmable and that they can sense things about their environment. A skilled programmer can use these abilities to program a precise but not very "accurate" robot to assemble parts that themselves have manufacturing tolerances. But this programming is difficult. I was able to develop ways for the computer to analyze how various sensing actions could affect how accurately the robot could position one part relative to another. This enabled me to automatically select appropriate sensing for each step in an assembly task.
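The idea can be sketched in a few lines (a toy model; the numbers and sensor names are hypothetical): treat each candidate sensing action as a noisy measurement, compute how fusing it would shrink the robot's positional uncertainty, and keep only the actions that bring that uncertainty within the assembly clearance.

```python
import math

# Toy sketch (all numbers hypothetical, in millimetres).  Fusing one noisy
# measurement with the robot's prior positional uncertainty shrinks the
# variance, as in a scalar Kalman update.
def fused_std(prior_std, meas_std):
    pv, mv = prior_std ** 2, meas_std ** 2
    return math.sqrt(pv * mv / (pv + mv))

prior = 0.5        # part tolerance plus robot repeatability
clearance = 0.1    # positional error this assembly step can tolerate

# Candidate sensing actions and their assumed measurement noise.
sensors = {"touch_probe": 0.05, "camera": 0.20}

# Keep only the actions that bring uncertainty within the clearance, then
# (as a stand-in for "cheapest adequate sensing") prefer the noisiest one
# that still works.
feasible = {name: fused_std(prior, s) for name, s in sensors.items()
            if fused_std(prior, s) <= clearance}
best = max(feasible, key=lambda name: sensors[name])
```

Here the one-dimensional variance update stands in for the much richer geometric uncertainty analysis the thesis actually describes.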

I finished my thesis on the night of the U.S. Bicentennial. After pasting the last figure into the document, I went up onto the roof of the lab, and I was pleased to see that the entire San Francisco Bay area was celebrating the fact that I was done at last. Then I packed up my car for a cross-country drive. IBM had offered me a job in its robotics research group in Westchester County, N.Y.

One thing I learned at IBM is how valuable it is for researchers to work directly with customers. We spent a great deal of time working side by side with manufacturing engineers to help solve production problems, while also pursuing research into the technology that would make this possible. Our approach involved a process of rapid iteration. By that, I mean we would try to do something, and then we'd get them to tell us what was wrong with it. Then we would make changes, and we'd get them to tell us why it still wasn't right, and so on—until we got it right.

It's the style we use today when working with surgeons to develop and improve the robots and other technologies of computer-integrated interventional medicine.

Actually, the lesson is even larger than that. Because of those experiences in the manufacturing world, I can really see the ways in which hospitals should be run more like factories. I'm not saying we should treat people like objects, of course, but we can use technology to help physicians deliver more consistent, safe, and effective care. Further, we can save the information used in treating each individual patient and analyze it statistically to improve our treatment processes continuously.

OK, let's back up. The way I got into medical robots, well, that happened after I became a middle manager at IBM. I got tired of spending most of my days in budget meetings, personnel evaluations, management presentations, and the like. It's ironic, I know, since those are the same meetings that occupy so much of a professor's time.

In any case, I said to my boss, "Look, I want to take a year off and do an internal sabbatical. I'd like to work with a small team to develop a system that accomplishes something real." One of the projects that I suggested built upon some preliminary work that my department had done helping two surgeons at the University of California, Davis, build a robot that could help in hip replacement surgery. In about a year we had a prototype built. One of the UC Davis grad students named it Robodoc. I never liked the name. But it was the first robot to do any sort of serious surgery. What it did was prepare the patient's thighbone in a very accurate way to receive an orthopedic implant.

About the same time, I worked with a surgeon at NYU on a system for craniofacial osteotomies. In these surgeries, they cut the patient's facial bones into several fragments, and then they plate the pieces back together in a way that creates a more normal appearance. We used a specialized camera system to track the positions of the bone fragments and provide this information to the surgeon to help align them accurately according to a plan made from a CT scan of the patient. That was one of the first times anyone used this kind of surgical navigation tool for anything outside of neurosurgery.

After my internal sabbatical, I started a research group within IBM to work on medical robotics. Perhaps the most important project was a robot developed with Johns Hopkins to act as a "third hand" for surgeons performing laparoscopic surgery. After I moved to JHU, IBM licensed my patents to Intuitive Surgical, which makes the da Vinci surgical robots and which has become the largest medical robotics company.

I eventually came to the conclusion that if I wanted to pursue computer-integrated interventional medicine for the rest of my career, it made more sense to do it within the same institution as the physicians I'd be working with. There was no place more ideal for me than Hopkins. It's an amazingly easy place to work, where everyone has this shared passion for what we do, and there is little departmental friction to stand in the way of cutting-edge research.

I came in 1995. And by that time all the things I'd learned along the way had come together for me into a real vision for how to proceed. The first thing in that vision is the rapid iteration working style from IBM. We work directly with our surgeon customers, and we constantly try out different things and get them to tell us what's wrong. That's been a key to our success.

You start with everything you know about a patient. A lot of that is in the form of images, but lab results and histories can be a part of it, too. You take all that information into the operating room—or, if it's something other than surgery, into the intervention suite—and that's where a process we call "registration" takes place. What that means is that you fuse together the virtual reality of the model with the actual reality of the patient.
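Under the hood, registering matched landmark points reduces to a rigid-body least-squares fit. A minimal 2-D sketch with made-up fiducial coordinates (clinical systems work in 3-D and treat measurement error far more carefully):

```python
import math

# Illustrative only: given matched fiducial points in the CT/model frame and
# the same landmarks located on the patient, find the rotation + translation
# mapping model coordinates into patient coordinates (2-D closed form).
def register(model, patient):
    n = len(model)
    mcx = sum(p[0] for p in model) / n
    mcy = sum(p[1] for p in model) / n
    pcx = sum(p[0] for p in patient) / n
    pcy = sum(p[1] for p in patient) / n
    # Centre both point sets, then solve for the best rotation in closed form.
    dot = cross = 0.0
    for (mx, my), (px, py) in zip(model, patient):
        ax, ay = mx - mcx, my - mcy
        bx, by = px - pcx, py - pcy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation carries the rotated model centroid onto the patient centroid.
    tx = pcx - (c * mcx - s * mcy)
    ty = pcy - (s * mcx + c * mcy)
    return theta, (tx, ty)

def transform(T, pt):
    theta, (tx, ty) = T
    c, s = math.cos(theta), math.sin(theta)
    return (c * pt[0] - s * pt[1] + tx, s * pt[0] + c * pt[1] + ty)

# Model fiducials, and the same points seen on the "patient" after a
# 90-degree rotation plus a shift.
model = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
patient = [(5.0, 3.0), (5.0, 4.0), (3.0, 3.0)]
T = register(model, patient)
mapped = transform(T, (1.0, 2.0))   # any planned point, now in patient space
```

Once the transformation is known, every point in the preoperative plan can be carried into the coordinate frame of the patient on the table, which is what lets the robot or navigation display act on the plan.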

Then you can use technology—maybe it's a robot, maybe it's something else—to help the physician do precisely what was planned and verify that what was planned really got done. The robot isn't the surgeon; the robot is a surgical tool. The humans provide the judgment and intelligence.

With a robot, you can give a surgeon the ability to feel things, sense things, and see things that are beyond human thresholds. You can give a surgeon the ability to get inside a patient with tiny little mechanical hands rather than having to make a great big hole. You can even overcome the natural human tremor in a surgeon's hands.
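Tremor cancellation can be illustrated, very loosely, as filtering: the deliberate motion is slow and the tremor is fast, so a low-pass filter (here a simple first-order one; real devices use far more sophisticated schemes) separates them.

```python
import math

# Loose illustration, not any real device's algorithm: hand tremor is a
# small, fast oscillation (roughly 8-12 Hz) riding on the slow, deliberate
# motion.  First-order exponential smoothing keeps the deliberate motion
# and attenuates the tremor.
def low_pass(samples, alpha=0.1):
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)      # exponential smoothing step
        out.append(y)
    return out

fs = 200                          # assumed sampling rate, samples/s
t = [i / fs for i in range(fs)]   # one second of motion
intent = [0.5 * ti for ti in t]                                # slow drift, mm
tremor = [0.05 * math.sin(2 * math.pi * 10 * ti) for ti in t]  # 10 Hz tremor
raw = [a + b for a, b in zip(intent, tremor)]
smooth = low_pass(raw)

# The filtered trace stays much closer to the intended path than the raw one.
err_raw = max(abs(r - i) for r, i in zip(raw, intent))
err_smooth = max(abs(s - i) for s, i in zip(smooth, intent))
```

The price of the simple filter is a small lag behind the intended motion, which is one reason real systems use model-based and adaptive approaches instead.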

In addition to helping with the immediate task at hand, this partnership of information, technology, and people can enable us to create a "flight data recorder" for the operating room. The computer can remember precisely what actions were performed, in what order, as well as everything else about an individual procedure. It should be possible to use machine learning and statistical methods to relate all this information to clinical outcomes in order to improve our processes.

That is the vision, really. And early on in my time here at Hopkins, I was able to work with colleagues to develop it into the largest proposal of my career: asking the National Science Foundation to help us build a center around these concepts. That proposal got us $30 million of seed money to establish the Engineering Research Center for Computer-Integrated Surgical Systems and Technology.

One of the keys to our success applied another lesson I had learned earlier in my career: It's good to collaborate with your competitors. While I was working on the proposal, I got to thinking, Who else out there might try to submit something like this? The Hopkins team was already amazing, but adding others would just make us stronger. That's how our original proposal came to include Hopkins, MIT, Carnegie Mellon, and Harvard Medical School. Somebody told me once that around NSF it came to be known as the "dream team proposal."

What else have I learned? I've learned that people are the most important keys to success. Having good people around you, that's what puts you in a position to succeed. I've been really, really lucky on that front.