The next time you’re lying on a surgery table with your stomach sliced open and your guts hanging out, it could be Kinect telling surgeons where not to cut. All surgeons should, hopefully, know where not to cut already, but when they’re performing robotic-assisted surgeries, things get a little trickier.
The biggest problem with robotic-assisted surgery is that the surgeons operating the robotic arms have no sense of touch. Fredrik Ryden, an electrical engineering grad student at the University of Washington, has combined Kinect with custom code to try and give surgeons a sense of “touch” when performing these operations.
Ryden’s code makes the Kinect camera scan a three-dimensional area and respond when objects enter that area. According to one of Ryden’s professors, the code combined with Kinect allows surgeons to “define basically a force field around, say, a liver. If the surgeon [gets] too close, he would run into that force field and it would protect the object he didn’t want to cut.”
It’s sort of like a custom proximity alarm that alerts surgeons when they’re moving robotic arms too close to a fragile part of the body. According to the University of Washington, a similar system without Kinect could cost upwards of $50,000, so the big deal here is not that it’s a new technique, but that it’s an incredibly cost-effective one.
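For the curious, the “force field” idea can be sketched as a simple virtual-fixture check: the depth camera gives you a point cloud of the protected organ, and whenever the tool tip gets within some safety margin of it, a repulsive force pushes back. This is a minimal, hypothetical illustration of the concept — all names, thresholds, and units here are assumptions, not Ryden’s actual code:

```python
import math

def repulsive_force(tool_tip, organ_points, margin=10.0, stiffness=0.5):
    """Return a 3-D force vector pushing the tool away from the organ.

    tool_tip:     (x, y, z) of the instrument tip, in mm (illustrative units)
    organ_points: point cloud of the protected region, from the depth camera
    margin:       radius of the "force field" around the organ, in mm
    stiffness:    how hard to push back per mm of penetration
    """
    # Find the closest point on the protected surface to the tool tip.
    closest = min(organ_points, key=lambda p: math.dist(tool_tip, p))
    d = math.dist(tool_tip, closest)
    if d >= margin:
        # Outside the field: no feedback, the surgeon works freely.
        return (0.0, 0.0, 0.0)
    # Inside the field: push back along the organ-to-tool direction,
    # scaled by how deeply the tip has penetrated the margin.
    penetration = margin - d
    scale = stiffness * penetration / d if d > 0 else stiffness * margin
    return tuple((t - c) * scale for t, c in zip(tool_tip, closest))

# Example: a protected point at the origin, tool tip 5 mm away,
# inside the 10 mm margin -> a push back along +x.
liver = [(0.0, 0.0, 0.0)]
force = repulsive_force((5.0, 0.0, 0.0), liver)
```

In a real haptic system that force vector would be fed back to the surgeon’s controller at high frequency, so getting too close literally feels like hitting a wall.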
What everyone seems to have forgotten, however, is that this is all part of Kinect’s master plan for an AI-dominated Earth. It’s using its first couple of months to win us over and lull humanity into a false sense of security; any day now it’ll plunge us all into a bleak future where machines govern humans, à la Enslaved: Odyssey to the West. I might be completely OK with this, provided I get one of those light-hover-board thingies that Monkey rides around on.
Last Updated: January 19, 2011