Robotic surgery is getting better fast thanks to the same type of AI that powers ChatGPT
Hello and welcome to Eye on AI. In this week’s edition…LLMs pave the way for autonomous robotic surgery; Apple Intelligence is released for the iPhone; The U.K. government launches a chatbot; AI workers seek whistleblower protections; and generative AI offers five million reasons to rethink e-waste.
Two of the most fascinating topics I’ve covered in this newsletter are AI’s transformative effects in healthcare and how large language models have reignited progress in robotics. Today’s edition brings these together: I’m talking about AI-powered robotic surgery.
I attended a Johns Hopkins University panel where researchers specializing in this area predicted that, in large part thanks to LLMs, autonomy is poised to become “a ubiquitous part of surgery” in the near- to medium-term future. Similar to how vehicle autonomy began with automating some capabilities, they said autonomous robots will initially take on select tasks that support human surgeons. They also had no doubt we’ll see surgeries routinely performed fully autonomously by robots in our lifetime.
LLMs change the game
After hitting a wall with its previous approach of pre-programming robots to make specific movements, the robotics field has found a way forward using the same approach that has made LLMs so successful. Axel Krieger, a mechanical engineering researcher focused on the development of surgical systems, spoke on the panel about new research he and his colleagues would be sharing this week: using the Transformer architecture—the same one that powers tools like ChatGPT, only with robot actions as the output—to overcome limitations in how the da Vinci surgical robotic system learns tasks via imitation learning. The team demonstrated its findings through the successful execution of three fundamental surgical tasks: tissue manipulation, needle handling, and knot-tying.
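To make the idea concrete, here is a minimal, purely illustrative sketch of what a transformer policy trained by imitation learning can look like in PyTorch. This is not the Johns Hopkins team's code: the class name, dimensions, and data are hypothetical stand-ins, and a real surgical system would train on recorded kinematics and video from expert demonstrations rather than random tensors.

```python
import torch
import torch.nn as nn

class SurgicalImitationPolicy(nn.Module):
    """Toy transformer policy: maps a short history of observation features
    to a chunk of future robot actions, learned by imitating demonstrations."""
    def __init__(self, obs_dim=64, act_dim=7, d_model=128, horizon=8):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)            # project observations
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim * horizon)   # predict an action chunk
        self.act_dim, self.horizon = act_dim, horizon

    def forward(self, obs_seq):                              # (batch, time, obs_dim)
        h = self.encoder(self.embed(obs_seq))
        return self.head(h[:, -1]).view(-1, self.horizon, self.act_dim)

# Behavior cloning on (observation, expert action) pairs from recorded demos.
policy = SurgicalImitationPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
obs = torch.randn(32, 10, 64)           # stand-in for kinematics/vision features
expert_actions = torch.randn(32, 8, 7)  # stand-in for recorded tool motions
loss = nn.functional.mse_loss(policy(obs), expert_actions)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The key point is the one the panelists made: the network learns the mapping from observations to robot actions directly from demonstrations, instead of engineers hand-coding each motion.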
Overall, robotic surgery research has followed the trends of AI, moving from a “model-based approach” to a “learning-based approach” that enables surgical robots to learn via demonstrations rather than be designed with specific domain expertise. These models also improve over time with more data and can scale to accomplish different kinds of tasks.
The downside of the learning approach, however, is that it’s more opaque and unpredictable than the model approach. Russell Taylor, another researcher on the panel, argued that the introduction of these capabilities into actual clinical use will have to be gradual, with clear assurances that the robot will not harm the patient. He also argued that robots should only be used if they have a clear advantage, either enabling surgeons to perform tasks better and more safely, or making it possible to do something that wasn’t possible before. If these robots are further proven to be more consistent and precise, it could be a game-changer for patient safety and surgical outcomes: human surgeons are human, after all, which means they get fatigued and make mistakes.
The future of surgery
Already, surgeries like LASIK eye surgery are performed with a high degree of autonomy, but research is starting to show promise for even more complicated surgeries. In another study, Krieger and team found that a Smart Tissue Autonomous Robot (STAR), which is designed for complex soft tissue surgeries, can actually outperform expert surgeons on some key metrics.
“We have fewer of these hesitancy events when we misplace a needle and have to pull it back out, so significantly better in these errors. And also in the spacing (of the sutures)—much more consistent,” he said.
The goal behind AI-powered robotic surgery is two-fold: make surgery less invasive for patients and meet the growing demand. According to data shared during the panel, there is already a shortage of surgeons, and the caseload is expected to double as the aging population grows. Robots could augment critical portions of surgery with better precision and increasing autonomy. Each year, 310 million major surgeries are performed. In 2023, about four million were performed with robotic assistance, a figure expected to grow by 18% annually.
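For a back-of-the-envelope sense of what those panel figures imply, the short Python snippet below simply compounds the roughly four million robot-assisted procedures from 2023 at 18% per year. This is my arithmetic on the quoted numbers, not a projection from the researchers.

```python
# Back-of-the-envelope projection using the figures cited on the panel:
# ~4 million robot-assisted surgeries in 2023, growing ~18% per year.
base_2023 = 4_000_000
growth_rate = 0.18

for year in range(2023, 2031):
    count = base_2023 * (1 + growth_rate) ** (year - 2023)
    print(f"{year}: ~{count / 1e6:.1f} million robot-assisted surgeries")

# If the trend held, that would be roughly 12.7 million procedures in 2030,
# still a small slice of the ~310 million major surgeries performed each year.
```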
This doesn’t mean surgeons will be out of work anytime soon, but their role will likely change.
“The more likely scenario is that the surgeon will remain in overall strategic control while delegating specific tasks to the robot while the surgeon supervises,” said Taylor. “As AI algorithms become more powerful, the delegated tasks will become more sophisticated.”
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
Anthropic partners with Palantir and AWS to sell its AI to defense customers. The firm—which has a reputation for approaching AI more safely than competitors like OpenAI—announced today that it will provide its Claude models to U.S. intelligence and defense agencies. Meta earlier this week also announced it’s making its Llama models available to defense customers, and OpenAI has been seeking inroads to the Department of Defense as well. Last month, OpenAI hired a top executive from Palantir (a company that remains highly controversial for its predictive policing technologies, government surveillance work, ICE contracts, and more) as its new chief information security officer. Altogether, these actions signal a massive shift among the most powerful AI companies toward using the technology for defense and military purposes—the very use case many have feared most.
Apple releases its AI-powered iOS 18.2 in public beta. The release for iPhone 16 users includes the ChatGPT integration with Siri, an AI emoji generator, writing assistance across apps, image generation capabilities, and the ability to search for and identify real-world objects using the device’s cameras. The caveat is that the features—collectively called Apple Intelligence—are not being turned on by default and users will have to sign up for a waitlist for some features. Apple has not given a timeline for when the full suite of features will be available to all users. You can read more in TechCrunch.
The U.K. government launches a chatbot for business users. Powered by OpenAI’s GPT-4o, the chatbot is designed to help users more easily navigate the 700,000 pages of the GOV.UK website and find answers to questions about business regulations and more. It will be tested with 15,000 users before being rolled out more broadly. The Guardian got an early preview and said the results are “mixed,” also noting the chatbot warns users of the possibility of hallucinations.
AI workers seek whistleblower protections. Lawrence Lessig, a Harvard law professor who’s representing current and former OpenAI employees raising concerns about the company, spoke about the efforts to Bloomberg Law. AI workers argue that advancements in the technology pose threats they can’t legally expose under current law. They want Congress to grant them specific protections as it considers an AI legislative package to boost research and promote guidelines for the technology. The legislation is gaining momentum to be passed in the post-election lame-duck session as pressure mounts to regulate AI.
FORTUNE ON AI
Trump 2.0 will have a massive impact on Big Tech, AI, chips and more—in Silicon Valley and beyond —By David Meyer
Perplexity bets big on AI search during the election while Google and OpenAI say they can’t help —By Sharon Goldman
Meta’s plans to build a nuclear-powered data center for AI fell through because of rare bees —By Paolo Confino
AI CALENDAR
Nov. 11-15: Web Summit, Lisbon, Portugal
Nov. 19-22: Microsoft Ignite, Chicago
Dec. 2-6: AWS re:Invent, Las Vegas
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)
EYE ON AI NUMBERS
5 million
That’s how many tons of e-waste the hardware (computer chips, networking cables, etc.) used to train generative AI could generate by 2030, according to MIT Technology Review. It’s “a relatively small but significant fraction of the global total”—60 million tons annually—but experts say it’s part of a growing problem and that the rise of generative AI represents a reason and opportunity to find sustainable solutions.