Robots, AI and health: will it heal us – or kill us?

The surgeon waiting for you in the operating room has downloaded all the diagnostic files related to your case and performs the procedure with micrometer precision.

The risk of error or infection is reduced, incisions are minimized and recovery time is accelerated.

Except that the person performing the surgery will not be a human, but a super-intelligent robot specially programmed to perform emergency abdominal surgery.

This may sound like science fiction, but for those working at the forefront of artificial intelligence and robotics, it’s seen as a likely future for healthcare.

“I would expect that at some point in the future you should expect something like this,” said Subramanian Ramamoorthy, Professor and Personal Chair of Robotic Learning and Autonomy at Edinburgh University’s School of Informatics.

“Predictions are always difficult, but technically that would be the natural progression. It will gradually go from smaller procedures to larger procedures.

“Introduction will be slow. We will start with existing, minimally invasive surgeries and take them one step further.

“Then, once people start trusting it, you take it a step further into task-directed robotic surgery: you tell me what piece you want to cut out — say, a polyp — and then that’s automated.

“Then one day in the future – it’s hard to predict when – we can imagine you could have a whole series of surgeries, but we’re not there yet. From an economic point of view, we are not even at the beginning.”

Current ‘robot surgery’ systems used in the NHS, such as the Da Vinci robot, are still operated and guided by a human surgeon (Image: PA)

READ MORE: NHS Glasgow in ‘world’s first’ AI trial for COPD patients

Ramamoorthy is one of the pioneers of artificial intelligence (AI) in medicine.

Working in a purpose-built lab at Edinburgh’s Bayes Center that mimics a typical operating room environment, he pioneered the development of sensor-guided autonomous robots that can help cancer surgeons “push towards tighter margins” – meaning less healthy tissue is removed and recovery improves.

This work on safe AI for surgical assistance builds on ideas Ramamoorthy first explored through research into self-driving vehicles.

He sees parallels between the incremental advances in autonomous driving technology — from parking assistance to driverless cars — and the incremental advances in healthcare from surgeon-guided robots (already a reality) to the autonomous robotic surgeons of the future.

“Everyone starts out cheering, then gets a little bit disappointed, and then you gradually grow,” Ramamoorthy said.

“It’s exactly the same here. For insiders, the hype wasn’t justified; likewise, the feeling that some people have that it won’t happen is also unjustified because it was always going to be a long game.”


Recently there have been calls to pause AI development for six months amid fears it is unsafe (Image: Edinburgh University)

When it comes to diagnostics, AI is already gaining a foothold in the NHS.

A successful study in Grampian used AI as a “second pair of eyes” to scan 80,000 mammograms for signs of breast cancer.

It is also being piloted in Glasgow to alert doctors to COPD patients who are most at risk of emergency hospital admissions, so that preventative measures can be taken instead.

However, when it comes to robots performing operations, says Ramamoorthy, it’s a bit like the transition from map reading to GPS.

He said: “Right now a surgeon looks at the imaging in a room outside, keeps it in their head, and then goes in and does the surgery based on what they can see.

“It’s a bit like the old-fashioned way of steering a ship after looking at the map elsewhere, whereas here we’re talking more of GPS-guided navigation.

“What we’re looking for here is real-time diagnostics that give the robots micron-level accuracy.

“The issue of understaffing is secondary in a lot of ways – not because it’s unimportant – but we’re nowhere near getting rid of people, because people will still be sitting there supervising and monitoring it.

“In the beginning, accuracy will be the driver.”

READ MORE: We must heed the danger of the rise of artificial intelligence

Ramamoorthy will address the latest developments during a presentation at the Bayes Center on Thursday.

The Games Robots Play event is part of a week-long discussion series on artificial intelligence taking place as part of the Edinburgh Science Festival.

It comes days after AI experts, including Twitter billionaire Elon Musk and Apple co-founder Steve Wozniak, called for a global pause in the training of human-competitive AI systems, warning that they “pose profound risks to society and humanity”.

It follows the March 14 release of GPT-4, the next generation of the deep learning language model behind the chatbot ChatGPT.

While Musk and Wozniak warn that no one can “understand, predict, or reliably control” these emerging innovations, others have likened a moratorium on AI development to “[pausing] the Manhattan Project to let the Nazis catch up” – a reference to nuclear weapons.

No one survives the war against the machines. #T2


— Terminator 2 Movie (@Terminator2Mov) November 11, 2019

To the uninitiated, all of this seems unnervingly reminiscent of HAL 9000 – the rogue computer in 2001: A Space Odyssey – or Skynet, the fictional artificial intelligence system in the “Terminator” series which, on becoming self-aware, launched global nuclear war before its human inventors could stop it.

The late theoretical physicist Professor Stephen Hawking once warned that it was impossible to predict whether humanity would be “infinitely helped, ignored, or possibly destroyed.”

“If we don’t learn to prepare for and avoid potential risks, AI could be the worst event in the history of our civilization,” he said at a 2017 technology summit in Lisbon.

The question of whether sentient robots are “friends or monsters” will be discussed by a guest panel at the Edinburgh Science Festival on Tuesday, moderated by Professor Michael Herrmann of the Edinburgh Center for Robotics.

He said: “In the past there was no real question as to whether robots or machines could be sentient, but now there’s a sense that something has changed – a new quality has been achieved – so we need to ask those questions again.”

The concept of sentience in robots raises a number of dilemmas, from the ethical to the existential: should robots have rights, for example? And if we can create consciousness in machines, does that prove once and for all that it is not a divine gift unique to people?

Robots with human-like intelligence and consciousness are ‘possible’ – but the consequences are uncertain (Image: CornerShopPR)

One of the panelists, Rupert Robson, author of The Sentient Robot, notes that we still don’t know why consciousness exists.

He said: “If you think about our brains, all sorts of cognitive and emotional functions are going on — all sorts of information processing.

“The question is, why doesn’t all this information processing happen in the dark, the way it does in a calculator?

“And yet we know it does not happen in the dark – we are aware of it. That is sentience.

“But it’s not absolutely clear what sentience or ‘consciousness’ brings to the party, because all this information processing is going on anyway.

“Do AIs or algorithms like ChatGPT and GPT-4 have sentience or consciousness?

“Absolutely not – not yet.

“Is there any chance that we will be able to figure out consciousness to embed in robots? Yes, that is possible.


“But it won’t happen by accident. It will happen because we engineered it into the robot.”

READ MORE: Why artificial intelligence will be key to NHS staff challenges and wait times

For his part, Robson believes sentience may be what actually saves us from a Terminator-style doomsday.

“Make no mistake, we’re going to develop – over time – really super smart robots, with a much wider range of intelligence than ChatGPT, and at that point we face a danger – a risk to ourselves – and we need to mitigate that risk.

“I think sentience is a way of doing that.

“If [the robots] see the world through our eyes, if they can empathize with us because they’re sentient, then I think there’s an argument – a good argument – that we have a better chance of them being kind to us than hostile.”

Dr Cian O’Donovan, a researcher at University College London, is focused on the more mundane world of healthcare, and on ensuring we use AI to our advantage – not to replace staff, but to free up clinicians and caregivers to spend more time with patients.

He said: “It’s not just a case of ‘the robots are coming and doing all the jobs’ – if the robots are coming, that means we have to think really hard about education.

“Patients will benefit as robotics and automation technologies allow them to spend more time with human caregivers.”

Maximizing person-to-person contact time in care is seen as one of the potential benefits of AI and automation (Image: Getty)

O’Donovan warned that AI is “not a panacea” for the labor shortage, and that we must still plan for an aging population.

He added: “There is a risk that because of the successes – or perceived successes – in areas such as diagnostics or the replication of chess players, we transfer those successes to other areas too quickly.

“When you think of wards, of nursing homes, these environments are so unpredictable and so far removed from the board games, from the x-ray labs, or, in the case of robots, from the factory floor.

“I don’t think that’s fully priced in by governments that believe AI technologies are the future across the board.”

“Sentient Robots: Friends or Monsters?” takes place on Tuesday 11th April at the Bayes Centre, Edinburgh

“Can Robots Care?” with Dr. Cian O’Donovan takes place on Wednesday, April 12 at the Bayes Center

Games Robots Play will be held on Thursday, April 13th at the Bayes Center