“Technology made large populations possible; large populations now make technology indispensable.”
– Joseph Wood Krutch (Writer)
Since the 19th century, we have undergone several stages of machine revolutions.
The first stage was mechanisation. The advent of modern production methods eased and sped up manufacturing, massively increasing output and ushering in the Industrial Revolution.
The late 60s brought the age of the working computer. In 1967, an IBM infomercial, ‘The Paperwork Explosion’, predicted a dystopian future where unstoppable progress had created so much paperwork that it threatened humanity’s very existence, a development that could only be reversed with IBM’s advanced business machines. “They take care of the paperwork so you don’t have to,” concluded the ad, with the cast adding, “Machines should work. People should think.”
The current age of machine revolution is now marking a 20-year anniversary – in 1997, chess supremo Garry Kasparov was defeated by IBM’s Deep Blue, rocketing fledgling AI technology into the public eye. It was a task many deemed impossible at the time, as many more did before Google DeepMind’s AlphaGo defeated one of the world’s top Go players in 2016, a decade ahead of expectations.
As always, the most ambitious and innovative continue to stretch and break through the boundaries of possibility. This stage of machine revolution is turning IBM’s 1967 infomercial on its head. Now, the machines are thinking too.
A hard stop?
The age of the thinking machine is finding a natural home in healthcare, as discussed in recent articles, including Oncology’s Incessant Grip by Jeff Elton. Big data is a major driver; as technology has developed and the global population has boomed, so has the quantity of data collected – everything from a patient’s history and symptoms to lab tests, medical images and research papers.
The quality and quantity of this data is increasing exponentially – it is expected to grow more than 50-fold this decade alone, reaching 25,000 petabytes (25bn gigabytes) worldwide by 2020. To put that into perspective, 2 petabytes is the equivalent of the contents of all US academic research libraries combined. No wonder AI is needed to work through this rising flood of data, and help us avoid a digital version of IBM’s dystopian future.
It is a problem described as “potentially a hard stop to human progress” by Bernie Meyerson, IBM’s Chief Innovation Officer.
While a doctor reads about half a dozen medical research papers a month, AIs like IBM Watson can read half a million papers in 15 seconds. Deep-learning AI start-up Enlitic has technology that can analyze and interpret a medical image in milliseconds, up to 10,000 times faster than the average radiologist.
“Electronic health records [are] like large quarries where there’s lots of gold and we’re just beginning to mine them,” says Dr Eric Horvitz, Managing Director of Microsoft Research, who specializes in applying AI to healthcare settings.
While a human might only scratch the surface of this ‘gold’, an AI can sift through thousands upon thousands of ‘tonnes’ of material. Deep-learning technology, such as Enlitic’s, can go even further, scouting not only for evidence of a particular disease (as most doctors and traditional computer-aided diagnostics systems do) but looking for and identifying multiple different diseases simultaneously.
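The difference between the two approaches can be sketched in a few lines of Python. This is a toy illustration, not Enlitic’s actual system, and the disease names and scores are hypothetical: a traditional single-disease classifier reports only the most likely finding, while a multi-label approach flags every disease whose independent score crosses a threshold.

```python
# Toy illustration of single-disease vs multi-label screening.
# The scores are hypothetical per-disease probabilities from an imaging model.

def single_disease(scores):
    """Traditional CAD-style output: report only the most likely finding."""
    return max(scores, key=scores.get)

def multi_label(scores, threshold=0.5):
    """Multi-label screening: flag every disease whose independent
    probability exceeds the threshold, not just the top one."""
    return sorted(d for d, p in scores.items() if p >= threshold)

scan = {"tuberculosis": 0.91, "lung_nodule": 0.72, "pneumonia": 0.08}

print(single_disease(scan))  # only the single most likely finding
print(multi_label(scan))     # every finding above the threshold
```

The practical difference is that the second approach would surface the incidental lung nodule as well as the tuberculosis, rather than stopping at the single strongest signal.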
Patterns in chaos
The successes in diagnostics are astounding. In April, UK researchers used machine-learning algorithms to correlate medical histories with rates of heart attacks. The team was given data on 295,000 people and asked to predict who would have heart attacks among 83,000 further samples, where subsequent medical history was already known. The results were then compared to predictions based on current best practice under the American College of Cardiology/American Heart Association (ACC/AHA) guidelines, which factor in patient age, smoking history, cholesterol levels, diabetes history and other risk factors.
As science writer Matthew Hutson wrote: “All four AI methods performed significantly better than the ACC/AHA guidelines. The best one — neural networks — correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms. In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved.”
Interestingly, the algorithms also found areas where the ACC/AHA guidelines were erroneous. “Several of the risk factors that the machine-learning algorithms identified as the strongest predictors are not included in the ACC/AHA guidelines, such as severe mental illness and taking oral corticosteroids,” wrote Hutson. “Meanwhile, none of the algorithms considered diabetes, which is on the ACC/AHA list, to be among the top 10 predictors.”
Support or replace?
Whilst they are fantastically accurate at quickly interpreting vast quantities of data, AIs are yet to demonstrate that they can replace humans altogether. “Human brains bring passion to the work, they bring common sense,” says IBM’s Meyerson. “By definition, common sense is not a fact-based undertaking. It is a judgment call.”
In fact, there is mounting evidence that AI serves healthcare best when working in combination with people, especially doctors, as another weapon in the arsenal to fight disease. A new study, also reported in April, trained AI to recognize signs of tuberculosis in chest x-rays – the best results came when two different deep convolutional neural network (DCNN) models, AlexNet and GoogLeNet, worked together. They achieved an accuracy of 96%, higher than many radiologists. That number jumped to an astounding 99%, however, when a human radiologist was brought in to adjudicate the cases where the two models disagreed.
A global health boost
These findings suggest that AI could save healthcare professionals hundreds or even thousands of hours of analysis each year, enabling them to reach more patients.
“An AI solution that could interpret radiographs for presence of TB in a cost-effective way could expand the reach of early identification and treatment in developing nations,” concluded study co-author Paras Lakhani, MD, from Thomas Jefferson University Hospital in Philadelphia.
Such developments could also bring down the overall costs of treatment, aiding global healthcare systems struggling under the burden of serving more than seven billion people. They could reduce localized healthcare poverty, giving regions that lack trained healthcare professionals access to the help, analysis and treatment they require.
Nuclear war on an iPad
However, with big data come big challenges. The recent WannaCry ransomware attack showed us just how vulnerable many companies are to security breaches, providing a wake-up call to all organisations and governments afflicted, including the UK’s NHS. Although Microsoft had patched the vulnerability WannaCry exploited two months before the attack, large organisations that had not been updating their systems remained exposed.
As patient data becomes more detailed and the world becomes more interconnected, security could become a major issue. As William J Perry, nuclear expert and former US Secretary of Defense under Bill Clinton, recently commented on BBC Radio: “Just because we can launch nuclear weapons from the President’s iPad, doesn’t mean we should.”
Whilst the dangers of patient data being hacked are not of the same critical importance as nuclear weapon systems, cybersecurity failures can have damaging long-term impacts on people’s lives. Many diseases, for instance, still carry social stigma in some countries, including AIDS, where a leak of a patient’s details could severely affect their ability to lead a normal life.
The recent leak of confidential patient photos and associated medical records from the renowned Grožio Chirurgija clinic in Lithuania is an important example. Many patients were people in the public eye, and their data included nude photographs, private contact details and bank account information. The data is currently being sold on the dark web for €2,000 a pop, or €344,000 for all 25,000 patients, while individual patients, and the clinic itself, have been blackmailed, often with texts to private numbers threatening to release photos unless a ransom is paid.
These revolutionary changes and the issues that accompany them further underpin the importance of open and frank communication within the healthcare industry. Not only must people and AI work hand in hand to maximise the enormous potential of emerging technologies, but we must also communicate the efficacy, safety and incredible utility this technology possesses both within the industry and among the general population.
As Arthur C Clarke famously wrote: “Any sufficiently advanced technology is indistinguishable from magic.” The human race’s scepticism and fear of magic is well documented by the witch hunts and trials throughout our history. Effective communication is vital to transforming these new technologies from magic into important, widespread and seemingly normal parts of our lives.
Jacqui Sanders is Director at FleishmanHillard Fishburn
Original Article: Thinking Machines: From Magic to Normal