No more pilots? No more doctors?

The nightmare sickens me. A small child trusts a man to protect her, take care of her, and shield her from harm. The man, for incomprehensible and useless reasons, neglects her unto death. When I think of Germanwings Flight 9525, I am haunted by the photo of a single tiny girl, taken in the last days of her life; the obliteration of that perfect life’s potential. Waste, tragedy, evil.

In the days that followed the crash, the blogosphere, media talking heads and airline safety experts obsessed over preventing another murder by pilot. Always put two people in the cockpit. Three people, one with a gun. Give them all guns. Remote-control technology on the cockpit door. Psychiatric testing, yearly. Monthly? Weekly? Clever, expensive workarounds to “fix” the capacity of the human mind to stray from the rational. However, it occurs to me that the answer is more obvious, if more threatening. An empty cockpit. No pilots at all.

A computer would not get depressed. A computer would not get angry. A computer would not get tired. A computer would not forget or get confused. A computer would not develop the aberrant logic for which killing is an expression. A computer would not fly a plane into the side of the French Alps.

A computer would also not have flown a plane into the World Trade Center North Tower on September 11, 2001. Or the South Tower. Or the Pentagon. Or have had to be forced into a catastrophic crash in a Pennsylvania field. Every plane would take off, fly and land perfectly. It would not matter if someone’s wife left him, if his health failed, if his medication turned toxic, or if some fanatical calling required the blood sacrifice of innocents.

It may be that the only rational conclusion is that men cannot be trusted with complex technology on which depend the lives of other men. And women. Or children. Or a tiny girl for whom I weep.

Put in the context of the Institute of Medicine (IOM) 1999 report, To Err is Human, the 150 people killed on Flight 9525 are only symbols, footnotes. When it comes to avoidable death by human failure, the American health system overshadows all the airplane deaths in history, including in war. As many as 98,000 preventable deaths per year, by medical error. That is 269 per day. 11.19 every hour. One every 5 minutes, 22 seconds. That is an A320 Airbus, twice, every day, forever.

While the IOM study has been extensively debated, the concept of such needless human waste terrifies patients, doctors and healthcare planners. It is the focus of innumerable projects, Lean and Six Sigma analyses, and an expanding culture of “quality.” The entire medical establishment, and even those not of “the establishment,” is struggling to reduce this ongoing incomprehensible tragedy to “acceptable loss.”

Nonetheless, we must not forget: the airline industry has nearly perfected Six Sigma. They already get quality right. The lesson may be that there is a limit to making man better. Perhaps medical men cannot be trusted with complex technology on which depend the lives of other men. Women. Children. Or perfect little girls.

Of course, healthcare is nowhere as “simple” as flying a machine through a series of predictable and repetitive maneuvers. There is much more than terrain and weather to consider. Having practiced medicine for 35 years, I know each day and each patient are different.

However, I also understand that medical technology is incredibly complex and so rapidly changing that no human can possibly keep up. I know doctors get tired and physicians are flawed, and while they rarely, if ever, are motivated by the twisted logic by which a pilot kills his passengers, they nonetheless make mistakes by which their patients suffer and occasionally die.

What does this mean for the future of medical care? What is the role of the doctor? The computer? The empowered patient? Do we rely on software, perhaps apps on smartphones, to order tests, make diagnoses, prescribe care? Will robots perform intimate exams? Do machines in the back of pharmacies do surgery? Can artificial intelligence lead research, produce innovation, inspire leadership? If so, who or what do they lead? Who educates families in the middle of dysfunction, crisis and emotional distress? Who sits at cancer’s bedside, hand on shoulder, listening, providing courage, relieving fear, giving hope?

We have a conundrum. While we may be repulsed by the idea that doctors might go the way of travel agents, transcriptionists or bookstores, 100,000 deaths each year in a nation with arguably the world’s most advanced healthcare system is unacceptable. Around the world, over time, millions die from preventable error. Hardware, software and Big Data could be the solution. Myriad lives might be saved. Nonetheless, as the doctor exits, replaced by the wall-mounted camera, sensors on the exam table and a monitor on the desk, what does it mean for the humanity left behind?

P.S. As I complete this piece, I come upon the story of Mr. Iftikhar Hussain, 64 years old, from Chicago, Illinois. It seems he went for a drive on Sunday, to visit family in Indiana. Carefully following the GPS instructions given by the computer in his 2014 Nissan Sentra, he turned onto a ramp to a bridge over the Indiana Harbor and Ship Canal. Iftikhar deftly swerved around several bright orange barrels, which almost, but not completely, blocked the road ahead. Their car plummeted off the bridge, which had been demolished several months before, and burst into flames; his wife Zohra died. There are some fatal errors in which man and machine collude.

10 Comments

  • Josh
    In having this discussion, another worthwhile question to add to yours above is this: is there something in the relationship between the person who provides care (traditionally, doctor, but could be nurse, PA, NP, etc.) and the person who seeks it (i.e., the patient) apart from the mere application of technical skill? Is there anything important that happens when a human, who has knowledge and skill, sits with a human, who is sick and vulnerable, and is simply present in the suffering? Some sub-specialists may be likened to the technology you describe above. They may be technicians of disease, rather than physicians in any meaningful sense of the term. I think the difference matters for humans.
  • J.S. Clark
    As always, Doctor, a thoughtful and discerning piece. Jeff
  • Liz
    A computer could just malfunction… hard drive crash, virus, malware, trojan… Artificial intelligence is not yet so intelligent that it can completely replace a human in complex settings.
  • meyati
    Or it could be run by my cable, phone, and internet provider. We are supposed to have two days free because of their engineer-inspired update. Remember, all of those planes will crash during a software change or update. I call on my cell phone, and the automatic responder tells me to use my useless computer. Whatever. I press the buttons for options that sound as though they might apply to me, and I still end up in the wrong department.
  • D Someya Reed
    Yes, but not only can a computer have unintentional faulty programming or be intentionally programmed to do the wrong thing, in a pilot-less airplane it WILL be hacked. The target is too tempting for the sick and twisted. One thing that bothers me is...we know the statistics for medical errors (at least those that can't be suppressed) yet what are the consequences to those responsible for the errors? Without consequence, we humans get lazy, get sloppy and get callous. Without consequence, aren't we telling those who should have known better or should have used greater care that what they did was OK? Does this seem enough to conclude that they "probably" won't do it again? Which leads to something even more troubling. Why is medicine looking to come up with "acceptable loss?" Why is medicine even entertaining a MILITARY euphemism? I understand the medical concept of "acceptable risk" but not "acceptable loss." Acceptable loss assumes that you are working towards a goal. In a medical setting that would have to be the eradication of disease and cure for the patient. Medically, this would mean that in order to save "x" number of patients we are willing to sacrifice "y" number of patients. How does this not reduce the patient to that of a test subject, lab rat or guinea pig? But think about this a bit further. What if any of you were individually responsible for deciding what constitutes acceptable loss? How would you do it? Who would you decide to be acceptable to lose? The rich? The poor? What if you, personally, had to tell each family that you decided their loved one was an acceptable loss? Your personal safety would have to be a concern. But what would you change if you knew that you would be protected from any consequence of your decision to save this patient but not that one? And there you have your computerized physician. No feelings, no concerns, just following the programming. S.o.r.r.y...f.o.r...y.o.u.r...l.o.s.s. 
P.l.e.a.s.e...p.a.y...a.t...t.h.e...w.i.n.d.o.w. Science fiction has often become science fact, perhaps predictive of the future. Some of the better film adaptations of books covering the over-reliance on technology in pursuit of the perfect world (and their characters/consequences): Colossus in 'Colossus: The Forbin Project'; HAL 9000 in '2001: A Space Odyssey'; any Cyberdyne system in the 'Terminator' series; even Carousel in 'Logan's Run'...just replace "over 30" with "sick." Can we who are so flawed make something that is truly flawless? Isn't that pretty much what they said about the Titanic? Can we make a technological perfection while excluding any human error or negative human manipulation? Even one error is less than perfect. Even one life lost to error (especially preventable error) is "unacceptable loss."
  • Dg
    Computers have glitches therefore data is always flawed as is man--something resulting in unexpected and unwanted death and suffering. Should you never risk walking or running because you might fall and break a leg?? Should doctors not try to saves lives because some patients may die because of errors or lack of knowledge? The human condition will always be fragile - it's only one breath away from death.
    • D Someya Reed
      Your point is well taken. However, should you run headlong, at break-neck speed down a mountainside at the urging of someone who will neither run with you at your side (with equal risk) nor fully explain the risks you are taking (alone) even though he knows both what the risks are and that you don't know them? His reasons for his actions being that he doesn't want you to lose hope that you "might" make it, even though the deck is stacked against you, while he sits safely at either the top or bottom of the mountain, relatively risk-free. Or worse, he might explain that he wasn't taught how to tell you the risks at school. Can you spell accountability? No one needs to be taught compassion and caring...it's a choice. And, really, that's all that is needed with a little touch of ambition to look up the information yourself (or find an appropriate mentor) when you know this sort of thing is going to be part of your job. The job you chose. Though it's funny, and not Ha-Ha, that when the risk is all ours, we can seem to independently learn so much even if it is mostly from the dreaded Internet. And how often does it turn out that the information we find is accurate and not wacky? But back to your run...Oh, he most likely will be sad if you fail but he won't encounter any physical trauma such as you might. So, as proposed above, a better way would be to have a third-party build a break-neck speed, mountainside descender machine that will not be under the third-party's control on the day of your run, nor under the control of the person urging you to run, nor under your control. So now we've added another significant level of risk (just for you) and we've really not taken people out of the equation, have we? People will have built the machine and the technology that drives it. Sure, both the urger and third-party builder may have some financial or reputational risk but that pales in comparison to the risks you are taking. Technology is not always the answer. 
Neither is removing people from the equation. Even NASA, who wrote the book on redundant systems, would never tell you that their technology is perfectly safe. Leading people to believe that technology will always solve "the human condition" can be just as bad as telling them that a new drug may be discovered tomorrow. Both can lead to false hope unless they are here now and safe for you to use. Eternal optimism is not always appropriate. Sometimes, though not always, the best things for us may also be the simplest. Always looking to or relying on technology to be the ultimate answer can be just as bad as not using it at all. You are correct, though; we should walk and run even if we might break a bone. The risks vs. benefits are pretty easy to see and we all pretty much know them whether we want to admit it or not. The "nots" would be those of us who did something stupid and we should have known better but did it anyway. How many of us can say we have the same level of understanding when it comes to our medical risks?
  • Kathryn Zusmanis
    I feel health professionals follow prescribed standards, or routine care regimens: IF-THEN algorithms for treatment, as a computer would. Are there cases where lives may be improved by a patient's health team's practice, and not the accepted algorithm? When human, artful treatment is applied outside of an evidence-based methodology, a medical practitioner may be subjected to legal consequences.
  • […] a recent post in Sunrise Rounds points out that far more people die every year because they trust Doctors than because they trust […]
  • Jacki H.
    But would a computer land a plane on the Hudson? In times of despair, I think it's important to remember the extraordinary moments in which human ingenuity and sheer grit triumphed. Sometimes tragedy strikes, and it's terrible, and it's devastating, and it overtakes everything, but there are many more pilots like the humble hero of the Hudson, who, to hear him tell it, was just doing his job. I think the analogy holds. It's been a few years, I hope you're doing well (and remember me)! I'm glad to have rediscovered your blog, it's a good read.
