The introduction and effective use of penicillin in the early 1940s is often described as one of the most important moments in the history of medicine.14 This turning point in medical treatment gave physicians the capacity to intervene routinely in the “natural process” by eradicating lethal infection. The effect was to elevate the “science” over the “art” of medicine and propel it on its way toward conquering disease and zeroing in on death as the enemy.15
Over the next two decades, other remarkable inventions and discoveries followed, such as the advent of implantable pacemakers. By the early 1960s, dialysis treatment had expanded from a treatment for acute renal disease to include treatment for those with chronic conditions. By the early 1970s, it had become a benefit within the Medicare program.
As diagnostics and treatments improved, so did attempts to surgically replace or mechanically repair failing organs. Transplantation of vital organs, such as the kidney, lung, liver, and heart, made significant strides during the 1960s, and by 1968 the Uniform Anatomical Gift Act was in place. Life-extending tube feedings for children, introduced by Dr. J. Rhoads16 and colleagues in 1968, provided further evidence of medicine’s capacity to intervene on behalf of dying patients and to complicate end-of-life care decisions. These advances raised additional questions about the appropriate time to retrieve organs from donor patients. In 1968, after seven months of deliberation, the Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death17 argued that a new definition of death was necessary to replace the previous one, based on the permanent cessation of heart activity, because resuscitative efforts had made a beating heart a false and untenable sign of life. “What immensely complicated the issues surrounding brain death was the fact that, although the patient was no longer viable, the patient’s organs were.”18
Within the next five years, doctors in England coined the term “persistent vegetative state”; the initial descriptive account of the syndrome, by Drs. Jennett and Plum,19 appeared in The Lancet in 1972.
At about this same time, state legislatures began weighing in, formulating policy and adopting statutes that defined at what point death actually occurred and what obligations providers owed to patients being sustained by “artificial life support.”
In early 1973, the American Hospital Association issued a patients' bill of rights.20 “Although the 12-point bill was vague and general, it was the first such document and included many basic concepts of patients' rights, such as the rights to receive respectful care, to be given complete information about diagnosis and prognosis, to refuse treatment, to refuse to participate in experiments, to have privacy and confidentiality maintained, and to receive a reasonable response to a request for services.”21
As a result of the explosive growth in medical technology during the 1950s and 1960s, moral theologians, thanatologists, and philosophers began grappling with the social impact of these “advances.”
In 1959, Herman Feifel edited the seminal work The Meaning of Death, which revealed the differing attitudes of doctors and patients toward death, finding that many patients welcomed honest discussions about death and their conditions despite physician reluctance.22 By far the most widely known and referenced document on the ethical dimensions of life-sustaining treatment was authored by Pope Pius XII in response to a question put to him by anesthesiologists at an international congress in 1957.23 The allocution, which draws ethical distinctions for both professionals and patients on issues of ordinary and extraordinary care as well as excessively burdensome treatment, provided guidance for most Catholic and many non-Catholic health care institutions during the latter half of the century.
In the late 1960s, medical and ethical interpretations of the scientific “advances” were coming into focus as the Euthanasia Educational Council developed the first “living will” in 1967.24
In 1975, America experienced its first “right-to-die” case. Karen Ann Quinlan, a 21-year-old coed in a drug- and alcohol-induced coma, became the subject of a court case pitting her parents against the hospital and a state-appointed guardian. After failing to persuade the Superior Court to allow their daughter to be removed from the respirator and die naturally of her injuries, the family appealed and received a favorable ruling from New Jersey’s Supreme Court. As a result of that decision, Quinlan’s father was appointed guardian and the family was allowed to make the determination to withdraw the respirator. By that time, however, Karen’s condition was such that, once weaned from the machine, she was able to breathe on her own. She died nine years later from the effects of pneumonia, at the age of 31.
In a memoir, her mother writes:
Karen became the symbol of abuse of technology in this technological age. She gave both fields, law and medicine, a case they could not avoid. She gave the public an issue that was pertinent to their lives.
For the first time in history, people were made aware of the decisions that had to be made. Moreover, Karen's situation showed us all that what happened to her could happen to anyone at anytime.25
The Quinlan case was the first of many, as Americans came to grips with the real issues of advancing technology, life threatening illness, and the fates of patients whose wishes were unknown.
In an effort to address many of the complicating issues resulting from the progress of science and research, the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research published its reports during the early 1980s. Among other issues, the Commission addressed the difficult topics of defining death, patients with permanent loss of consciousness, the withholding and withdrawing of life-sustaining treatment, and the importance of advance directives.
The Commission issued two reports on these subjects, in 1981 and 1983.26, 27 Congress specifically assigned the Commission to start with “the matter of defining death” because of the decade-long debate on the subject and because “a broad agreement existed on how it should be resolved both medically and legally.”28
In describing the work of the Commission some years later, Alexander Morgan Capron, its executive director, wrote:
The commission brought together groups whose competing statutory proposals had stymied action in most states, orchestrating agreement on a uniform proposal, which was then adopted generally across the country. This facilitated the leading medical authorities on the subject to promulgate what was recognized as the accepted medical criteria for declaring death.29
As an offshoot to this assigned topic, the Commission decided to undertake another large study on the situations in which patients, families, and physicians must decide whether to forgo life-sustaining treatment. Medical thinking, case law, and public awareness on this topic were all rather rudimentary at this time. "Living wills" had been around for about 15 years but few people had them and only 15 states had "Natural Death" statutes authorizing the use of "directives to physicians." Moreover, most people -- including many health care providers -- operated from the assumption that it was wrong (and even illegal) ever to discontinue life-support, perhaps even when the patient's wishes to do so were known. Drawing on the best ethical and legal analysis, the Commission articulated why this was not the case, provided a framework for hospital ethics committees (which were just being widely instituted), and urged states to formulate and adopt durable power of attorney for health care statutes (which nearly all of them did over the decade that followed).30
Nearly a quarter century has passed since the publication of those reports; yet, estimates now suggest that only about 30 percent of adult Americans have initiated documents appointing proxy decision makers.31
The social consensus, or “generally adopted agreement,” cited by Capron that emerged from 30 years of medicine, religion, and ethics was finally codified in the Patient Self-Determination Act, passed by Congress in 1990.32 The obligations of health care providers were clearly spelled out, requiring health care institutions to:
- Provide to patients, at the time of admission, a written summary of:
  - health care decision-making rights (each state has developed such a summary for hospitals, nursing homes, and home health agencies to use); and
  - facility policies with respect to recognizing advance directives.
- Ask about an advance directive and document that fact in the patient’s medical record (if yes, it is up to the consumer to ensure the institution receives a copy).
- Educate institutional staff and community about advance directives.
- Never discriminate against patients based on whether or not they have an advance directive. Thus, it is against the law for the institution to require any patient either to have an advance directive or not have one as a condition of treatment.33
Since the early 1990s, health care educators, palliative care and hospice practitioners, and end-of-life coalition leaders have worked to educate Americans about the importance of making their wishes known and naming someone to speak on their behalf when they can no longer speak for themselves. While the “right to refuse” treatment has become a fundamental element of end-of-life decision making in the United States -- unequivocal in cases where the patient clearly sets it forth or, by substitution, through a named proxy -- the citizenry remains conflicted about “when” it is proper to refuse treatment in cases where levels of consciousness and disability are questionable.
Since the U.S. Supreme Court ruling in the Nancy Cruzan case in 1990, which allowed for the withdrawal of nutrition and hydration from those in vegetative states, little has changed in the public policy arena. Despite multiple challenges over a number of years, and especially during the protracted Terri Schiavo case in Florida, the consensus has held.
Americans have reached consensus that: (1) people have a right to refuse life-sustaining medical interventions; and (2) interventions that can be terminated include artificial nutrition and hydration. The one unresolved issue is how to decide for mentally incompetent patients. Only about 20 percent of Americans have completed living wills, and data show that family members are poor at predicting patients’ wishes for life-sustaining care. Despite court cases and national consensus that these are private and not legislative matters, the Schiavo case is unlikely to change practices except to increase the number of Americans who complete living wills.34
So, if the consensus is clear and the benefits of naming a proxy are known to Americans, why have more adults not completed the documents? Much of the answer lies in many Americans’ reliance on the prevailing reality of default surrogacy: they depend on their loved ones and health care professionals to make these decisions for them.
Most health care consumers trust their loved ones and their medical practitioners.35 Even those without family trust their providers.36 The issue is not a political one to be resolved in the courts, and evidence points to the fact that providers are unlikely to change their practice of withholding or withdrawing treatments once they have made a clinical determination to do so.37 However, clinical practice improvements may benefit patients and proxies alike by translating complicated medical terminology, survival assessments, and the realities of living with advanced disease states. Discussion and treatment techniques, such as time-limited treatment options and focusing on treatment goals rather than treatment modalities, hold some hope of helping to ensure timely end-of-life decision making.38
What remains central to resolving this dilemma is how to engage Americans in meaningful dialogue about these issues. Such dialogue must be timely, culturally sensitive, and sufficiently customized, yet uniformly recognized as grounded in the principles of autonomy and informed consent. It must also include the clinical practitioner in a more holistic discussion of the benefits of treatments and care, and it must include a more descriptive and realistic notion of the medical benefits of palliative care. The intersection of the medical and social dimensions of end-of-life decision making is the core of the problem at hand. They are interrelated and codependent.