Johns Hopkins Magazine
SPECIAL REPORT

Trials & Tribulation

Last summer's painful events have forced the university to confront tough questions about its program for ensuring patient safety in clinical research trials. Where does Johns Hopkins go from here?

By Dale Keiger and Sue De Pasquale
Illustrations by Naomi Shea


Last April, a Hopkins associate professor of medicine named Alkis Togias began an asthma research protocol titled "Mechanisms of Deep Inspiration-Induced Airway Relaxation." Funded by the National Institutes of Health (NIH), Togias's study, in the words of its consent form, was "to find out how the tubes that carry air into the lungs can stay open, even when we breathe all types of irritating chemicals." The project was aimed at helping scientists better understand an illness that afflicts an estimated 17 million Americans; asthma is one of only three chronic diseases in the United States with an increasing death rate. Volunteers were to inhale a chemical called hexamethonium; researchers would monitor how the volunteers' lungs responded to the irritant.

The first test subject developed a dry cough, which abated in little more than a week. The second volunteer apparently suffered no ill effects. The third, Ellen Roche, a healthy 24-year-old technician at Hopkins's Asthma and Allergy Center, also developed a cough. But her symptoms did not abate. Instead, her condition deteriorated alarmingly. She was admitted to intensive care on May 9, in respiratory distress, five days after inhaling the chemical. Within a month, despite the best efforts of Hopkins doctors, she was dead.

Roche's death, the first death ever of a healthy volunteer at Johns Hopkins, stunned Hopkins researchers. They were stunned again on July 19 when the U.S. Office for Human Research Protections (OHRP) suspended all federally funded research involving human subjects at nearly all Hopkins divisions. (The School of Public Health and Homewood, which operate under different government "assurances," were not affected by the suspension.) The shutdown halted some 2,400 protocols. The initial response from the university was anger: A media release on the day of the suspension called OHRP's action "unwarranted, unnecessary, paralyzing, and precipitous," and you can still find researchers who--not for attribution--use the words "excessive" and "disproportionate" and "bordering on unethical." Hopkins nonetheless accepted responsibility and moved forward immediately with a corrective action plan. OHRP partially lifted its suspension after five days, and in the weeks following the suspension, Hopkins cooperated with government regulators and committed to an arduous re-review of every current research protocol. A faculty committee conducted an internal review of the circumstances surrounding Roche's death, and Hopkins administrators commissioned an external review. The findings from both committees forced Johns Hopkins to confront inadequacies in its protection of human research subjects. And Hopkins administrators and faculty began a long, painful process of institutional soul-searching.

Both internal and external reviews of Togias's study criticized him for not halting the experiment after the first volunteer began coughing, and for not reporting a change in the study's procedure to the Hopkins institutional review board (IRB) that was charged with monitoring the protocol. Both reviews faulted the IRB for an inadequate review of the experiment, and for approving a faulty consent form. But the questions posed since Ellen Roche's death run much deeper than a review of procedures. Does the national system of regulatory oversight and IRBs adequately protect research volunteers, or is it outmoded and overwhelmed by profound changes in the research environment? Was the system inherently flawed from the beginning? What must change to enable vital research to continue, but with better assurance of volunteer safety?

At a town meeting last July, Edward Miller, chief executive officer of Johns Hopkins Medicine, made it clear that he believed Hopkins had to do more than just tweak its review process. "There has got to be a cultural change here," Miller said. Chief executive officers do not just toss around the phrase "cultural change." Miller challenged his colleagues to examine their fundamental approach to research on human volunteers. He exhorted them to go well beyond whatever the government required, to establish a new benchmark for human research subject protection. "We're going to have to raise the bar higher," Miller said. "There can't be any slippage. None."

The summer of 2001 proved to be a bad few months for Hopkins. The June 2 death of Ellen Roche and OHRP's clampdown in July created the first in a series of damaging national headlines. Later that month, publications in India reported allegedly improper research on oral cancer patients there; the principal investigator on that study was a professor from the Krieger School of Arts and Sciences. A Hopkins faculty investigative committee later determined that the professor had not sought the mandatory federal and university approvals for the experiment, nor conducted adequate screening tests on animals; the professor was sanctioned by the university, but may appeal. Then, in August, there was another flurry of stories when the Maryland Court of Appeals overturned a lower-court decision and allowed a pair of lawsuits against the Kennedy Krieger Institute, a Hopkins affiliate, to proceed. The suits, filed by two mothers who had participated in a study on the abatement of lead paint in Baltimore inner-city housing, alleged that Kennedy Krieger had been negligent in failing to warn them about risks to their children. OHRP subsequently opened an investigation of the Kennedy Krieger study. The university presented strong evidence in the project's defense (and the court later modified its decision), but the upshot, nonetheless, was more bad headlines.

It was little consolation that Hopkins was but the latest major institution to face serious questions concerning the safety of research subjects. In May 1999, federal regulators temporarily closed down research at Duke after that institution failed to respond to requests for proper monitoring of human subject volunteer safety. At the University of Pennsylvania, an 18-year-old named Jesse Gelsinger died in September 1999 from drugs administered as part of a gene therapy study; the Food and Drug Administration (FDA) put a hold on all other gene therapy trials at Penn, and the university subsequently decided to do no more gene therapy testing. Last March the Seattle Times reported that more than 20 patients had died in flawed experiments at Seattle's Fred Hutchinson Cancer Research Center, experiments conducted by researchers who had substantial financial stakes in the outcome.

As regulators, review committees, and administrators considered what had transpired at Hopkins, some themes emerged. Hopkins had instituted a system of review of research protocols that, in the words of the external review committee, made a serious incident more likely to occur. Many researchers at Hopkins, said the report, "believe that oversight and regulatory processes are a barrier to research and are to be reduced to the minimum." And the external committee noted an adversarial relationship between Hopkins and government regulators. The conclusion to the external review report began with the following: "This unfortunate incident highlights some defects that seem to be particular to Hopkins."

"Medical progress is based on research which ultimately must rest in part on experimentation involving human subjects," notes the Declaration of Helsinki, a landmark document first developed by the World Medical Association in 1964. Indeed, clinical trials have been the linchpin of major medical breakthroughs. Think of the biggest advances of the last century--from the Salk polio vaccine to the drug "cocktails" currently holding AIDS in abeyance among those who are HIV-positive--and you must credit the millions of people whose willingness to serve as test subjects made such breakthroughs possible.

At Johns Hopkins there are hundreds of desperately ill people enrolled in clinical trials, and hundreds more clamoring to be accepted into new studies. Many of these patients suffer from terminal diseases that have no effective treatments or cures; for them, the clock is ticking. Clinical trials often hold out their only hope (see Hope for a Cure). Patients with pancreatic cancer, for instance, are eager to enroll in a gene therapy study being conducted by Hopkins oncologist Elizabeth Jaffee. Pancreatic cancer is fast-moving and deadly; it is typically diagnosed at an advanced stage, by which point patients have just three to six months to live. Though her gene therapy "vaccine" has yet to be proven, Jaffee says, "the point is, there's nothing else out there for pancreas cancers. These people don't have a lot of options."

Clearly, such patients are vulnerable. Vulnerable, too, are other groups of people who regularly serve as human research subjects: children, those from "at-risk" populations such as drug abusers or the homeless--and even healthy subjects, like Ellen Roche, who volunteer in part to help scientists better understand the basic science behind disease (see In the Name of Science).

What Is a Protocol?
A research protocol is a blueprint, laying out the objectives and purpose of a study for review by the institutional review board. In it, the principal investigator must explain the criteria that will be used for selecting human subjects and offer a detailed description of the study design. Also included: a discussion of dosage (when a test drug is involved), as well as a description of clinical procedures or lab tests that will be used to monitor the effects of the drug and to minimize risks. A copy of the study's informed consent form is also included.
Ensuring the special protection of all research subjects is the responsibility of each IRB reviewing protocols funded in whole or part with federal funds. Any institution receiving federal research dollars has to submit research protocols to an IRB. Most research centers like Hopkins create their own IRBs, composed of scientists and clinicians from within the institution, plus at least one lay representative from the community at large, often a member of the clergy. An IRB must have at least five members but typically includes 15 or 20 people, with expertise ranging from pharmacology to psychiatry to oncology, from neurology to nursing to immunology. An IRB's prescribed function is to review every proposed experiment (though some very low-risk research is exempt) to ascertain that it has scientific merit, conforms to federal regulations (if the federal government has funded the research), has sufficient safeguards to protect the safety of volunteer test subjects, and is ethically sound. An IRB must approve any protocol before it can be carried out. (Currently, federal regulation does not require IRB review of privately funded protocols, unless they are subject to FDA regulation. The Hopkins policy, however, has required IRB review of all protocols, regardless of the funding source.)

For years, critics have voiced concerns about the IRB system. In March 1996, the General Accounting Office noted the heavy workloads faced by IRBs and questioned the thoroughness of their reviews. In mid-1998, June Gibbs Brown, the inspector general of the U.S. Department of Health and Human Services, issued a report titled "Institutional Review Boards: A Time for Reform," which stated, "The effectiveness of IRBs is in jeopardy." The report noted that after approving a research protocol, IRBs conducted minimal continuing review and were subject to conflicts of interest (for example, if one of the reviewers owns stock in a pharmaceutical company sponsoring a particular protocol). An article by the Human Research Ethics Group at the University of Pennsylvania Health System, published in the December 9, 1998, issue of the Journal of the American Medical Association, said, "These regulations last underwent major revision in 1981 and have remained unchanged despite significant changes in the nature of clinical science, the financial sources of research support, and the institutional environment in which clinical research is conducted." In April 2000, the inspector general issued a follow-up report that noted some progress, but pointedly stated that "overall, few of our recommended reforms [from the 1998 report] have been enacted."

Says Peter Lurie, deputy director of Public Citizen's Health Research Group, "The fundamental flaw or limitation of IRBs is that it's always been the researchers who are in effect regulating themselves. Some people paint that as a strength--'Who else has this kind of information?' But we need to start exploring new models that reduce that conflict. There is already a move in this direction with more lay people on IRBs. One place to start getting more people is from sister institutions or neighboring institutions. But right now, most IRBs emphasize the 'I'--there is too much 'I' and not enough 'R.'"

Greg Koski shares concerns about the current review system. Koski, an associate professor of anesthesia at Harvard, was named the first director of OHRP in June 2000. This oversight agency was created when NIH's Office for Protection from Research Risks was moved to the Department of Health and Human Services and renamed. The move was ordered in part because of questions about OPRR's ability to conduct objective, disinterested oversight when it was reporting to the very agency that funded the research.

Says Koski, "The process for protecting subjects should not be an impediment to doing research. But we have to recognize where our priorities are. If we as a society are going to look to science for these benefits, and accept these benefits knowing that the only way we can get them is if we actually use people as subjects, then we have a moral and ethical obligation to make sure that we are looking out for their interests and well-being and rights. That's got to be the first priority."

Koski believes that the review process, which was formulated in the 1970s, contained a fundamental flaw from the start: "My own feeling is that we set divergent courses at the very beginning, by almost arbitrarily saying that the protection of subjects will be the responsibility of these institutional review boards. The scientists will get the money and do the research, but the IRBs will protect the subjects. It makes an assumption that the responsibility for the protection of human subjects lies with these review committees. The correct notion, in my mind, is that the responsibility is just as much the investigator's as the IRB's.

"To make it even worse," Koski adds, "because we established the IRBs as part of an administrative process that was viewed by many as either an underfunded or unfunded federal mandate, there was a natural tendency to put the minimum amount of resources into this 'administrative process,' to meet the minimum regulatory requirements instead of doing the right thing, which is to say: This is not an administrative process. This is the foundation of trust on which we do human research."

Hopkins senior administrators acknowledge Koski's point about commitment of resources to IRBs. Institutional review has become subject to the same pressures exerted on all administrative processes: Move the paper through the pipeline and stay lean. This pressure, exacerbated by the advent of managed care, which forced institutions like Hopkins to do everything possible to contain costs, ran head-on into the phenomenal burgeoning of research during the last decade.

Since the mid-'90s, government funding for research at Johns Hopkins has increased at a rate of 15 to 18 percent per year, totaling some $298.5 million in awards in fiscal year 2001. Funding from commercial sources has increased as well, totaling close to $50 million in fiscal 2001, while "other" funding sources (such as the American Heart Association or other disease societies) totaled $60.6 million in fiscal 2001. All this funding--totaling $408.9 million at Hopkins in fiscal 2001--has meant a surge in the workload of IRBs just when institutions were forced to do everything they could to contain administrative costs.

At Hopkins and elsewhere, approximately one third of the money awarded for grants and contracts goes to support the administrative and facility costs of doing research (formerly known as "indirect costs"). It is from this pot that money to support the work of IRBs is drawn. But at Hopkins, IRB budgets did not keep pace with the increase in available income. Between fiscal 1993 and 2001, indirect cost funds increased at twice the rate of funding for IRB budgets, according to Michael Amey, assistant dean for research administration at the School of Medicine.

By mid-2001, there were some 2,400 active protocols on the books at the School of Medicine in East Baltimore and at the Bayview campus, with researchers submitting an additional 80 per month. To review all these protocols, there were basically two IRBs in place--one in East Baltimore and the other at Bayview. One more IRB had been approved for East Baltimore and had just begun operating by the time OHRP shut down Hopkins's federally funded research involving human subjects.

The upshot was a single East Baltimore IRB that had to examine protocols coming in at a rate of nearly three per day. Each protocol contains 30 to 90 or more pages of highly technical material requiring all manner of expertise to fully understand. IRB members were supposed to read, comprehend, and critique these documents, meet regularly to discuss them, and finally judge whether they were valid scientific pursuits that satisfied all ethical and regulatory requirements.

Lewis Becker, professor of medicine, has served on IRBs for 20 years and currently chairs one of the Hopkins boards. "In a typical week I'd review perhaps 15 or more protocols," he says. That review, plus subcommittee and full IRB meetings, would consume approximately 10 hours per week, he estimates, on top of 40-50 hours of other work: "I do both basic and clinical research. I'm the principal investigator for a program project grant. I'm the principal investigator for at least a couple of other NIH grants. I see patients. I do attending, teaching." And except for the board's chairman, members were unpaid for their IRB work and skeptical of its value for their prospects of promotion. Noted the 1998 JAMA article: "Some IRBs report growing difficulty in attracting dedicated faculty to what is too often a largely thankless job."

No one disputes the outcome at Hopkins: The volume of protocols overwhelmed the system. By mid-2000, for example, the backlog of un-transcribed minutes from IRB meetings stood at 18 months. Chi Van Dang, vice dean for research, acknowledges the staggering burden: "That was one of the things that we sort of knew. But the culture here was, 'We feel that we can take it on our shoulders until our knees buckle.'"

Phase 1 Clinical Trial
Marks the first test of a drug in humans and is limited to relatively few subjects (20-80). Used to investigate dosage and toxicity, not for testing efficacy. Most often uses patients as volunteers, not healthy subjects.

Phase 2 Trial
Used to test efficacy and obtain additional data on safety of a drug. Involves a limited number of subjects (200-300).

Phase 3 Trial
An expanded trial, often enrolling several thousand subjects, that is designed to gain additional evidence of efficacy.

Phase 4 Trial
A post-marketing study of an FDA-approved drug that is aimed at gaining more information (such as elucidating the incidence of a specific adverse reaction).
Source: University of Rochester Medical Center

Hopkins had tried to make the process more efficient by creating its own review system. Each IRB member was still expected to read every protocol. But he or she would then forward comments to a subcommittee of the IRB--a small subset of faculty members who would collect and discuss them at a weekly meeting. Only those issues that the subcommittee deemed significant would be put before a meeting of the full IRB, which occurred every other week. Unless someone had expressed serious reservations about a protocol, it was not discussed by the full committee--a situation at odds with the discussion that OHRP had presumed was taking place.

Dang says, "We thought we had a pretty good system. It actually had three levels of review: the whole committee writing comments, the subcommittee, and then you bring it back [to the whole IRB] for significant issues. We thought, 'That's pretty good.' I've attended these meetings and people are very diligent when they have real issues to discuss. We thought everything was in order."

The external review committee that examined Hopkins in the aftermath of Ellen Roche's death differed. Its report was critical of what the committee's chairman, Samuel Hellman of the University of Chicago, calls "the idiosyncratic nature of the [Hopkins] IRB." The committee's report stated bluntly: "The protocol review process is grossly inadequate and it does not conform to current standards." It also said that "the Hopkins system ... results in never having anyone with the explicit responsibility to conduct a thorough review of a specific proposal. The Hopkins system limits, by its design, active discussion by the full committee, and loses the expertise that committee members bring to the review."

Late last fall, after Hopkins IRB members had nearly completed the arduous task of re-reviewing all protocols, administrators realized the extent to which Hopkins procedures had indeed been inadequate. Says Dang, "If our system was perfect before, you would expect that the concerns raised by the committees on the re-review [would be minimal]. The fact is, there are some concerns, whether minor or major, with up to about half of the protocols. I have to emphasize minor and major. Sometimes they could be very minor." IRB head Becker says that in his view, some changes were minor to the point of being foolish, like redoing hundreds of consent forms so that a payment to a volunteer is listed under "costs and compensation" rather than "benefits." Dang notes, however, that "they are issues that the investigator has to address. The data suggests to us that we really have some work to do, no doubt about it." The re-review also turned up numerous inactive studies that had remained on the books, eating up administrative resources.

IRB Checklist
Committee members serving on institutional review boards (IRBs) address the following questions in reviewing protocols:

Does this protocol have scientific value?

Does the protocol have scientific validity?

Does the study have a valid scientific design and yet pose an inappropriate risk for subjects?

Are risks to subjects minimized?

Are the risks to subjects reasonable in relation to anticipated benefits, if any, to subjects and the importance of the knowledge that may reasonably be expected to result?

Is the selection of subjects equitable?

Are additional safeguards in place for subjects likely to be vulnerable to coercion or undue influence?

Will informed consent be obtained from research subjects or their legally authorized representatives?

Is there adequate provision for monitoring the data collected to ensure the safety of subjects?

Are there adequate provisions to protect the privacy of subjects and to maintain the confidentiality of data?

One thing is for certain: The process for reviewing clinical research protocols at Johns Hopkins has already begun to change radically. "This is an opportunity to really restructure and rebuild our process," says Dang. The goal, as required by OHRP, is that every protocol will be substantively reviewed at a convened meeting of the full IRB. But how to get there? First, by increasing the number of permanent IRBs so that committee members can devote more time to each protocol.

To keep the workload manageable over the long term, Hopkins will operate three or four permanent IRBs in East Baltimore and two at Bayview. Some protocols, Dang notes, will continue to be outsourced to the Western Institutional Review Board, a commercial external IRB that Hopkins first began using last fall to help review new Hopkins protocols while Hopkins IRBs completed the re-review process. Says Dang, "The Western IRB is very good at doing multicenter, industry-sponsored trials in a very efficient way, dealing with hundreds of subcenters."

Hopkins also will boost the "front end" of the review process with additional earlier layers of scrutiny, or prescreening. One proposal would have researchers submit their protocols to some form of academic review, at the department or divisional level, before the protocols get to the IRB. The thinking: the more expertise brought to bear in examining a proposed project, the greater the chance that safety problems will be spotted, flagged, and corrected. External review committee chairman Hellman, of the University of Chicago, notes that discussion of "whether the investigation is important, whether the investigation can result in meaningful answers, requires specialists who aren't regular members of the IRB." Says Hellman, "I believe that is most effectively done at some closer level"--by colleagues who are most familiar with the particular field in question.

This could address a concern voiced by researchers such as Lawrence Appel, a Hopkins associate professor of medicine who studies hypertension (and who has served on IRBs at Bayview and the Veterans Administration). He notes that almost every IRB member is indeed an expert--but often not in the field that pertains to the protocol under consideration. "If an IRB is dealing with a common medical condition like hypertension, there will be many experts (internists and cardiologists) who would be astute reviewers. But for less common medical problems, IRBs may not have the expertise to evaluate, in depth, the science."

For studies involving drugs for which the FDA does not require an IND (an investigational new drug application, which must be filed with the FDA for any research using a new drug), researchers are being required to collaborate with a Johns Hopkins librarian and pharmacist on literature searches for previous studies involving the drug. Such searches can be hit or miss, even in this age of information technology. Results vary widely depending on which keywords or parameters are plugged in, and widely used databases such as PubMed only index journals back as far as about 1960. Alkis Togias's PubMed search failed to turn up several 1950s studies that linked hexamethonium (administered through other means) to lung problems, studies that he found only after Roche became ill. (It would later come to light that researchers who led a 1978 study at the University of California at San Francisco--which Togias cited as evidence that inhaling doses of hexamethonium was safe--failed to report that two healthy volunteers fell sick after inhaling the drug, one seriously enough to be hospitalized.) Neither the internal nor external committee faulted Togias for his literature search. The external review said his search had been "reasonable and consistent with most institutions' standards," and the internal review noted that though an earlier edition of Fishman's Pulmonary Diseases and Disorders mentions hexamethonium toxicity, the current one does not. Nevertheless, the hope is that bringing additional expertise to the literature search process will increase the odds that potential side effects of a drug or substance will be spotted.

To further fortify the front end of the review process, Dang says, Hopkins will hire new administrative staff specially trained in federal regulation and informed consent. They will work closely with researchers to make sure their work complies with government rules, and to ensure that consent forms are clear and complete in spelling out the purpose, risks, and benefits of the research. Michael Klag, who holds the newly created position of vice dean for clinical investigation, says the goal is to have all protocols arrive at the IRB in better shape, so committee members can "focus the discussion on the ethical issues, the controversial issues," rather than have them "get lost in the quagmire of checking for forms and asking, 'Does this meet the regs?' "

Dang is also working toward making the entire review process electronic, eliminating the thick piles of paper that currently accompany each protocol. The time savings should be substantial. Currently, a protocol under IRB review must plod its way, consecutively, through other institutional committees--radiation safety, for instance--some of which meet only once or twice a month. From start to finish, the entire IRB review at Hopkins can take six months. Dang envisions an electronic system that would enable the players in the review process to work on parallel tracks. A researcher would submit her protocol via the Web to the front-end specialists, who would address various compliance issues, then forward it to all the necessary committees. Protocols could be tracked easily, and questions or changes addressed much more quickly, thereby eliminating unproductive downtime. At meetings of the IRB, committee members would be able to work through each protocol electronically; once the meeting is finished, the minutes would be immediately available.

The changes Hopkins leaders have in mind won't come cheaply. Administrators have no estimate of the price tag for new technology and increased personnel. Some costs can be absorbed through the sponsoring agencies themselves.

Beyond that, CEO Miller says he is committed to investing whatever resources are necessary. The time for trying to do more with less--in terms of administrative infrastructure-- has passed, he says. "There are going to be increased costs, but what did it cost us this summer?" Miller asks, noting the tragedy of Roche's death, and the hidden costs of noncompliance--countless hours devoted to re-reviewing thousands of protocols, lost time for projects put on hold, missed opportunities. There is also, of course, the threat posed by lawsuits that could be brought by injured parties. (The family of Ellen Roche reached an out-of-court settlement with Johns Hopkins in October, for an amount that neither the family nor the university will disclose.)

The Nuremberg Code
The basis for ethics codes in research, it was published in 1947 and arose out of the Nuremberg War Crimes Trial. Some 23 Nazis, 20 of them physicians, were charged with conducting medical experiments--including systematic torture, mutilation, and killing--on thousands of concentration camp victims during World War II. Among other things, the Nuremberg Code made voluntary consent a requirement in clinical research studies and noted that risks should be minimized and not significantly outweigh potential benefits.
   In 1964, the landmark Declaration of Helsinki elaborated on the ethical principles that should guide human subjects research, noting the need for a clearly formulated protocol reviewed by an independent committee.
Integral to strengthening the review process at Johns Hopkins, many faculty contend, is placing greater value on IRB service. "If you are appointed to an IRB, it doesn't necessarily make your day," notes Curtis Meinert, director of the Center for Clinical Trials at Hopkins's Bloomberg School of Public Health. For junior researchers who are hard-pressed to attract grants and publish research in a race against the promotions clock, service on the IRB looms as a roadblock to the advancement of their careers. "You're not going to advance up the academic ranks by being the most conscientious person on the IRB," says Meinert, who has served on six IRBs, and chaired one, during his long career at Hopkins. IRB performance, he says, falls "pretty far down the list" for the promotions committee. "It's publications, publications, then service." Yet the responsibility IRB members shoulder is enormous. Notes Meinert, "If the music stops and you're in the chair, you're roasted."

Michael Klag, in his new vice dean position that was created in the wake of the OHRP shutdown, says he is committed to changing that. "We have to see [IRB membership] as a scholarly activity of peer review, not only a service. It's not just checking items off a list. It requires a knowledge base, time, and thought." Klag says he will lobby administrators to make all IRB posts paid positions. And he will push to give IRB service considerably more weight in the promotions process.

The changes currently being implemented at Johns Hopkins earned the praise of OHRP and the external committee. "The speed and enthusiasm with which Hopkins has embraced our recommendations reflects their commitment to the highest level of protection of human subjects of clinical research," the external committee noted, in an addendum to its August report.

Such changes are largely administrative, however--vital but insufficient unless researchers fully embrace the need for volunteers' safety. Hopkins administrators have delivered that message from the top: Every researcher must assume responsibility for understanding and following all federal rules and regulations. Compliance must become personal--not something left up to the IRB to handle. This is part of the "change of culture" that CEO Miller alluded to last summer.

What is Equipoise?
Genuine uncertainty on the part of the clinical investigator regarding the relative therapeutic merits of each arm in a trial. Even when an individual clinician is not in equipoise (some in the medical community may be convinced of a treatment's effectiveness while others remain unpersuaded), clinical equipoise exists so long as there is a lack of consensus in the expert community as a whole. Clinical equipoise is the justification for conducting a randomized controlled trial. Once the answer is evident, continuing the trial can't be justified.
Source: New England Journal of Medicine, 1987; 317:141-5
"In some ways," says Miller, "I'd say there's an antibody response by our faculty to following those rules and regulations, because it's thought to stifle creativity." Among medical types, elaborates Dang, "the emotional reaction is, 'You know, I became a physician not to hurt people. Why do you have these regulations? You're questioning my integrity. How can you possibly think that I'm going to do something bad?' But that's not the point. We have to get past that."

Says Miller, "It has to start at the top. Perhaps--raise my hand--there has been fault at the top for saying: 'Advance service excellence, increase volume, decrease costs, be innovative' ... and not saying: 'Do it all within the rules and regulations.' Maybe I didn't say that five years ago and I should have. Maybe I didn't appreciate it as much, either."

Many Hopkins investigators say that the events of last summer have already prompted them to think more closely about regulations. For example, Togias was faulted by the internal and external review committees for not reporting the adverse event experienced by his first test subject, who developed a short-lived cough that Togias attributed to a bug going around campus.

Says Jaffee, the oncologist working to develop a vaccine for pancreatic cancer, "I think harder now about reporting adverse events." She notes that if a cancer patient dies after completing the study, the death is reported in a yearly update. "If they die unexpectedly on a study, it should be reported immediately. I think this is fairly easy. The hard part comes in when someone on a study dies in a car accident." Jaffee says she now knows that the death needs to be reported immediately "since it is possible the study drug may have altered the person's driving abilities." Such knowledge isn't intuitive, however, she says. "As a new investigator, it is difficult to learn all of this on our own. I think much of what has happened here and elsewhere has led to the realization that continuing education processes need to be strengthened."

Hopkins has already begun to step up training in regulatory compliance, through Web-based course work, for instance. OHRP provided six hours of on-site training for Hopkins IRB members, which was videotaped for future use, and faculty can expect to see an increasing array of compliance checklists and report cards. "This is a place that is data-driven," Miller says. "When people start to see data and focus on it, a lot of things get fixed."

There will be penalties for those who don't comply, adds Medicine's dean and CEO: "There has to be some consequence of non-compliance. There will be some people who always believe that they are above the rules. The institution cannot take the risk of having one [person] bring the institution down."

The key, says Miller, lies in having everyone at the institution embrace the idea that federal regulations are in place for good reason: patient safety. "If we only call it 'compliance,' we're not going to get anywhere," Miller says. "There's got to be a buy-in that there's really value added to this. If we follow the rules, will it be safer for patients who come to us and trust their care to us, whether it's in clinical investigation, or clinical treatment? I don't really think we can separate these two, to tell you the truth. We have to have a culture in which everybody is trying to do the right thing, the right thing all the time."

For Hopkins bioethicists, who grapple daily with the tough ethical questions inherent in research involving human subjects, Miller's call for a change in culture at Hopkins has particular resonance. Like him, they see the emphasis on compliance as a necessary starting point for a broader, long-term institutional exploration of ethical issues.

"From our perspective, what we're not talking about is a change to a culture of compliance, though [compliance] is very important," says Ruth Faden, director of the Johns Hopkins Bioethics Institute. "From an ethics point of view, that's a mere beginning and utterly unsatisfying in the long run." Regulations, Faden notes, provide the parameters, but don't address the specifics. Is it ethical, for example, in studies of minimal risk to enroll children who are in foster care? Should homeless people be tapped for a study and offered $600 to participate? "There are so many questions," Faden says. "Most of the time the regulations don't tell you what to do. They tell you what's prohibited and what considerations to take into account. The real work of research ethics comes in interpreting particular cases. That requires a different kind of culture change."

The goal now facing Hopkins, says Faden, is to get every clinical researcher as engaged in the ethics of a study as he is in its methodology, so that ethics aren't considered solely the domain--and responsibility--of the IRB. Just as Hopkins scientists work at the cutting edge of their disciplines, Faden says, "investigators should also be at the cutting edge of the social and ethical debate about their work." In pediatrics research, for example, debate revolves around defining "minimal risk." Should a project in which children undergo an MRI exam, for instance, be considered minimal risk?

Faden wants to provide ample opportunity for investigators to consult with Hopkins ethicists (there are currently 22 faculty members affiliated with the Bioethics Institute) and she would like to see ethics training launched early on. The institute is recommending, for example, that PhD candidates be required during oral exams to address research ethics specific to their field.

"This is an opportunity for us to look within and decide what kind of research institution we want to be," says Gail Geller, an associate professor of health policy and management who is on the institute's faculty. "We're kind of on the precipice. I'm hoping we don't come down on the side of more i's to dot and more t's to cross."

Geller offers the issue of informed consent as an example. There's the potential to become merely legalistic--Does the wording on the form meet federal requirements?--to the exclusion of important larger issues. When and how was the consent obtained? By whom? Was a cancer patient solicited to sign on to a study, for example, two hours after she learned her case was terminal?

"The [signed] consent form is a reflection that you have complied with the rules, you have done what you are required to do, but the signature on the form does not necessarily mean the person has understood or agreed" to what is being described, says Geller. In the case of Ellen Roche, both review committees concluded that the consent form was inadequate.

What about the ethical implications surrounding the widespread practice--at Hopkins and elsewhere--of relying on university employees and students to serve as healthy research subjects? In its August 8 report, the external review committee raised questions about "subtle coercive pressures," noting that staff members often are compensated and also given time off during the workday to participate in protocols. Unlike sick research volunteers, who often hope for therapeutic benefit from a trial, many healthy volunteers are motivated by the value they place on science and medical progress. And, notes Faden, "You'll find those values disproportionately among students and employees of research institutions. It's a Catch-22." In response to such concerns, Hopkins President William R. Brody established a committee, chaired by Faden, to develop a policy to guide staff and student participation in research protocols.

Faden, who chaired the national Advisory Committee on Human Radiation Experiments during the mid-1990s, began work on her new committee at Hopkins by surveying other major research institutions to see what policies they have in place. Their response: "Tell us what we should do!" Says Faden, "There's little in the way of clear guidance coming from anybody. If we do this right, there will be other large academic institutions who look at what we've done to see whether it makes sense for them." In other words, Hopkins will establish the new benchmark.

Faden doubts her committee--which includes the School of Nursing's Karen Haller; cognitive scientist Michael McCloskey, from Arts and Sciences; and Medicine's Gary Gerstenblith--will suggest an outright ban on using staff and faculty. Its work is more nuanced: coaxing out the conditions under which it is morally acceptable for healthy students and staff to be offered opportunities to serve as research subjects.

In this endeavor, and in the broader effort to increase ethical awareness throughout the institution, Faden is optimistic--an optimism born of Hopkins's track record. "When we take on things, we tend to do it really big and really right," she says. "I think we're poised to do that here."

Would all these changes now under discussion, and all the reforms already implemented, have made a difference in the case of Ellen Roche? One could argue that a better literature search might have turned up the early studies linking hexamethonium to lung problems--or that Togias might never have undertaken his project at all, had the California researchers reported that two research subjects fell ill after inhaling hexamethonium in 1978. Perhaps a greater administrative emphasis on regulatory compliance would have prompted Togias to report the adverse effect on the first test subject who developed a cough, and to suspend the study. Perhaps, ultimately, a thorough discussion of the project by the full IRB would have raised questions about any one of these issues. Perhaps, perhaps, perhaps. No one will ever really know. The external review committee stated that Hopkins's policies and procedures had made a serious incident more likely, but "the tragic outcome may have been unavoidable."

Danger accompanies research. There's no way out of it. But Johns Hopkins has acknowledged that it can do more to make research safer, within its walls and at institutions nationwide. In his town meeting remarks, Edward Miller said that the best memorial to Ellen Roche would be to devise a foolproof review process. "Foolproof" may be beyond human capacity. But significant improvement, administrators say, is not.

Miller surveys all the work that has gone into the re-evaluation process at Hopkins, and all the work that will have to go into creating a new, better system, and says, "I think it's not an easy challenge. The faculty is under pressure from a variety of sources. But we just have to do it, because the consequences are too great. It's our reputation. If we can't live up to the reputation that we have established, then we might as well pack it up and turn the lights off. The public demands no less of us, and we should deliver no less than they demand."

Special thanks to Joanne Cavanaugh Simpson, who contributed to the reporting of this article.

