

SCIENCE, PRECAUTION AND FOOD SAFETY:

HOW CAN WE DO BETTER?

 


A Discussion Paper for the U.S. Codex Delegation



By

Edward Groth III, PhD

Senior Scientist

Consumers Union of U.S., Inc.


Yonkers, New York





February 2000






Introduction and Background


An international debate is under way, in various committees of the
Codex Alimentarius Commission, on the use of precaution in making
food safety decisions. The debate about precaution is part of an
important larger discussion of what Codex calls “Risk Analysis,” the
principles and concepts that define its approach to decision-making.
Codex has been working for a decade to spell out these principles, to
standardize its decision-making as much as possible and to make the
basis for decisions more transparent.1
Within the Risk Analysis framework, there is a sub-discussion on “the
role of science and other legitimate factors” in decisions. The basic
principles on which Codex operates recognize that both science and
“other factors relevant to the protection of consumers’ health and
the promotion of fair trade in foods” are a legitimate basis for
safety decisions. But to date, neither “science” nor “other factors”
has been precisely defined. Efforts to do so are ongoing at the
Codex Committee on General Principles (CCGP), and several Codex
committees that set food safety standards are working with CCGP to
define how science and other factors are applied in their own
work.


Within that broad discussion, the role of precaution as a
legitimate basis for food safety decisions has now begun to be
debated. So far, this debate has been relatively shallow. Some
governments have asserted that the so-called “Precautionary
Principle,” developed for environmental policymaking, can and should
be applied in making some food safety decisions. Other governments
(notably the U.S.) have argued that the “Precautionary Principle” is
vaguely defined, that “precaution” can be misused as a disguised
barrier to trade, that food safety decisions based on Risk Analysis
are inherently precautionary in nature, and that no additional,
separate precautionary “principle” is needed.


While the initial stage of the debate has been somewhat polarized,
there is, nevertheless, substantial agreement that precaution is a
legitimate element in food safety decisions. Where progress is
needed, it seems, is on better defining where and how precautionary
judgments are now used in food safety decisions, and how precaution
should legitimately be used. By clarifying these issues, Codex should
improve the quality and transparency of its own decisions, and could
provide a reference point to help make similar decisions by national
governments sounder, more consistent and more transparent as well.
These improvements, in turn, should help reduce the risk of unwanted
outcomes: the misuse of precautionary decisions as disguised barriers
to trade, on one hand, and unjustified trade challenges against
legitimately precautionary food safety decisions, on the other. To
help avoid costly disputes, both national governments and the
international community need better yardsticks than they have now to
measure precautionary action, so they can agree when it is legitimate
and done right, and when it is not.


This paper will examine the ways precaution is used now in food
safety decisions, and try to develop some principles about how to
make better use of precautionary approaches. While precaution is an
inherent element of many currently accepted approaches to food
safety, some approaches are much more precautionary than others. The
predominant paradigm now in use for most decisions seeks to base
decisions as much as possible on rather narrowly defined risk
assessments; explicit rules calling for precautionary actions are
rare. But Risk Analysis is certainly compatible with making
precautionary choices, and precaution is part of most food safety
decisions. As we will see, precaution is not an alternative to risk
assessment, or antagonistic to risk assessment; both are essential,
and they are interrelated in complex, inseparable ways.


A goal of this paper is to help develop a conceptual framework, an
intellectually rigorous vocabulary for thinking and talking about the
use of precaution in food safety decisions. Such a framework is
needed both as the U.S. develops its own positions on this subject,
and to help shape the ongoing international discussions.


What is “Precaution” in a Food Safety Context?


At the simplest and most general level, a precautionary approach
means taking action to protect health or the environment, before
there is conclusive scientific evidence that harm is occurring. This
concept, whether it is called a “principle” or not, has been
expressed many times in many different contexts, mostly associated
with environmental protection. The familiar definitions were not
written with food safety in mind, and they are too broad and general
to guide officials making day-to-day decisions.


At the same time, some U.S. food safety laws take a precautionary
stance toward human health risks. A common standard for what “safe”
means, in U.S. laws, is “reasonable certainty of no harm” (generally,
to public health). When a requirement for “reasonable certainty of no
harm” is coupled with a prohibition on using a substance unless that
safety standard is met, the law is inherently precautionary. The U.S.
law on food additives, for example, is precautionary in this way.


In other cases, the same safety standard can be applied in a
non-precautionary way. For example, pesticide residues and
environmental contaminants are present in foods because they are
dispersed in the environment. When safety limits are set for such
contaminants, the goal typically is also to ensure “reasonable
certainty of no harm.” But when current uses create vested economic
interests in the status quo, risk assessments may readily be biased
toward declaring current practice “safe.” Quite often, in cases of
this nature, the standard applied in practice is not “reasonable
certainty of no harm,” but rather “lack of certainty of harm,” which
is not the same thing at all. (One Swedish official at a recent Codex
meeting humorously called this the “Bodies-in-the-Street Principle,”
in contrast to the “Precautionary
Principle.”2)


Most U.S. food safety laws are less precautionary in their
approach. Until recently, for example, U.S. law governing pesticide
uses required the government to balance public health risk and
benefits to agriculture in setting safety limits for residues in
foods. Uses valued by agriculture were permissible unless they were
found to pose an “unreasonable risk” to public health. In practice,
this standard required convincing proof of substantial risk to
justify risk-reducing steps; it was the “Bodies-in-the-Street
Principle” made law. Past application of this approach in laws on the
use of most chemicals allowed dispersal of many of those chemicals in
the environment, with the result that some are now serious food
contaminants. In general, laws governing the use of foodstuffs other
than additives, and the provisions that apply to microbial risks in
foods, require the government to make a positive finding of hazard in
order to block the sale of a food.


“Reasonable certainty of no harm” ordinarily means, “based on the
available scientific evidence.” Determining “reasonable certainty of
no harm” requires both scientific and non-scientific judgments. (“No
harm” is a question for science, but what is “reasonable certainty”
is a subjective decision that scientists alone should not make.)
“Reasonable certainty” is not an absolute guarantee of safety, and
“no harm” does not mean “zero risk.” In most cases, neither zero risk
nor absolute certainty is attainable, or at least not without costs
society may not be willing to bear. The standard does mean using the
best science available to be as certain as we can reasonably expect
to be that foods, additives, ingredients and production methods will
not harm public health.


Over the last two decades, the phrase “reasonable certainty of no
harm” has taken on a new and not entirely welcome nuance. Today, the
term usually can be taken to mean, “Reasonable certainty that the
risks are no greater than a socially acceptable level.” The
determination that risks fall within the “acceptable” range is based
on quantitative risk assessment, but the determination of what is an
“acceptable risk” (e.g., a 1-per-million added lifetime risk of
cancer in an exposed population) is obviously a value judgment.


Many thoughtful citizens are not fully at ease with using risk
assessment to determine “reasonable certainty of no harm.” There are
several good reasons for this discomfort. One is that risk
assessments can be, and frequently have been, manipulated to obtain
desired results. Also, quantitative risk assessment ordinarily,
almost by definition, can be done only for comparatively
well-understood risks on which good data exist. Less adequately
understood risks may be heavily discounted, even if they are possibly
more serious than the risks for which we have good data.


Experience has also shown that risk assessors frequently tend to
equate lack of proof of harm with proof of lack of harm. Despite the
obvious illogic of that attitude, and often, despite the
precautionary bias of the food safety laws, the practical reality has
too often been that the government must prove harm in order to
curtail economic activity, rather than that those who engage in
risk-generating activities must convincingly show safety. In
practice, the “Bodies-in-the-Street Principle” is alive and well and
the basis for many decisions. The use of “risk assessment” to justify
such decisions has bred deep distrust of risk assessment as an
objective tool.


Another reason some participants distrust the current risk
analysis paradigm is that not everyone agrees with the widely used
definitions of “acceptable risk.” We all recognize that drawing lines
at, say, one-in-a-million, and declaring that risks that can be shown
by risk assessment to be smaller than that meet the legal standard of
“reasonable certainty of no harm,” is a political exercise, not a
scientific one. Many in the consumer sector, at least, believe that
the public as a whole was not asked to ratify this definition of
socially acceptable risk, especially not with respect to food safety,
which touches on deeply held, very personal concerns. If asked what
“acceptable risk” means, most consumers would probably choose a
standard like “As low as can reasonably be achieved,” or more simply,
“If a risk can be avoided at a reasonable cost, it should be
avoided.” They would not see that choice as irrationally risk-averse;
they would view it as common sense.


Put this way, the average citizen’s intuitive definition of
“acceptable risk” seems more inherently precautionary than the way
the “reasonable certainty of no harm” standard, coupled with risk
assessment, has often been used in practice.


From this brief discussion, it is clear that different food safety
laws and regulatory practices in the U.S. (and abroad) use
precautionary approaches in some cases, and less precautionary
approaches in others. Some laws allow government to prevent harm in a
strict sense, by prohibiting new activities unless they are
convincingly shown to be safe. Most of our food safety and
environmental laws give government more limited authority to regulate
risk, by setting standards to keep the risks of permitted activities
within the socially “acceptable” range. Some particularly hazardous
chemicals have been banned, after they were in wide use, but such
preventive actions ordinarily require convincing proof of actual
harm. More commonly, regulation aims to limit risk to a relatively
low level. This approach has some precautionary elements, such as use
of “safety factors” to keep permissible exposure levels well below
levels known to be harmful; but post-market regulation is less
preventive or precautionary than pre-market approval.


Often, the approach prescribed by law and regulatory practice
reflects the nature of the risks being managed. But there appear to
be opportunities to take more consistent, and more consistently
precautionary, approaches to a wide range of food safety
decisions.


A significant question with respect to the use of precaution in a
food safety context is whether setting permissible limits for toxic
or potentially harmful ingredients, additives and contaminants in
foods is an appropriate way to apply precautionary measures. Some
analysts have argued that such decisions are too narrow. For example,
they are usually concerned only with public health risks. Many food
contaminants are also environmental contaminants, and food production
methods such as chemical pest control and genetic engineering of
crops have potentially serious environmental effects, which may
outweigh their public health risks. The indirect effects on human
well-being and economic costs of ecological damage are generally
outside the scope of food safety decisions.


Some authors writing on precaution have suggested that, rather
than try to set “safer” limits for potentially harmful substances in
foods, which tend to ignore the environmental risks posed by the same
substances, we should be redesigning food production systems to
eliminate toxic chemicals and other risky practices. Rather than try
to define safe levels of exposure to myriad different potentially
toxic substances in foods, we should eliminate unnecessary uses of
chemicals and minimize exposure to chemicals in foods. A preferred
precautionary approach, they argue, is to seek least-harmful
alternative ways of achieving food quality and food production goals,
as a general, over-arching
strategy.3,4


These are worthy concepts and they deserve extensive discussion.
But I have chosen to limit the scope of this paper to the narrower
food safety context. The approaches are not mutually exclusive. As
the transition to more sustainable food production advances, it will
generate new, more preventive approaches to ensuring food safety.
And, while our society pursues that transition, we still need to make
food safety decisions, in Codex and here in the U.S. Those decisions
need to provide “reasonable certainty of no harm,” and to the extent
possible, they should be proactive and preventive. As we improve our
ability to use precaution in setting food safety standards, we will
inevitably increase our awareness of ways to build precaution into
food production systems. The two streams of progress should reinforce
each other. Learning how to make smarter and better use of precaution
in food safety decisions, then, is an important first step.


All Governments Use Precaution


Some governments speak openly about their use of the
“Precautionary Principle.” The European Commission has issued a
“commentary” on the Precautionary Principle and how it should be
used, focused largely on food safety
decision-making.5 Some recent European
decisions on high-profile food safety issues (the bans on British beef
imports, now lifted; on hormone use in meat production; and on
specific genetically modified crops) have been justified as based on
the Precautionary Principle. Actions on these issues, by Europe
vis-à-vis the U.S., or by individual European countries vis-à-vis their
E.U. trading partners, have been challenged as trade barriers (which
they unquestionably are). The key question is whether they are also
justified safety decisions under the Sanitary and Phytosanitary
Standards (SPS) agreement, which, in this author’s view, they may well
be. These challenges highlight the need for increased
clarification of how precaution is used and should be used as a basis
for food safety decisions.


Some in the U.S. have asserted that the Europeans have used the
Precautionary Principle primarily as a trade weapon; the implication
seems to be that the U.S. would never do anything like that. In fact,
the U.S. has developed many recent food safety policies based clearly
on the need for precaution. We don’t call it a “principle,” but we
have used it in much the same way, and if we were challenged over the
trade impacts, we undoubtedly would defend our decisions just as
vigorously as the Europeans have defended theirs.


Recent examples of precaution-based U.S. food safety decisions
include:


· U.S. policy on preventing Bovine Spongiform
Encephalopathy (BSE). In response to the outbreak of Mad Cow
Disease in the United Kingdom, the U.S. banned imports of British
beef. Later, the ban was extended to all European countries,
including both a few that had also reported BSE cases, and others
whose BSE surveillance efforts the USDA judged to be inadequate.
Domestically, the U.S. banned feeding of rendered protein from
ruminant food animals to other ruminants. Although some NGOs feel
this policy could be stronger, its clear rationale is to prevent
the possible spread of BSE, if BSE ever were detected in the U.S.
No BSE cases have been convincingly shown to have occurred
here; this action is purely precautionary. In recent months, flocks
of sheep in Vermont and a herd of ranch-raised elk in Montana have
been quarantined or slaughtered, because there is a reasonable
suspicion that they might harbor a TSE that might be spread to
U.S. cattle.6 There is no evidence of
disease in these sheep, no evidence yet that the TSE disease
endemic in elk can spread to cattle, and no basis for a
quantitative risk assessment in either case. The U.S. policy here
is clearly, “Better safe than sorry.”

· U.S. response to the Belgian dioxin contamination
incident. When Belgian meat and poultry were discovered to contain
dioxin from contaminated animal feeds last year, the U.S. demanded
assurances from the Belgian government that foods imported here
from Belgium were not contaminated. When Belgium could not
immediately provide that assurance, we banned the imports. The
basis was not a risk assessment showing an unacceptably high risk,
but rather lack of proof that there was not an unacceptable
risk: an inherently precautionary decision. (When we later received
solid evidence that the problem had been corrected, we lifted the
ban.)7


· Food Quality Protection Act. The FQPA, passed
unanimously by both houses of Congress in 1996, substantially
updates the way the EPA sets limits for pesticide residues in
foods. It establishes a uniform standard of “reasonable certainty
of no harm,” which replaces the previous risk-benefit-balancing
standard. It requires EPA to set standards that meet the
“reasonable certainty of no harm” test, and that protect infants
and children as well as average healthy adults. It says that, in
the absence of adequate scientific data to ensure “reasonable
certainty” that residues are indeed safe for children, EPA must
use an additional 10-fold safety factor in setting safe exposure
limits. This explicitly precautionary step gives public health
primacy over “business as usual” when data are insufficient to
show convincingly, for example, that standard safety margins will
adequately protect against damage to the developing brain. In the
past, EPA has had to postpone action on pesticide tolerances while
research to fill data gaps was conducted, dragging out final
decisions for a decade or longer. Under the FQPA, the agency must
take prompt precautionary
action.8


· Limits on Listeria. Listeria monocytogenes is a
comparatively uncommon foodborne pathogen sometimes found in dairy
products and processed meats. Although Listeria infections are
rare, they are serious, resulting in death in a relatively large
fraction of cases, and increasing the risk of miscarriage in
pregnant women. Listeria can grow at refrigeration temperatures;
ordinary hygienic practices, such as keeping opened food packages
refrigerated, are not adequately protective. Methods for
quantitative risk assessment for microbiological agents are not
capable yet of defining a “safe” level of Listeria in foods. Thus,
the U.S. has adopted a “zero tolerance” standard for Listeria in
ready-to-eat foods, a clearly precautionary policy.


Food safety is not the only context in which the U.S. government
has made decisions to protect public health based on precaution. In
late 1998, the Consumer Product Safety Commission urged companies
that make chewable plastic teething rings and toys that are mouthed
by babies to stop using phthalate plasticizers to soften the plastic.
(The CPSC charter requires it to seek voluntary action by industry
before considering a ban or other regulatory steps.) Major
manufacturers and retailers promptly agreed to phase out the
phthalates. In statements to the public, both CPSC and the industry
emphasized that there is no concrete proof that the chemicals are
harmful to babies, or that infants who chew on plastic items could
get a harmful dose. But they agreed that this change in products was
still desirable “as a precaution.”


In summary, while European governments may claim to be applying
the “Precautionary Principle,” and the U.S. has argued that such a
principle is not needed, our government has also made precautionary
decisions, which seem similar to those made by certain of our trading
partners. I have given four recent food safety examples, and a
detailed search would no doubt reveal many others.


To focus the discussions at upcoming Codex meetings, an agreement
is needed that all governments will from time to time encounter food
safety decisions that are legitimately based on precaution, as well
as on science. Codex can then begin a concerted effort to develop
consensus principles for when and how to use precaution in
decisions.


Being Scientific About Science


Some commentators on the Precautionary Principle view it as
separate from, and even antithetical to, science. Hathcock, for
example, has said the Precautionary Principle can “negate the input
of science” and “overrule risk assessment,” and allow “arbitrary”
food safety decisions whenever there is any uncertainty about a
risk.9


With all due respect, this characterization of the role of
precaution in decisions reflects fears (perhaps fed by perceptions of
the unjustified nature of certain decisions, such as the beef
hormones case) about how precaution might be misused to block trade.
But it is not an accurate description of the proper use of precaution
in food safety decisions. Not only is this view mistaken about what
precaution is, it may lead us to misperceive what science is as well.
My view is that the need for precaution and the need for scientific
rigor are inseparably connected. You can’t have one without the
other.


Everyone who participates in Codex seems to agree that decisions
must have a sound basis in science. But there has not been much
discussion of what “sound science” is. Perhaps we all think we know
it when we see it. In this author’s view, for science to be a
credible basis for decisions, science has to be treated
scientifically.


What does that mean? Isn’t science always treated scientifically,
and how else can it be treated? To answer the latter questions first,
I believe it is actually rather difficult to be scientific about
science, in the political arena. To treat science scientifically is
to be rigorous about what we know, what we don’t know, and what we
can’t know through scientific methods.


The natural focus in risk assessment is on the data we have.
However, what we “know,” scientifically, includes awareness of
questions we can pose but can’t answer with current data. There are
also typically questions we don’t know enough to formulate, beyond
the boundary of what we can “know” at any particular point in time.
An artist’s conception of the relationship between what we know and
don’t know is presented in Figure 1.


A distinguishing characteristic of much of the science underlying
food safety questions is uncertainty. Uncertainty is not simply
ignorance; there are different types and degrees of uncertainties,
and they have different impacts on our ability to accurately assess
risks. Some uncertainties are familiar: differences between laboratory
animals and humans, lack of data on actual exposures of people to
substances, and lack of toxicity data at low doses, for example. For
most of these uncertainties, risk assessors have, over time,
developed approaches that they believe compensate adequately for
knowledge gaps: mathematical models for extrapolation, default
assumptions, “safety” factors, and so forth.
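

As a minimal sketch of that safety-factor arithmetic (the 10 × 10
defaults are the conventional ones; the NOAEL value here is
hypothetical), an acceptable daily intake is typically derived by
dividing the no-observed-adverse-effect level from animal studies by
the combined uncertainty factors:

    # Hypothetical illustration of conventional safety-factor arithmetic.
    def acceptable_daily_intake(noael,
                                interspecies=10.0,  # animal-to-human default
                                intraspecies=10.0,  # variability among people
                                extra=1.0):         # e.g. an FQPA-style 10x for children
        """ADI = NOAEL divided by the product of the uncertainty factors."""
        return noael / (interspecies * intraspecies * extra)

    noael = 5.0  # hypothetical NOAEL, mg/kg body weight/day

    print(acceptable_daily_intake(noael))              # 0.05  (traditional 100-fold)
    print(acceptable_daily_intake(noael, extra=10.0))  # 0.005 (extra 10x precaution)

The structure makes the precautionary content explicit: each divisor is
a default standing in for something we do not know.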


But there are other types of uncertainties that are not accounted
for well by current risk assessment methods. For instance, we have
few data on interactive effects among the multiple residues that
occur simultaneously in diets. Consequently, risk assessments
typically look at the effects of single agents and ignore possible
interactions. Some effects, such as endocrine effects of hormonally
active agents, are plausible risks because they have been observed in
wildlife.10 But mechanisms of action,
dose-response relationships, and relationships between timing of
doses and harm are not well understood for these effects, and there
are few satisfactory bioassay methods and no consensus models for
assessing these risks in humans. Consequently, risk assessments
generally ignore them. Some analysts have argued that the traditional
100-fold “uncertainty factor,” applied in setting many safety
standards, provides an adequate “margin for error” to protect against
all these and other (unknown) risks. But many qualified experts find
that assertion less than credible.


In still other cases, such as possible unintended effects of gene
transfers in genetically modified crops, or the TSE diseases, caused
by an entirely new kind of disease agent, our basic biological
understanding of the problems is too crude to support quantitative
risk assessment. Consequently, assessments are primarily qualitative,
and we are far less sure how accurate they are than we can be with
older, more familiar risks, where experience provides a basis for
verifying and calibrating our predictive models.


Risk assessment tends to be reductionist-to simplify problems, to
treat cause/effect relationships as linear. Many scientists
(including many toxicologists) understand that health and
environmental risks are responses to changes in much more complex
systems, and require less linear, more holistic assessments. In
general, current risk assessment methods are not very good at
modeling complex systems. Making the effort to do so is one way to
highlight all the things we don’t know, but would like to know, about
a risk. However, the most common response is not to try holistic
assessments, because finding out how much we don’t know is usually
not a very satisfying answer. Decision-makers (and risk assessors)
tend to prefer questions that appear to have clear answers, even if
the result is getting a precise answer to the wrong question.


While Figure 1 probably exaggerates the
relative state of knowledge and ignorance for well-studied toxic
effects such as carcinogenicity, for some of the newer and emerging
hazards to our food supply, Figure 1 is probably
a fair representation. The accompanying quote by Schopenhauer reminds
us of the need for scientific humility. In assessing risks we need to
examine with equal rigor what the known facts show, where the
uncertainties lie, the amount and nature of the uncertainties, and
alternative interpretations of the data that are also plausible,
given what we know, don’t know, and can’t know. Assessments that lack
this rigorous attention to uncertainties are not fully “scientific”
exercises, and in this author’s view, they are not a particularly
credible basis for public policy.


Within the Codex community, while all agree that science is
essential, more discussion undoubtedly would be useful on what
exactly we mean by “science.”


Science and scientific knowledge differ from legal facts, as
established in a courtroom, or from legislative decisions, based on
majority votes. Scientific knowledge is inherently tentative.
Einstein is reputed to have once said, “A single experiment could
prove me wrong.” Truth, in science, is determined by an iterative
process of assertion, criticism, challenge and reassessment, as data
accrue. While there is often a prevailing view among experts in a
field as to the best interpretation of the evidence on a question,
well qualified scientists often favor different interpretations, and
vigorous debate among competing views is the norm. Often enough, as
evidence expands, a minority view turns out to be correct. What is
heresy this year can be orthodoxy next year, and things we believe
are true today can be shown to be wrong tomorrow, or a few years from
now. One medical researcher has mused, “I know that half of what I
teach as fact will be proved false in 10 years. The hard part is that
I don’t know which half.”11


When scientists are being scientific, they welcome debate, because
debate is the engine that drives science. They don’t mind being shown
to be wrong, because that’s how scientific truth advances: Hypotheses
are proposed, tested, rejected, replaced by better ones. Arpad
Somogyi, Head of the Health Risk Evaluation Unit in the
Directorate-General for Health and Consumer Protection of the European Commission,
has written some thoughtful recent papers on the role of science in
food safety decisions.12,13 Professor
Somogyi emphasizes that scientific knowledge is hardly ever complete,
and never static, and that it is often extremely hard to determine
what, in fact, “current science” on a particular question actually
is. This may be good news for researchers (there is always more work
to do) but it is sobering news for risk managers. It means they are
likely to get a somewhat different answer from each expert they ask
for an opinion.


When science is used as a basis for regulations and national
policies that have economic impacts, it is applied in a more
adversarial atmosphere, and it is far harder to be scientific about
science. Human and institutional tendencies are to treat scientific
knowledge more legalistically in the regulatory arena, to use it to
defend positions. The tentative nature of knowledge is ignored; what
is known is stressed, what is unknown is overlooked; debate is
curtailed; science is used to prove that an existing or proposed
policy is correct.


When the stakes are high, as they often can be when potential
public health hazards are concerned, questions that are still being
intensely debated among scientists are generally resolved
politically. A scientific “consensus” may be obtained, based on the
opinions of a small group of sometimes not-very-well-selected
experts. Once such expert statements exist, pressure grows for other
experts to “get in line.” There is a tendency to declare the
scientific debate “closed,” to act as if all the important questions
have been answered, no matter how vigorously the evidence is actually
still being debated, or deserves to be. Votes may be taken, policy
may be adopted by a majority or by consensus. The science that
supports the chosen policy is then often presented as conclusive.
Does this represent sound, science-based decision-making? No, I’m
afraid, quite often it does not.


These phenomena are in large measure unavoidable. Decisions must
be made; society can’t grind to a halt while research answers every
question that needs to be answered. As Donald Kennedy, former
Commissioner of the U.S. Food and Drug Administration, was heard to
say, “Often you have to decide when the data are not as good as you
would like.” (See Figure 2.) And by and large,
our society has a good track record of making sound, occasionally
precautionary, food safety decisions. But at least a few decisions
have been wrong. The most poignant recent example is the British
government’s chain of decisions on BSE. A risk assessment concluded
that BSE was unlikely to be transmitted to humans from cattle. That
conclusion was wrong, and 51 citizens have so far died of new-variant
Creutzfeldt-Jakob Disease, linked to eating beef from BSE-infected
cattle.14


The vast majority of food safety decisions are not mistakes, at
least as far as we can tell. But the public knows that mistakes can
be made, and that experts and science-based risk assessments can be
wrong. Consumers are skittish. They need reassurance that
decision-making principles and scientific methods for assessing risks
are as sound as possible.


One good way to make better decisions, and to avoid making
mistakes, is to be rigorously scientific about science. While that
may sound like something we think we already do, it is in fact very
difficult, especially when intensely debated policy choices are on
the line. It is very hard for risk assessors to say, “We honestly
don’t know.” And risk managers, for their part, are often frustrated
by proper scientific equivocation. Senator Edmund Muskie, the author
of most major U.S. environmental laws in the 1960s, after hearing
expert witnesses say, “On the one hand…but on the other
hand….” too often, exclaimed, “What this country needs is more
one-armed scientists!” Scientists who advise policy-makers can face
considerable pressures to offer simplified, one-sided interpretations
of the evidence. We must all remember that there is always an “other
hand,” and that sound decisions require that both hands be kept in
view.


Being scientific about science in assessing a risk enables risk
managers to define the appropriate degree of precaution in managing
that risk. The link between good science and precaution is thus
fundamental, and straightforward. By putting proper emphasis on what
we don’t know or can’t know, risk assessors can put what we think we
do know in perspective. A thorough and rigorous analysis of
uncertainties and their implications is also essential to define
needed types and degrees of precautionary measures.


Summing up, at the start of this section I cited the view that
precaution is antithetical to science. The opposite is in fact true,
and we must avoid the misleading implications of the perceived
dichotomy between “science” and “precaution.”4 Precaution is grounded
in scientific analysis, and applying precaution to decisions requires
rigorous scientific input from a range of disciplines. At times,
precautionary approaches, with their emphasis on what science does
not know as well as what is known, may in fact require more rigorous
science than risk assessment, which has been known to brush aside
uncertainties in order to answer too narrowly drawn questions.


Hathcock’s quote could be turned around: Risk assessment can also
be used to “overrule” science-based precautionary judgments. But
neither scenario represents sound decision-making. Done properly,
Risk Analysis uses risk assessment and precaution together, as
inseparable and essential components of science-based
decision-making.


Precaution and Research


Most testing in support of food-safety decisions is rather narrow
in scope and linear in concept. Controlled tests in lab animals are
typically done, to identify toxic effects and define dose-response
curves. These experiments are designed to get reliable data on the
relationship between one substance and one effect, by eliminating
confounding variables. The results of such studies typically provide
the basis for quantitative risk assessments, which are used, in
combination with precautionary steps such as applying arbitrary but
long-accepted “safety” factors, to define maximum safe exposure
levels for humans.
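

A minimal sketch of that pipeline, using hypothetical numbers
throughout, compares the animal NOAEL with an estimated human dietary
dose and checks the resulting margin of exposure against the accepted
safety factors:

    # Hypothetical illustration of a margin-of-exposure check.
    noael = 5.0            # mg/kg body weight/day, from a rodent study
    residue = 0.2          # residue level in a food, mg per kg of food
    daily_intake = 0.1     # daily consumption of that food, kg
    body_weight = 60.0     # kg

    human_dose = residue * daily_intake / body_weight  # mg/kg body weight/day
    margin = noael / human_dose

    required_margin = 100.0  # the traditional combined safety factor
    print(f"human dose: {human_dose:.2e} mg/kg/day")   # 3.33e-04
    print(f"margin of exposure: {margin:.0f}x")        # 15000x
    print("meets the margin" if margin >= required_margin else "margin too small")

Everything in the sketch assumes a single substance and a single
effect; none of the interactions or host factors described below enter
the calculation.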


While such simple experiments are necessary to isolate the effects
of the variable being studied, they unfortunately don’t represent the
“real world.” Rather than be exposed to large doses of single
substances, people, through their diets and through other everyday
activities such as work, breathing, drinking water, and using
consumer products, are in fact exposed to varying combinations of
many potentially harmful substances, usually at very low doses.
Simultaneously, humans are exposed to a variety of other stresses
that arise from physiological states, including rapid growth and
development (as in early childhood or puberty), pregnancy (both
maternal and fetal conditions), aging, and various disease
states, and to medications,
alcohol, tobacco, “recreational” drugs, and so forth. These “host
factors,” plus genetic variability, contribute to very wide
differences in the sensitivity of individuals to harm from
potentially hazardous substances in their foods.


Current toxicological test methods cannot come even close to
modeling the complexity of the actual systems that determine the
health risk from any particular food-borne chemical substance. For
microbiological agents, the link between pathogen and disease is
often far easier to demonstrate, but the effects of potential
interacting variables are just as hard to model. In fact, it is
probably fair to say that, for most food-borne hazards, it approaches
being scientifically impossible to model the actual conditions of
risk, and will remain so for the foreseeable future. This is one
reason that decisions need to be precautionary, of course, and it
suggests a need for more holistic models of “real world,” complex
risks.


To the extent that food safety decisions are based on rigorous
scientific assessments of what we don’t know about risks, as well as
on what is known, such decisions can guide research. More precise
scientific definitions of knowledge gaps and uncertainties can help
focus the search for improved methods to assess subtle, complex,
multi-factorial causative processes.


On the other hand, awareness that causality is complex and that
better models are needed is far from new. Over the past 25 years,
many expert committees defining environmental health research needs
have called for multidisciplinary attacks on this
problem.15 But progress has been slow.
If these knots could be untangled easily, more answers would be in
hand by now. Experience suggests that in addition to seeking to
design and carry out better testing and modeling, we also need to
more clearly spell out the limits of current scientific methods, and
to acknowledge that our basic condition is ignorance. For some risks
we must manage now, precaution will undoubtedly remain an important
basis for decisions for many years to come.


A realistic appraisal of the vastness of the darkness and the
feeble light research can often shine upon it is also essential after
precautionary safety decisions are made. One widely accepted belief
about precautionary decisions is that they are “temporary” measures,
and that precautionary actions should be tied to research to reduce
the scientific uncertainties. This implies that research should
produce sufficient data in a reasonable time to supplant a decision
based on precaution with one based on a quantitative risk assessment.
In fact, this expectation is written into the SPS Agreement, which
permits governments to take precautionary action in the face of
incomplete scientific evidence, but calls such actions
“provisional.”16 In an important test
case on this principle, the World Trade Organization overturned a
national decision on the grounds that research had not been done to
answer the questions that had prompted the initial precautionary
action.


It is easy to see the value of research for reassessing and, if
need be, modifying decisions (whether they are precautionary or
non-precautionary). Sometimes, research can confirm a hazard, i.e.,
demonstrate that the decision was sound. But it is also important to
have realistic expectations for what research can achieve. In many
cases, precautionary action is needed because there are questions
science can’t answer: not just because we don’t have all the data we
need, but because science currently has no valid methods for getting
the needed data. In other cases, we may have the tools we need to
measure harm, but precautionary action will prevent our knowing how
much harm would have occurred if we had not acted. In still other
cases, it is feasible to collect data, but society would be much
better off simply replacing a risk-generating activity with a safer
way to meet the same technical or economic need. Before deciding that
research must be pursued to get more precise estimates of risks of
using an inherently risky food substance, governments should be
permitted to do a comparative cost/risk analysis. If there are
lower-risk and lower-cost alternatives, research to refine risk
assessments for a substance rejected on precautionary grounds is a
bad investment. For either reason (the inability of science to
eliminate uncertainties, or the economic irrationality of research to
make a risky choice more acceptable when there are obviously better
alternatives) it can be bad policy to require research as a condition
of precautionary decisions.


Putting “Principle” Into Practice


While most governments agree that some food safety decisions
currently are and should be based on precaution, not much is agreed
to beyond that point. Especially needed are better practical
definitions of the circumstances that justify taking precautionary
action, and clearer guidelines for how to make and how to explain
such decisions.


Precaution is not a single decision invoked at the end of a risk
analysis; it is an element of multiple decisions at many stages of
the process. Precautionary decisions are not just choices to permit
or to ban a substance or activity; they come in many varieties, from
asking difficult questions, to weighing incomplete or inconclusive
evidence, to looking for safer alternatives, to setting wider safety
margins, and many other forms as well.


Notwithstanding that, much discussion of the “Precautionary
Principle” to date seems to assume a linear model of decision-making.
A common description asserts that a risk assessment should be
attempted first; if scientific uncertainty precludes a decision based
on that risk assessment, risk managers can then invoke the
precautionary principle.17


This model vastly oversimplifies Risk Analysis. When properly
done, it is an iterative process, with myriad decision points along
the way, most or all of which may call for a precautionary choice.
For instance, defining the nature of the food safety hazard, posing
questions risk assessors should answer, and developing risk
assessment policy are just a few of the steps of a Risk Analysis in
which risk managers and risk assessors (and other stakeholders)
should interact, and where precaution can come into play.


When risk assessors are instructed to assess the impacts of
unknowns and uncertainties, precaution becomes a more prominent
element of risk assessment. When risk assessment policy includes
guidelines on how to weight uncertainties, default assumptions to use
if data are insufficient, and what sorts of “safety” factors to
apply, it makes precautionary steps more explicit.


Even more fundamentally, the choice of questions that are within
the scope of the risk assessment is intimately linked to precaution.
A risk assessment that includes questions that sound scientific
insight says are relevant and important, but that are not currently
likely to have conclusive answers, will tend to support a
precautionary decision. A risk assessment that focuses on questions
that can be convincingly answered with current data is much more
likely to support a non-precautionary decision: one in which
precaution is less needed because knowledge is relatively complete,
or one based on the “Bodies-in-the-Street Principle.”


Not only are there myriad points in the decision process at which
precaution comes into play; there are also a wide variety of
different kinds of uncertainties, which may call for different kinds
and degrees of precautionary
responses.18


In practice, food safety decisions fall along a wide continuum. At
one end are the easiest, most straightforward decisions, where we
have all the scientific data we need and there is little controversy
about the soundness of decisions. At the other end are the most
difficult food safety questions. These are issues on which the
science is so incomplete and subject to legitimate debate among
experts that government decisions, no matter how much effort went
into them, can be challenged as unsupported by the evidence, while
observers with no interest in the outcome cannot tell who is right.
Between these two extremes lie many different types of decisions,
each with its own degrees and types of uncertainties.


To illustrate the nature and degree of uncertainties and
precautionary steps encountered, I will describe five more-or-less
arbitrarily defined categories of food safety decisions, below. My
point is not to argue that there are five categories, as opposed to,
say, four or six, but rather to demonstrate the variety of
circumstances that require precautionary responses, and some of the
many ways precaution is appropriately used.


My categories range from cases in which there is ample scientific
evidence and a firm consensus on “reasonable certainty of no harm”
(Type 1), to cases in which the odds of a consensus that an activity
is safe approach zero (Type 5). Figure 3
summarizes the five categories. Interestingly, precautionary steps
are not limited to the latter categories; widely accepted
precautionary measures are applied in all five categories. The
problem is not deciding which categories should use precautionary
decisions, but using them appropriately, consistently and
transparently in every category.


Type 1. Cases in which the science is straightforward
and there are ample data; we can be reasonably certain something is
safe, and permit its use, or can be reasonably certain it does not
meet safety criteria, and ban it.


Examples in this category include GRAS substances (food
ingredients that are “generally recognized as safe”), and numerous
food additives that were shown to be safe before they were allowed to
be used (flavors, thickeners, stabilizers, many preservatives,
etc.).


On the other side, the synthetic estrogen diethylstilbestrol (DES)
was banned from use as a weight-gain promoting drug in beef
production after studies showed that women whose mothers took DES to
prevent miscarriage during early pregnancy had a sharply increased
risk of a rare form of vaginal cancer. Previously, animal studies had
also shown that DES was a carcinogen, but since existing analytical
methods could not detect DES residues in foods, FDA had concluded
that there was no risk requiring regulatory action. During the 1970s,
improved analytical methods revealed residues; this advance, combined
with solid evidence of carcinogenic effects in humans, led FDA to
conclude that it had no choice but to ban DES
use.19


These examples reflect both the sufficiency of scientific data in
many cases and the use of precaution as a basis for decisions, and
show that the two often occur together. Food additives, under U.S.
law, must either be determined to be GRAS or be shown to be safe
before their use is allowed, an inherently precautionary approach.
The Delaney Clause, part of the food additive law, prohibits the
deliberate addition to foods of any amount of a substance “found to
induce cancer” in humans or animals. Although that also sounds like a
precautionary stance, the question of when something “induces cancer”
can be difficult to resolve, since the causes of cancer are complex,
and science is rarely certain about the role of single factors. In
fact, the Delaney Clause has seldom been invoked to ban a food
additive, even when there is considerable suspicion that it might
increase cancer risk (see Type 3 cases, below). The DES case,
however, was unusual, in that convincing human data supported
applying the Delaney Clause as a precautionary action in that
instance.


Type 2. Cases in which there is a great deal of
scientific data, good risk assessments can be done, and reasonably
reliable estimates of “safe” exposure can be agreed upon, but in
which precautionary measures are still justified and applied in
various ways.


The examples I have chosen in this category include regulation of
lead levels in foods, and food-additive uses of caffeine.


Lead. Three decades ago, lead levels in U.S. foods were
much higher than they are now, because lead emissions from
automobiles using leaded gasoline contaminated the food chain, and
because most food cans were sealed with lead solder, which increased
the lead content of canned foods. Lead poisoning associated with lead
paint and other sources is a major public health hazard, and massive
amounts of research have been done on exposed human populations,
attempting to establish safe exposure limits. Based on such research,
public health officials repeatedly lowered the definition of the
amount of lead in a child’s blood that was associated with measurable
adverse effects. By 1980, studies had shown that even “average”
exposure, with no clinical symptoms, was associated with some risk of
adverse effects on nervous system development, learning and behavior
in children.20


In the 1970s, the FDA determined that food lead levels needed to
be lowered, as part of an overall effort to reduce exposure to lead
from all sources. Rather than attempt to set “safe” upper limits for
lead in specific foods, the FDA pressed the canned foods industry to
take feasible steps to keep lead out of foods. The industry accepted
the need for such steps, and infant formula producers switched over
to lead-free cans almost immediately. The canned foods industry as a
whole completed this transition over a decade or so, and achieved
substantial lead reduction with “good manufacturing practices” in the
interim.21 In combination with the
EPA’s phase-out of leaded gasoline, the phase-out of leaded cans
reduced lead levels in the American diet by about 95 percent between
the early 1970s and the early
1990s.22


The phase-out of lead-soldered food cans was a precautionary
strategy. The FDA and the industry might have argued endlessly over
what was a “maximum safe limit” for lead in foods. But the
multi-source nature of lead exposure made the margin of safety for
many children too small; lead in foods contributed to an unacceptably
high overall risk, and all practical steps to reduce lead exposure
needed to be pursued. The most sensible strategy was to keep lead out
of foods to the maximum extent feasible, by removing a key source of
contamination, lead-soldered
cans.23


In the international arena, the Codex system is currently
considering the problem of lead in foods, and is taking the opposite
approach. The Codex Committee on Food Additives and Contaminants has
asked the Joint Expert Committee on Food Additives to identify safe
upper limits for lead in specific foods, and is planning to set
standards (maximum residue levels) based on JECFA’s recommendations.
This narrower approach ignores the multi-source nature of lead
hazards, will not ensure safe exposures even through the diet, and is
decidedly non-precautionary, compared to the U.S. experience.


Caffeine. Caffeine is widely consumed, in coffee and other
beverages and in over-the-counter drugs, for its stimulant effects.
It is also added to soft drinks as a flavor modifier. In the late
1970s, studies showed that caffeine fed to pregnant rats in large
doses caused birth defects. Concerns also existed about the stimulant
effects of caffeine on the nervous system and behavior of children,
who consume large amounts of soft drinks, especially in proportion to
their body weight.24 The Center for
Science in the Public Interest asked FDA to ban the use of caffeine
in soft drinks, based on these concerns.


Subsequent research on women who drank large quantities of coffee
while pregnant did not confirm a risk of birth defects, and FDA
ultimately decided that the safety margin for food additive uses of
caffeine was adequate. FDA applied no precautionary steps beyond
those built into its long-standing approval of caffeine use. But the
market soon produced a far more precautionary response. Soft-drink
manufacturers noted consumers’ concerns about caffeine and quickly
developed caffeine-free versions of their caffeinated products,
enabling consumers who wanted to avoid caffeine to do so easily.
Precaution was not the basis for government action, but came into
play through consumer product choices.


In summary, in both Type 2 cases the risks were relatively well
understood. In the lead case, it was not possible to define a level
of lead exposure that was “reasonably certain to cause no harm.” But
convincing data showed that many children’s total exposure to lead
was unsafe, and that food cans contributed moderately but
significantly to total exposure. A consensus was reached that
exposure should be reduced as much as feasible, and food canners
voluntarily phased out the use of lead-soldered cans over ten years
or so. In the second example, the evidence suggested that caffeine
use in soft drinks posed a minimal risk of birth defects, but many
consumers expressed preferences not to give their children caffeine
in soft drinks. The soft-drink industry responded by offering
consumers a choice of caffeine-free colas, enabling parents to make
their own precautionary decisions. In both cases, perhaps by
coincidence, comparatively precautionary actions were taken by the
private sector, not by government regulators.


Type 3. Cases in which the science is not complete
enough to support consensus risk assessments or resolve every
important question, but decisions have to be made. Some such
decisions have been precautionary in nature, and some far less
so.


There are many examples in this category, including many of the
most controversial food safety debates of recent decades. Generally,
those charged with doing risk assessments have believed they had a
valid basis in science for their decision, but that view has been
disputed by others with both expertise and a stake in the
outcome: industry or consumer organizations. The cases I have chosen
include:


Artificial sweeteners. In the 1960s and ’70s, two of the
most widely consumed sugar substitutes, cyclamate and saccharin,
came under suspicion as possible human cancer risks based on results
of animal feeding studies. In 1969, when cyclamate was shown to
produce tumors in high-dose rodent studies, FDA banned its use and
also recalled all cyclamate-sweetened foods. These precautionary
actions were at least in part based on recognition that a “safer”
alternative
existed: saccharin.25,26


It wasn’t long, however, before saccharin also was under fire, for
producing bladder tumors in rats. FDA had resolved, in the wake of
the cyclamate case, not to let itself be forced to ban a familiar,
long-used food substance simply because new safety questions had been
raised. In effect, the FDA found the precautionary stance of the food
safety law too restrictive, and wanted more flexibility to consider
both the quality of the scientific evidence and the practical
implications of a ban. The agency reclassified saccharin as an
“interim food additive,” a novel category not provided for in the
law, which FDA had invented to allow questioned substances to remain
on the market while further studies were conducted. FDA thus took
upon itself the burden of proof, permitting saccharin use to continue
while it sought to determine whether the safety questions that had
been raised amounted to a clearly demonstrated public health
hazard.26


Intense debate over saccharin persisted for several years, during
which expert committees of the National Academy of Sciences reviewed
the evidence several times, and the issue was publicly controversial.
Advocates who favored permitting saccharin use pointed to scientific
uncertainties, while public-interest groups and many in Congress
pressured the FDA to act in the precautionary way they felt the law
required. Ultimately, the FDA was prepared to ban saccharin, but
Congress intervened and blocked the FDA ban, reaching a political
decision to override the precautionary requirements of the food
safety law. The reasons Congress gave for its action were that the
evidence that saccharin posed a cancer risk in humans was
inconclusive, that sugar-free foods had perceived benefits, and that
there was no acceptable alternative for saccharin.26 Congress
required FDA and the NAS to further study the risk issues and
required saccharin-sweetened foods to carry a warning label, a
precautionary action, though far less effective than a ban.


More recently, additional non-caloric sweeteners, including
aspartame and acesulfame-potassium, have been approved by FDA for use
in specific foods. In each case, there has been controversy over
whether the testing done by manufacturers demonstrates safety, or
raises unanswered risk questions.27 The
focus of the debate on aspartame, the next major sweetener approved
after the saccharin debate, was primarily on effects on the brain and
on other non-cancer risks, which made it appear to be a safer
alternative for saccharin, at least in that there was little reason
to consider it a suspected carcinogen.


Color Additives. Like the artificial sweeteners, many dyes
used to color foods began to be suspected of posing risks of cancer
and various other toxic effects during the 1960s and ’70s. As it had
with artificial sweeteners, the evidence came from high-dose animal
experiments, and risks to people consuming the dyes in actual diets
were unknown. A few dyes were banned, while others for which the
evidence was not strikingly different were not. Some dyes banned by
the U.S. remained in use in Canada, and vice versa. The science was
too incomplete and uncertain to support a consensus approach or
consistent decisions on these
risks.28


After several frustrating attempts to resolve these issues
consistently, FDA backed away from a strict application of the
Delaney Clause to food colorings, taking the position that mere
evidence from animal tests that an additive might cause cancer was
not sufficient. FDA set itself a higher standard: It wanted to be
reasonably certain that an additive posed a significant risk of
cancer, based on a quantitative risk assessment, before banning its
use. This policy shift led to a protracted dispute between FDA and
the public-interest community, but the agency held its course.28


Thus, early experiences with efforts to take precautionary action
on possible cancer risks from sweeteners and food dyes spurred
greater subsequent emphasis on risk assessment, and in particular, on
quantitative estimates of cancer risk. Industry and government came
to agree that food safety decisions must be based on “better
science,” and that precise risk estimates offered a sounder basis for
decisions than “simplistic” precautionary decision rules. After more
than two decades of applying this approach, however, disillusionment
has set in among many participants, and the limitations of risk
assessment are a large part of the reason for the current growth of
interest in precautionary approaches.


BST. A third, recent case in this category is the use of
recombinant bovine somatotropin, or BST, a genetically engineered
hormone used to increase the milk output of dairy cows. The U.S. FDA
approved BST use in the early 1990s, after what FDA and makers of the
drug characterized as a thorough risk assessment that proved its
safety.29 The FAO/WHO Joint Expert
Committee on Food Additives has reviewed BST issues twice, in each
case with heavy participation by FDA scientists, and has agreed with
the FDA position.30 But scientists from
consumer NGOs and some expert bodies in other countries have cited
risk questions that were inadequately answered in the U.S. and JECFA
reviews.31 While most critics do not
assert that the evidence shows BST use is harmful to public health,
they do contend that the data haven’t provided “reasonable certainty
of no harm.” In addition, in the absence of evidence that BST use
directly benefits consumers, many feel that a more precautionary
stance toward unanswered risk questions is
justified.32


BST use is controversial in the U.S., and is prohibited in Canada
(because of effects on cattle) and in the European Union. A number of
other countries permit BST use, relying on the U.S. assessment or the
JECFA reviews. Although consumer groups have asked the FDA to require
labeling of milk and dairy products made with milk from cows treated
with BST, the FDA has rejected that request, and has discouraged
state laws and private-sector initiatives to provide such labeling.
Thus, U.S. decisions on BST use reflect a less precautionary approach
than other governments have taken to the same questions.


To summarize Type 3, cases in this category generally illustrate
the difficulty of using either simple precautionary rules or risk
assessment to make many food safety decisions. In each case examined
here, scientific debate had not reached a consensus when decisions
had to be made; substantial gaps, uncertainties and disputes
remained. In some cases, government took precautionary action despite
lack of convincing evidence of significant risk to human health. In
other cases, risk assessments were used to justify permitting the
continued exposure of the public to potential harm, although the
soundness of the risk assessments was often vigorously
challenged.


In most of these cases, broad public support for the definition of
“acceptable risk” used (explicitly or implicitly) by decision-makers
was lacking. In fact, when safer alternative
