Reading Journal

Minimum 2 sentences, ideally 3-5 sentences that analyze the reading

What was the key question/thesis?


What evidence was used?   

Was the argument convincing?   

So What?   

How might the strategies/info in the reading be useful for you in ethnographic research?   

Social Science & Medicine 58 (2004) 825–836

Governing peanuts: the regulation of the social bodies of
children and the risks of food allergies

Trevor Rous (a), Alan Hunt (b,*)

(a) Institute of Political Economy, Carleton University, Ottawa ON K1S 5B6, Canada

(b) Department of Sociology and Anthropology, Carleton University, Ottawa ON K1S 5B6, Canada


This paper explores the way in which children with life-threatening food allergies, their parents and their public
caregivers have increasingly been made subject to both projects of moral regulation and mechanisms of governance
aimed at the management of risk. We argue that new regulatory measures in Canada designed to significantly change
the food consumption practices among children in elementary schools have three main consequences. First, they
structure the relationship between ideologies of individualism and community so as to blur the distinction between the
public and private dimensions of school life. Second, such efforts ensure that a discourse, formerly concerned with the
problem of health promotion, has been supplanted by new sets of discourses styled by absent experts that focus on
the management of risk. Third, such regulatory practices have a particular dual effect that is characteristic of liberal
welfare governance. On the one hand, they encourage the individualized development of self-governing subjects, and on
the other, they stimulate a heightened moral problematization of ‘safe’ eating habits within the environment of the
elementary school.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Children; Allergies; School health; Risk governance; Peanuts; Canada

1. Introduction

This paper explores how children with life-threatening
food allergies, epitomized by the widely publicized case
of peanuts, can usefully be viewed as being constructed
in such a way as to generate specific projects of
regulation that impinge upon the everyday and pecu-
liarly public domain of the school. We draw upon policy
documents developed by the Ottawa-Carleton School
Board (hereafter Ottawa Board); these policies are
representative of those adopted by school boards
elsewhere in Canada of which we have had sight, and
we suspect are representative of similar developments
elsewhere. We focus on peanuts as emblematic of the
wider issue of allergies because it highlights the
remarkable transition of peanuts from the status of the
quintessential childhood food, both as staple and as

comfort food, to the signifier of a new wave of anxiety
and risk. Peanuts exemplify a contemporary environ-
mental insecurity in which everyday phenomena increas-
ingly come to be experienced as dangerous. What
interests us is that the regulatory projects triggered by
anxiety about peanuts involve more than preventive
health measures; rather, they are articulated in terms of
the management of risks, and we will argue have taken
on a distinctive moralizing character. We suggest that
there is more involved than attempts to govern
identifiable health risks to the safety of a small number
of children. It is for this reason that we examine the
impact of these regulatory projects upon the ‘social
bodies’ of children where they participate in the public
sphere, namely at school.
The concept ‘social bodies’ plays an important part in

our argument. At the most general level it signals an
insistence that the body is not simply a physiological
organism, but is significantly ‘social’ in that its condition
and attributes are the outcome of social action. The
concept ‘social body’ serves to convey a concern with the




condition and characteristics of an aggregate of bodies
(Poovey, 1995). To illustrate, the decision to require the
wearing of school uniforms creates divisions between
different schools and types of schools; decisions on
gender differentiation reinforce gender distinctions
(Symes & Meadmore, 1996). We have chosen to speak
throughout of ‘social bodies’ in the plural to avoid the
still common usage of the term ‘social body’ to refer to
the social totality, society, as if it were a single organism.
The previously unproblematic unitary aggregate of
‘children’ is now disaggregated; ‘allergic children’
become distinguished through their differential ‘risks’
and the differential regulation to which they are subject.
The significant implication is that these regulatory
practices have implications for the aggregate ‘social
bodies’ of children, both allergic and non-allergic, who
are made subject to new forms of regulation.
We focus our attention on three ways in which the
regulation of eating practices in schools is determined by
the social differentiation of allergic and non-allergic
children. Our first concern focuses on food regulation as
one dimension of the increasingly complex regulation of
children in schools. These strategies are designed to
structure consumption behaviour through discourses of
risk management. The desire to monitor and regulate
the dietary regimes of children exemplifies the ways in
which children’s social bodies have historically been
regarded as something strangely other than the sole
property of parents in the private realm; one of the most
persistent attempts to resolve the relation between
parents and schools has been the legal device of treating
the school as being in loco parentis. The general form of
the regulatory practices are projects that we contend
need to be understood as ones of moral regulation. In
addition to their evident focus on the handling of
medical risk, they are moral in that they project a vision
of a carefully regulated safe school with scrubbed and
disinfected utensils and surfaces, a hygiene infused
with moral enthusiasm. In this projection school
teachers have become ‘responsibilized’ for an expanding
range of risks to the physical, sexual and moral well-
being of school children. Responsibilization is the social
process that imposes specific responsibilities on some
category of social agents; Dean develops this concept in
his account of how the nineteenth-century notion
of the male ‘breadwinner’ responsibilized fathers for
the economic well-being of their children while mothers
were deemed responsible for their nutrition and moral
well-being (Dean, 1991). The increasing concern
with allergic reactions is but one instance of this
responsibilization of educators for the management of
risks affecting children. The protection of children’s
social bodies has increasingly become a public responsibility.

The second dimension of our concerns focuses on
those techniques of governance that generate an
expanding responsibilization of public caregivers, in
particular, teachers. There has been a good deal of
attention within medical discourses to the idea of
environmental sensitivity as an affliction. The ensuing
social anxieties about allergies reflect growing public
concerns about environmental insecurity. Such anxieties
are not new, but there has been a marked increase in the
public awareness of environmental dangers. It is our
contention that there have been diverse responses from
both state and non-state actors as they have sought to
grapple with the expanding incidence and diversity of
allergic reactions, a set of concerns that it may not be
unreasonable to view as having reached epidemic
proportions. Such concerns soon expand outside the
medical arena and in the case of food allergies soon find
themselves at the door of the classroom as a new
responsibility for teachers.
At the same time in a wider social context, there has
been an incremental growth of attempts to regulate
consumption practices in the public sphere that has
come to be widely perceived as an increasingly insecure
domain. For example, projects aimed at regulating
smoking and the use of perfumes attest to the existence
of a complex mix of medicalizing and moralizing
discourses surrounding consumption practices (Hunt,
2003). Such medico-moral discourses are medical
because they seek to minimize physiological harm; they
are moral because they import normative judgments
about the responsibilities and duties of the agents
(administrators, teachers, parents, pupils, etc.). Such
projects are never simply technical, but involve, im-
plicitly or explicitly, evaluations of conduct and a vision
of an environment purged of risk.
It is tempting to address the regulation of allergies as a
moral panic. However, to do so requires some elucida-
tion of the concept of ‘moral panic’. In the first place, it
serves to advance the contention that many regulatory
projects involve some moral dimension. There will be
wide agreement that the imposition of alcohol prohibi-
tion in the United States involved a significant moral
dimension. However, it is undoubtedly more controver-
sial to argue that current anti-smoking projects have a
moral dimension in that this suggests that they are not
simply matters of public health policy. The ‘panic’
constituent is more problematic since it implies that the
project involves an over-reaction or irrational outburst.
Some projects undoubtedly do have such characteristics;
the satanic child abuse scare that flourished between
1989 and 1991 is one such example (Richardson, Best, &
Bromley, 1991). The problematic feature of imputations
of ‘panic’ is that they betray a political partisanship by
designating the social action as irrational. It is
significant that such labels are less likely to be applied, for
example, to anti-globalization or environmental move-
ments toward which the commentator takes a positive
stance.


It is for these reasons that we suggest the need to
make an analytic distinction between the content of any
regulatory project and the normative or political
assessment thereof. It is for this reason that we prefer
the more neutral terms ‘moral regulation’ and ‘medico-
moral discourses’. These caveats having been entered,
we contend that the responses to child allergies
constitute a moral regulation project. The current
projects surrounding child allergies are embodied in
discourses organized around escalating concern over
risks to children’s safety. The major preoccupation of
these discourses revolves around the degree of vigilance
required of school staff in the management of the risks
posed by allergenic foods.
Our third theme focuses attention on a duality at the
heart of school allergy policies. On the one hand, rules
are instituted that individualize the risks confronting the
allergic child. On the other hand, the form of these
interventions is unmistakably social involving a heigh-
tened problematization of the ‘normal’ eating habits of
children and the provision of food by their parents; this
requires the investigation of why specific social practices
come to be conceived as problems and how they are
connected to or divided off from other phenomena
(Osborne & Rose, 1997, p. 97). In general, governance is
both individual and social, individualized and general-
ized. This dualism is at the core of Foucault’s notion
of ‘bio-politics’ whose target is the well-being of social
aggregates or populations (Foucault, 1997).
Although rarely attracting much attention, schools
have long functioned to encourage ‘social’ or collective
eating habits which transform the individualized likes
and dislikes of home-eating into a readiness to share
standardized meals in collective social situations. While
the socialization of eating was once the preserve of the
school, it should be noted that standardized commercial
‘McWorld’ food now plays the dominant role in shaping
‘socialized’ eating habits. As
a food institution, the school is increasingly becoming a
venue for the consumption of food brought from home;
long true in North America, increasingly so in the UK.
Thus risk management requires the monitoring of
children’s diets while still seeking to stimulate patterns
of ‘healthy eating’.
A major focus of the programmes we explore seeks to
structure both school-community projects and the
individualized mechanisms that transform agents them-
selves into self-regulating subjects. The targets of these
projects are the consumption practices. As a result, the
organized technologies of risk management that aim to
reduce instances of allergic reactions among children
have become generalized and disseminated in such a way
as to also monitor and regulate non-allergic children,
school staff and parents. The institutionalization of
public regulation of children’s diets was in the past
primarily a welfarism promoting an adequate diet (for
example, provision of school lunches, vitamin C, free
milk, etc.) in order to produce a ‘healthy population’ of
citizens, workers and mothers (Foucault, 1991). These
programmes operated through a combination of health
promotion and disease-prevention discourses. Today,
while food consumption projects remain focused upon
‘healthy eating’, they have become increasingly pre-
occupied with an emphasis on the avoidance of food-
related risks.
The policy documents of the Ottawa-Carleton District
School board discussed below had their origin in an
early version from 1995 that drew heavily on policies
formulated by Ontario’s Middlesex-London Health
Unit; it is significant that this had been drawn up in
conjunction with Allergy/Asthma Information Associa-
tion (AAIA) and is evidence of the considerable role of
advocacy organizations. The AAIA, formed back in the
1960s, undertakes public education to raise awareness of
the dangers of allergens, provides training in adminis-
tration of epinephrine, and campaigns for food and
airline companies to provide ingredient information.
This last project has resulted in the proliferation of
products carrying the information ‘This product may
contain peanuts’, the value of which is questionable. In particular,
the AAIA is mandated to lobby for unified public health
policies on the control of allergens. The policies were
revisited and revised in 1997 and 1998 in order to reflect
the views of a recently created committee of school
principals and in order to incorporate the views of the
legal counsel of the local Health Department. The policy
incorporated extracts from the Canadian School Boards
Association (CSBA, 2001).
Estimates of the incidence of food allergies, as distinct
from intolerances, vary. Some 1–2% of the population
in developed societies exhibit allergies to foods; this
figure may be as high as 5% in children under 5 years.
Around 1.3% of children and 0.3% of adults have
allergic reactions to peanuts. There is evidence of an
increased incidence of food allergies over recent decades.
Peanut sensitization, as measured by a standard positive
skin prick test, has increased by 55% while allergic
reactions have risen 95% over the last 10 years. It is
likely that such increases result from a combination of
factors: increased awareness of allergies, better diag-
nosis, increased reporting, and increased consumption
of foodstuffs causing allergic reaction; for example,
peanuts are more widely diffused in prepared foods.
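To give a rough sense of scale, the prevalence figures cited above can be converted into expected head-counts for a single school. The sketch below (Python, illustrative only; the school size of 400 pupils is an assumed value, not from the paper) does this arithmetic:

```python
# Illustrative arithmetic only: expected numbers of allergic pupils in a
# hypothetical elementary school, using the prevalence rates cited in
# the paper. The enrolment figure is an assumption for illustration.
pupils = 400  # assumed school size

food_allergy_rate = 0.02     # upper bound of the cited 1-2% general figure
peanut_allergy_rate = 0.013  # cited 1.3% rate for children

expected_food_allergic = pupils * food_allergy_rate
expected_peanut_allergic = pupils * peanut_allergy_rate

print(f"Expected pupils with food allergies: {expected_food_allergic:.1f}")
print(f"Expected pupils with peanut allergy: {expected_peanut_allergic:.1f}")
```

On these assumptions a typical elementary school would expect a handful of allergic pupils, which is consistent with the authors' observation that administrators now routinely encounter such cases.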
Statistics on the proportion of those with allergies who
experience anaphylactic reactions do not seem to be available.
There is evidence of an increased vulnerability to
allergies suggested by the fact that they are most
common in geographical areas where traditional child-
hood diseases (polio, diphtheria, etc.) have been
eradicated and where there have been improvements in
hygiene. The eradication of infectious diseases may have
made the modern immune system ‘less fit’ and thus more
vulnerable to allergens. Better hygiene has meant that
fetuses, which once had to contend with parasites present
in the maternal blood, now react to other substances in
the blood such as allergens, and are thus predisposed to
experience allergies after birth.
There is an apparent tendency to conflate allergies and
food intolerances and this has created an impression
that there has been a large increase in those affected by
allergies. It is possible that some allergies regarded as
‘life-threatening’ may be less serious; the increased
publicity about the potential severity of allergic reac-
tions may have resulted in understandable caution to
avoid exposure to allergens and may have inflated the
number of reported instances. There is no doubt that
school administrators and principals encounter many
more cases of children with allergies than in the past;
hence the fact that allergy regulation has become a
widespread feature of school life.

2. The problem of children’s social bodies in a risk society

A focus on social bodies refers to processes of
aggregation that bring together the dispersed circum-
stances of children as part of a population of ‘school
children’. At the same time, the concept of ‘social
bodies’ can also refer to processes of ‘disaggregation’,
the dissection of the social in order to diagnose its
problems, for example, the distinction made in policy
documents between allergic and non-allergic children.
The social bodies of school children can be viewed as
political and economic surfaces, ready to be inscribed
with the telltale marks of projects that seek to regulate,
monitor, or otherwise govern their distinctly social
bodies. Childhood, it should be noted, is the most
intensively governed period of human life (Rose, 1990,
p. 121).
The insistence that individuals are possessed of social
bodies serves to emphasize that each individual body is
influenced by concerns and anxieties that impact upon
the aggregated social bodies, for example, over such
issues as body images, weight and the like. Issues
affecting social bodies can give rise to both solidarities
and conflicts; for example, they may act to provoke
divisions between the parents of allergic children and
those of non-allergic children. Thus the fate of
individual bodies is linked to the relations of social
bodies (Freund & McGuire, 1995, p. 3). The bodies of
children are social and historical constructs in that
discourses on the dietary regimes of children give effect
to the shifting ways in which the health of children has
been conceived. In broad terms, increased value has
been attached to children’s health for reasons connected
to declining infant mortality rates.
The historical significance of children’s social bodies
as objects of governance has its origins in the fact that
the family has long remained at the centre of projects of
social intervention aimed at diminishing social anxieties
with respect, not only to the health, but the morals,
criminal tendencies and educability of children (Donze-
lot, 1979). The quest for familial roots to social
problems is currently visible in the rhetoric of ‘family
values’. The dietary regimes of children are unstable
targets of governance since tensions surround the
boundaries between parental and public duties and
responsibilities; this is particularly the case where some
moralizing element is present. Projects of moral regula-
tion are rarely systematically organized strategies; their
essence is attempts on behalf of some social group to
problematize the conduct or culture of others, and to
impose regulation upon them (Hunt, 1999, p. 1). Moral
regulation describes a process of moralization in which
some social practice is treated as a moral issue; it finds
its prospects for success or failure in the capacity to be
generalized and disseminated, in the ability to ‘assert
some generalized sense of wrongness of some conduct,
habit or disposition’ (Hunt, 1999, p. 8).
One of the reasons that particular projects of moral
regulation are interesting is that they demonstrate how
local social forces can successfully mobilize themselves
to articulate policy goals that can be imposed upon
policy makers. This is evidence that moral regulation
projects are often initiated from below and that the
primary initiators are frequently not holders of institu-
tional power (Hunt, 1999, pp. 1–2). This is the case with
respect to the regulatory response to allergic children in
schools. The pressure for regulatory intervention came
from parents organized through a network of socio-
medical activists linked primarily through the Internet.
From the contention that the social bodies of children
are targets of moral regulation, it follows that the risks
confronted by children with life-threatening food
allergies have come to be viewed as conditions requiring
intervention. There are a variety of participants involved
in social action aimed at addressing this problem: the
children themselves (distinguished as allergic and non-
allergic), teachers, school administrators, and parents
(again both those of allergic and non-allergic children).
The parents of allergic children tend to be strongly
committed to the view that what is at stake is the
imperative for the regulation of dietary practices within
the school. They tend to exhibit a skepticism about the
abilities of individual teachers to fulfil properly their
expanding responsibilities for the effective governance of
the detailed food practices from the monitoring of
lunch-sharing practices to the scrubbing down of class-
room equipment and desks.

The scenario that we find in schools today, to borrow
from Bauman (1992), is that the school teachers and
those directly involved in the day-to-day care of
children, are no longer the legislators of regulatory
projects but merely their interpreters. The children
themselves, their parents and their teachers are in
important respects the authors of these projects along-
side the ‘absent experts’ both within and without the
educational bureaucracies; and these experts are not
only official medical personnel, but also the self-created
experts who are, more often than not, the parents of
anaphylactic students.
Allergic and non-allergic children alike find them-
selves immersed within a ‘risk society’ (Beck, 1992a, b;
Giddens, 1991). Risks are characteristically modern;
they are often hidden, impersonal and unobservable
conditions which fuel the uncertainties of the age. They
generate attempts to deal systematically with the
hazards peculiar to modernity. One of the most
prevalent forms of response is actuarial calculation
and insurance (Ewald, 1991). ‘Risk is a way…of
ordering reality, of rendering it into a calculable form’
(Dean, 1998, p. 25). The relationship between a culture
of risk management and projects of moral regulation is
one that is complementary; projects of moral regulation
are aimed at problematizing conduct that might other-
wise result in the eruption of the unpredictable, making
the development of techniques of risk management the
preferred response in order to maintain security and
order even though results may be difficult to guarantee.
Allergies are important exemplars of risk: they are
complex in their etiology, often unknown until they
manifest themselves in some catastrophic incident, and
they are unequally distributed. Allergies also fit another
dimension of Beck’s analysis in being classless. While the
language of risk is technical, abstract and scientific, risk
is still grounded in moral discourses; for example, the
plethora of discourses surrounding AIDS involve com-
plex mixes of medical, sexual and moral elements.
Further, as Mary Douglas notes, risks are social
constructs with a close link between ‘risk’ and ‘social
justice’ (Douglas, 1992, p. 36). More importantly, risks
exhibit a paradox of risk and regulation: while we live in
societies with more risk and uncertainty, at the same
time everyday life has become more standardized and
regulated (Turner, 1995, p. 226, chap. 12). This is
precisely the duality that transects the response to
allergies. This dualism is captured in O’Malley’s (1992)
distinction between two forms of risk principle: ‘pru-
dentialism’ which urges individuals to take responsibility
for the management of their own interests and
‘socialized risk management’ that is instituted through
collectivist policies.
The proclivity of administrators to respond to
problems with policy initiatives and regulatory output
exhibits a second dimension of risk society. Conscious-
ness of risk induces attempts to ‘colonize the future’, to
take steps to make provision for the possibility of risk
incidents and to broadcast the attempt to make
provision to ensure security in the face of future risks
(Giddens, 1991, p. 111). Such attempts exhibit two
distinctive varieties. The first is couched in terms of a
calculative rationality of liability minimization; this may
or may not be linked to attempts at risk reduction.
Liability reduction may content itself with a preventa-
tive approach that seeks to reduce or deflect possible
criticism. In its most extreme and negative form liability
minimization may manifest itself in risk aversion, where
regulations seek to smother every imaginable risk to
such an extent that, if adhered to, they would result in the
near paralysis of social life or, more likely, in the
systematic avoidance of the regulatory machinery.
A second form has a legal (or, probably more
accurately, quasi-legal) approach in that it seeks to
minimize the likelihood of litigation or the extent of any
legal liability. Although such liability avoidance played
some part in the initiation of the Ottawa Board policy,
the substantive contents of the policy documents are, as
we seek to show, directed primarily at teachers and
school principals; they are significantly practical injunc-
tions rather than inclusive legal generalizations. The
Canadian School Boards Association Handbook (CSBA,
2001, p. 2) is explicit in its concern to minimize the legal
liability of school boards. It notes that ‘the Supreme
Court of Canada has recognized that the ‘standard of
care’ owed by an educator to a student is that of a
‘careful and prudent parent’’. From this it is deduced
that, although as yet not tested, the courts would be
likely to require school boards to adapt the school
environment to accommodate anaphylactic students.
The two approaches to risk handling are illustrated in
the opening paragraph of the Ottawa Board policy
documents issued in 1998 (and reissued in 1999):
Objective: To create a safe and healthy environment
for students through a co-operative effort by staff,
parents, schools, and related agencies, while recog-
nizing that there are limits imposed by legislation,
school configuration, number of students, and
available staff. (O-CDSB, 1998, p. 2)
The Ottawa Board policies return time and time again
to the extra precautions needed on excursions and field
trips since these events involve an increased unpredict-
ability of risk conditions. It is but a short step for the
risk aversion associated with such expeditions to lead to
impediments being put in place which threaten to
smother much extra-curricular activity.
Castel (1991, p. 281) describes the process of moral
regulation as one in which the initiators of such projects
work to ‘dissolve the notion of a subject or a concrete
individual, and put in its place a combinatory of factors,

the factors of risk’ (emphasis in original). The possibility
of an allergic child being exposed to the allergenic food
product constitutes the specific risk situation. Yet, since
many allergic reactions are the result of cross-contam-
ination rather than being the result of direct ingestion,
the risk condition takes on an expanded character. As a
result, the strategies of managing this risk will inevitably
involve dividing practices that differentiate between
allergic children and those who are allergy-free. Care is
taken to reduce the overt singling out of the allergic
child since this would be incompatible with the
prevailing educational thinking against the separation
of different categories of students.
The Ottawa Board policies reveal considerable tactical
caution with respect to the differentiation between
allergic and non-allergic children and their respective
parents. The opening lines of the recommended draft
letter to parents state:
We have a student in your child’s class who has life-
threatening allergies to peanuts and all types of nuts.
We want to thank parents for your understanding
and co-operation in the past when we have requested
that you avoid sending peanuts and nut products to school.
Note how parents are ‘brought on side’ by implying
that they have previously cooperated with a ‘no-peanuts’
policy. The next step is that special provisions are
identified for the handling and consumption of food.
The body of the detailed provisions refer to such matters
as where food can and cannot be eaten, what utensils are
supplied, how surfaces are to be prepared and cleaned.
An interesting feature of the Ottawa Board policy is
that, rather than segregating the anaphylactic student, it
is the non-allergic student who is separated should they
bring a peanut item to school. The draft letter to parents
continues:
Should your child bring a food to school containing
peanut or nut products, please ask your child to let
the teacher know. We will provide alternative eating
arrangements for that day to ensure the safety of the
[allergic] child.
Note that the possibility that a child might bring
peanuts to school establishes that there is no prohibi-
tory restriction on what food parents may provide for
their children. This serves the significant tactic of taking
account of parents who might regard the growing list of
school ‘Do’s and Don’ts’ as evidence of risk aversion.
This avoidance of prohibitory language contrasts with
the more traditional list of ‘banned substances’ in
elementary schools (gum, candy, etc.).
The problem of a risk society is masked by the fact
that the relationship of allergic risks to the social body
of a child is at first glance a strangely disembodied one.
It is the allergenic food product that functions as the
known life-threatening pathogen, not the child him or
herself. Within the reality of a risk society the efforts of
schools to manage the risks associated with food
allergies will inevitably fail to extricate the child as a
human subject from the combinatory of risk factors that
diminish the carefully cultivated impression of the
school as a well-ordered and secure social space.
Similarly, it is a child’s chance of contact with the
allergen that is the risk to be managed by regulatory
practices that attempt to secure the reduction of that
risk. The Ottawa Board ‘Guidelines’ for safer class-
rooms suggest a detailed level of surveillance by teachers
that it seems unlikely could realistically be sustained:
Please watch student snacks in case there is anyone
with a peanut butter or other nut substance. Those
children should finish their snack and wash their
hands before they go outside.
Please wash all knives, forks, etc. before and after
use to prevent contamination.
Such specific protocols are problematic because they
‘responsibilize’ teachers; this involves not just imposing
responsibilities on teachers, but additionally marks
out a form of governing through acting on social agents
to change both the responsibilities imposed on them and
the ones that agents take on themselves. Not only do these
policies expand the responsibilities of teachers, but
potentially open them to disciplinary or even legal
liability. We suggest it is significant that the policy
documents place great weight upon the washing of
hands. Not only is this a practical step that might
feasibly be implemented by busy teachers, but it also
resonates, indeed reproduces, older discourses about the
importance of cleanliness that hark back to ideas of
‘cleanliness is next to Godliness’. Yet we find, on closer
examination, that the social bodies of allergic children
are deeply inscribed with the techniques of governance
and regulation because of the unitary nature of the risk
culture’s construction of the child’s social self. The
allergic child is released from the forms of social
obligation that interaction with other children in
mealtime rituals would otherwise entail; this is most
clearly present in the repeated injunctions against food
sharing. At the same time, allergic children are required
to become active agents in the process of self-regulation
of their dietary habits; they are to be trained both at
home and in school about the risks associated with
sharing food or drinking-straws. However, this is not
simply a matter of following rules; there is evidence that
allergic children may be prone to ‘bullying’ at school.
They are trained to live cautiously and may exhibit signs
of timidity. Bullying may be especially serious when the
child is threatened with an allergenic food which is likely
to produce extreme anxiety. Such bullying is
compounded by the fact that the degree of danger is
likely to be poorly understood by the bully. Thus the
self-regulatory practices to be mastered by the allergic
child involve much more than the avoidance of specific
food items.
Within the broader consideration of the process that
Hacking (1986) describes as ‘making up people’,
strategies for managing allergy risks enable the employ-
ment of what Petersen (1997) calls ‘the agency of
subjects in their own self-regulation’. This serves to
normalize the ‘allergic’ role of the child in relation to his
or her subjective experience of risk, and results in the
recognition of ‘a more complexly structured and
intensely governed self’ (1997, p. 203). Thus the Ottawa
Board policy places considerable emphasis on the role of
the allergic child:
It is strongly recommended that the anaphylactic
student (as age appropriate) learn to take responsi-
bility for his or her own well-being.
Any system of moral regulation requires the presence
of mechanisms of self-discipline. These techniques tend
to stimulate an intensive self-regulation that is further legitimated by the creation of codified rules. Thus the
Ottawa Board policies set out detailed ‘responsibil-
ities’—carefully avoiding the word ‘rule’—for all parti-
cipants: parents of allergic children, parents of non-
allergic children, allergic children, non-allergic children,
class-teachers, and school principals. It is noteworthy
that in the ‘General Guidelines for Creating Safe and
Healthy Schools for Anaphylactic Students’ prohibi-
tions are explicitly avoided:
It is unrealistic and provocative to attempt to ‘ban/eliminate’ allergens … The goal is to minimize and control allergens through education. It is recommended that the word ‘ban’ not be used in any …
It should be noted, however, that in practice there is
an unstable boundary between bans and recommenda-
tions. To be asked not to send children to school with
peanut butter sandwiches can lead to stigmatization of
parents who question the need to eliminate an expand-
ing range of food items through imposing segregation on
their children.
It becomes clear that the allergic child, by virtue of the risks associated with his or her medical condition and the allergic role created, is in some measure ‘interpellated’ into the role of the anaphylactic. Althusser described interpellation as the process that ‘hails’ individuals into specific roles through their recognition of the way in which they are labelled, as when children have unflattering nicknames imposed upon them such as ‘Hey, you fatty!’ (Althusser, 1971, pp. 162–163). This
hailing of anaphylactic children situates them within a
specific discursive context and thus facilitates their
induction into the regulatory regime. At the same time,
teachers are also interpellated within the text of the
policy that defines their responsibilities. This attention
to the interactive character of the link between their
medical condition and the subjectivity of the allergic
child, and their relations with teachers and classmates,
adds credence to our contention that allergic children
are inscribed subjects through whom projects of moral
regulation operate to regulate the conduct of the wider
category of participants (parents, teachers, etc.). The
allergic child is an intermediary in the ebb and flow of the moral anxieties of others, anxieties that develop despite the fact that the degree of risk associated with exposure to food allergens is probably sufficiently low as to defy even the most detailed regulations. In a related way,
these anxieties can be seen to grow in relation to the
sense of apprehension that the risks to the safety and
health of allergic children are somehow not being
adequately managed.
Nevertheless, the presence of an anaphylactic child in
a school classroom gives rise to a rapidly expanding
network of social agents who are responsibilized for the
management of the risks that arise from the eating
habits of children. It is because the expectations of the
‘allergic’ role can vary across different contexts that the
responsibility for the management of risks can be so
widely diffused between the child itself, parents, school
volunteers, administrators, and teachers (Freund &
McGuire, 1995). But if the allergic role can be regarded
as one that Turner (1987, p. 55) describes as ‘an exit
from social relations for a temporary respite from social
obligations’, then it seems reasonable to suggest that the
opposite is also true, namely, that healthy or non-
allergic roles are criteria for social membership and
engagement. Despite this fact, allergic children continue
to be isolated or separated through dividing practices or
processes of social differentiation like the arrangement
for ‘alternate’ mealtime spaces for non-anaphylactic students.
Such precautionary practices are part of the modern
organizational ideology of prevention, one that, as
Castel (1991, p. 289) says, is
overarched by a grandiose technocratic rationalizing
dream of absolute control of the accidental, under-
stood as the irruption of the unpredictable. In the
name of this myth of absolute eradication of risk,
[modern ideologies] construct a mass of new risks
which constitute so many new targets for preventive intervention.
Although the Ottawa policy lists the potential risks
posed by fish, milk, eggs and wheat, the policies itemized relate only to peanuts. It can be presumed that this is
because to address the full range of potential allergens
would be so complex as to defy the practicalities of
school life. It is worth noting that the range of risks
associated with anaphylactic children is disseminated
in such a way as to encompass all of those contingent,
but largely unpredictable, possibilities (for example, of
cross-contamination through shared eating or cutting
utensils, desktops) which subsequently expand the scope
of the regulatory project in such a way as to govern the
conduct of all school staff and students alike.
Such expanding conceptions of risks to the safety of
school-aged children inevitably generate expanding
demands for further regulation, in the form of official
policy documents like those issued by the Ottawa Board.
This view is consistent with the point argued by Hermer
and Hunt (1996, p. 457) that there exists a widespread
assumption that the solution to social problems is
through the invocation of more rules, regulations or
laws. ‘[W]henever people feel that the fabric of society
has been loosened, the law is perceived as the last
defense, the last hope for the enforcement of morality
and order’. Projects and strategies of moral regulation
most often take the form of ongoing sets of practices
that persist until their target either undergoes a
significant transformation into some reconstructed
object, as when homosexuality becomes gay, or alternatively, is simply abandoned, as was the fate of nineteenth-century anti-masturbation crusades (Hunt, 1998).
When looked at in this light, it becomes clear that
efforts to regulate the social bodies and dietary habits of
children in schools are components of larger reactive
movements toward the regulation of the agents, in
particular, school administrators and teachers, who are
charged with the responsibility for protecting children.
The reactive character of these movements has its
origins in the anxieties of parents. Thus most projects
aimed at the regulation of the food risks of children are
in large measure derived from a particular form of
anxiety that expresses what Giddens (1991) has called an
ontological insecurity. Late modernity tends to be
associated with a destabilization of a previously
imagined ontological security, that is the confidence
that most humans have in the continuity of the
surrounding social and material environment. This
expresses itself in an increased awareness of risks that
leads to rising anxiety. The older familiar anxieties of
modernity (for example, unemployment, bereavement,
etc.) remain, but a new world of uncertainty has arisen
in which new anxieties arise that vary in their duration;
some arise and persist (AIDS, global warming, etc.)
while others are more short lived (‘road rage’, satanic
child abuse, etc.). Significantly, risks are increasingly
contested. Particularly prominent among the new
insecurities are those that relate to the environment
and the increasing anxiety about the risks that it poses to
the well-being of late modern subjects.
These ontological insecurities in turn generate ex-
istential anxieties about the reliability of knowledge and
expertise. The role of experts has changed: professionalized and legitimized by the state, they used to agree (at least in public); today, with the increasing diversity of expertise and a decline in the capacity of official expertise to exclude competition, they disagree.
Increasingly important are the self-made experts who run Web-sites dealing with allergies; these sites, predominantly promoted by allergic activists, are an example of these proliferating knowledges.
The social anatomy of a moral regulation project
We have argued that children’s social bodies tend to
be among the more intensely regulated aspects of their
social existence. While the family and the school engage
in such governance, we have been concerned to
demonstrate that children themselves are implicated in
such practices in such a way that an allergic child
becomes an active participant in the regulatory project.
The variety of practices of food consumption by
children are among the most stringently regulated
behaviours, both on the part of the child and of his or her caregivers within and outside the family. Many practices
and rituals are engaged in by an allergic child when
taking care to avoid particular allergenic food products.
The Ottawa Board policy ‘To Create a Safer Classroom’ includes the following detailed recommendations:
* the home-room teacher regularly reminds students to
help in minimizing risk by not bringing food allergens
to school;
* anaphylactic students are advised that they must eat
only the foods they bring from home;
* no one (including staff) trades or shares food with the
anaphylactic student;
* students are reminded not to share cups or straws;
* desks or other eating surfaces are to be kept clean;
* students are encouraged to bring allergen-free foods
for lunch and recess snacks;
* it is recommended that staff refrain from eating foods
containing allergens, but if they do, proper steps
should be taken to neutralize the effect (for example,
hand-washing, brushing teeth, using mouth wash).
These recommendations have a familiar ring that
resonates with ‘common sense’ rules of hygiene and
are emblematic of ‘proper manners’ in an era in which
attention to etiquette has markedly declined. The move
away from etiquette does not necessarily involve a
normative change; it may simply be that in late modern
societies people not only ‘bowl alone,’ but also eat alone,
often eating on the move and using fingers (Putnam, 2000).
Children as active participants in their social worlds:
In her study of power and resistance in parent-child
relations through mealtime rituals, Grieshaber (1997, p.
652) notes that ‘children actively challenge and resist
parental authority as part of daily domesticity while
engaged in the social practice of consuming food’. Yet,
the allergic child is reminded early and often of the
dangers of laxness in the self-regulation of eating. He or
she learns to be complicit in the regulation of consump-
tion practices, as well as in the regulation of those
around, given the risk of cross-contamination. The
dividing practices in schools that differentiate allergic
and non-allergic children involve an implicit recognition
of the allergic child as a sentient subject cognizant of
those aspects of his or her social body that render him or
her vulnerable.
Constant supervision would not, in principle, be
absolutely essential if children practiced such self-
monitoring. This would be the case if repetitive
regulation of food consumption habits practiced within
the confines of the allergic child’s home was firmly
established. Grieshaber (1997, p. 653) notes that ‘super-
vision throughout meals is constant so that children
eventually learn to consume food in a regulated and
disciplined manner, within a particular time frame and
in a limited space’. Grieshaber may be somewhat
optimistic since ‘eating alone’ becomes established at
an early age for many children; this is a further respect
in which the differentiation and individualization of the
allergic children renders them subject to the surveillance
of eating practices for longer than other children. This
developing self-governance extends its reach further
once an allergic child reaches elementary school. Social
anxieties about the inherent laxness of children in the
self-regulation of their mealtime practices when away
from the parental gaze, however, easily give rise to
anxieties about the behaviours of those charged with the
children’s care in public schools. Rose (1990, p. 123)
makes the point that
The upsurges of concern over the young—from
juvenile delinquency in the nineteenth century to
sexual abuse today—were actually moral panics;
repetitive and predictable social occurrences in which
certain persons or phenomena came to symbolize a
range of social anxieties concerning threats to the
established order and traditional values, the decline
of morality and social discipline, and the need to take
firm steps in order to prevent a downward spiral into
Since school boards, administrators, and classroom
teachers alike are the targets of projects of moral
regulation aimed at policing and monitoring eating
practices, these agents can be viewed as potential
offenders against such social discipline wherever the
safety of children is compromised through a real or
imagined failure to manage risk. As in so many other
fields the major response has been the introduction of
practices of credentialization and professionalization.
Yet the more extensive professionalization has become
the less it provides a sure guarantee and, as a result,
professionalization which once guaranteed autonomy
today elicits varying levels of suspicion. Nowhere is this
more evident than in the declining social status of teachers.
Today home-training and self-regulation are not
perceived as adequate responses to the risks confronted
by allergic children. Children are deemed to be less than
fully capable of their own self-regulation. In part, this is
a concomitant of a widespread infantilization of children
in developed societies. The social dangers which
confront children are perceived as being more numerous
and more dangerous and as a result children are subject
to a longer and more extensive period of parental
surveillance and regulation. Children tend not to be
allowed to gradually expand their encounters with the
outside world on their own, but rather are bussed to
school and transported to recreational activities. This
general reluctance to grant autonomy is compounded in
the case of the anxieties surrounding allergies because
the dangers to the safety of an allergic child stem from
unseen and mysterious allergens, ‘hidden’ in otherwise
innocent food products.
The response of caregivers seeking to alleviate the
social anxieties that are imposed upon them has been to
develop ever more elaborate regulations and guidelines
for their own conduct, while simultaneously attempting
to deflect responsibility back onto children and their
parents. The policy recommendations cited above
emphasize the responsibilities of both parents and their
allergic children. Such reciprocal responsibilization is
cast in the fashionable neo-liberal discourse of an
implied ‘partnership’ between parents and schools. Yet
the relationship between school officials and parents is a
peculiarly oppositional one; ‘negligent’ classroom tea-
chers and school administrators are often viewed by
parents as not taking adequate measures to ensure the
systematic management of risk. It is the presence of
sentiments that verge on the irrational that lends some measure of legitimacy to notions of moral panic.
Goode and Ben-Yehuda (1994, p. 3) draw attention to
the limits of rationality in projects of moral regulation.
While much collective action is appropriate to the
task, goal, challenge, or threat at hand, not all of it
can be characterized as completely rational. Erro-
neous beliefs purportedly accounting for the events
of the day are often held, and strategy may be
pursued, which seem almost designed to defeat self-
professed goals. In a crisis, enemies may be desig-
nated who pose no concrete threat whatsoever.
This seems to be precisely what is happening in efforts
to regulate the caregivers of children in the public realm.
Beliefs in the rational calculability of risks to the safety of
children, and in the practical inability of childcare
workers to govern the dietary habits of children so as to
manage largely incalculable risks are the underpinnings
of regulatory projects that are always in danger of
foundering upon their inherent limits. The pattern of
increasing governance over the care of children’s social
bodies in schools constitutes a large-scale organization of
institutional capacities deployed to temper public
anxiety about the undermining of what has previously
been viewed as the secure and predictable social order of
the school. The increased governance of children in
schools undoubtedly involves projects of moralization
that aim to enforce an adequate management of a
distinctively generated set of risks.
Rationality is elusive; it implies a proper reason that is
supported by some form of calculability. As Beck
argues, the ‘social pillars of the calculus of risk’ may
be said to fail whenever the boundary between
‘predictable risks’ and ‘uncontrolled threats’ is trans-
gressed; in such an event, the notion of security
degenerates into one of mere technical safety (Beck,
1992b, p. 103). Parents and other concerned adults have
done a diligent job in drawing attention to the risks of
allergenic food ingestion; but the risks of children
ingesting allergenic food are notoriously hard to calculate.
The concern to promote the safety of children has
found expression in policies that have had three main
consequences. First, the policies structure the relation-
ship between ideologies of individualism and community
so as to blur the boundaries between public and private
dimensions of social life, between personal and institu-
tional responsibilities and duties. Second, these efforts
ensure that health promotion has been marginalized by
a new set of discourses centered on the prevention of risk.
Third, the very locus of responsibility for the security of
children’s social bodies has changed from private to
public social spaces, while enhancing the further effect of
the associated governmentality by encouraging the
development of self-governing bodies. This last effect
is among the defining characteristics of the duality or
double-movement between rules that address the risks
confronting allergic children while, on the other hand,
stimulating a heightened problematization of ‘normal’
eating habits. We now attend to this issue.
Children as self-governing subjects: Welfarism and
The responsibilization of children’s public caregivers
acquires a powerful symbolic meaning, one that implies
ideological assumptions about how children ought to be
governed. As the legislative and procedural strategies
that aim to regulate schools and classrooms become
more systematically organized, we find that caregivers,
allergic and non-allergic children alike, and the social
spaces themselves are being deconstructed as agents, and
re-assembled as collections of risk factors. Managerial
policies aimed at supervising risks become a strategy
wider than the objectives of the projects of regulation
themselves. Concrete progressive strategies aimed at the
restoration of the central idea of self-governing caregivers and children alike are displaced by an ideological
monolith constructed out of prevention, security and
The complex apparatuses of governance that seek to
monitor and regulate the environmentally sensitive
social bodies have increasingly operated within the
framework of a new form of welfarism. What had
previously been regarded as parents’ responsibility for the well-being of their own children came to be conducted either in partnership with or under the
tutelage of, first, the increasingly coordinated system
of general practitioners, health visitors and school
nurses and then of more general interventions by social
workers (Donzelot, 1979). While this model of welfare
has been under attack in recent decades, it has by no
means been displaced. Rather, it has been supplemented
by a new welfarism whose major characteristic is the key
role of social activists. The older statist institutions are
still present but the centre of gravity has shifted towards
an alliance between activist interest groups, and the local
managers of the social institutions, in our case school
principals and administrators. This generates as one of
its most significant political implications the responsibi-
lization of classroom teachers. This is particularly
evident in the Ottawa Board policy with the injunction
that teachers are to be responsible for food monitoring,
but this is only one of a plethora of new duties imposed
upon them; parallel policy innovations require teachers
to respond to such diverse issues as bullying, racism, and
sexual harassment. The potential tensions are revealed
by the fact that while teachers are urged to enter into partnership with parents, at the same time they are
required to be vigilant about signs of physical abuse of
children by their parents.
From one perspective, these changes can be viewed as
an expansion of welfarist governmentality that is
committed to the all-round care of vulnerable members
of the population. Yet, at the same time, it reveals how
teachers in the front line of responsibility for the care of
school children, are gradually being supplanted in their
regulatory role by a community of ‘absent experts’ that
Rose (1999, p. 76) refers to as ‘the proliferating scientific
experts of the moral order’. It is these experts who
formulate the new policies and play a decisive role in the
creation of new rules and procedures. The expansionary
logic of moral regulation should be noted. Research
findings suggesting possible links between maternal
dietary practices and child allergies are such that, just as
consuming alcohol during pregnancy has figured in
medico-moral discourses, so the eating of peanuts has
fallen under a veil of disapproval. It is not surprising
that pregnant women themselves are active participants
in this inflation of self-regulatory projects (Lupton, 1999).
This shift in the form of welfarism creates a new
relationship between experts and administrators. Castel
(1991, p. 281) describes the process as one in which
displacement completely upsets the existing equili-
brium between the respective viewpoints of the
specialized professional and the administrator
charged with defining and putting into operation
the new sanitary policy. The specialists find them-
selves cast in a subordinate role, while managerial
policy formation is allowed to develop into a
completely autonomous force, totally beyond the
surveillance of the operative on the ground who is
now reduced to a mere executant.
A project of moral regulation with teachers, school
administrators and officials as the proclaimed targets is
thus born, and class-teachers are called upon to regulate
an ever-expanding range of aspects of school life that are
either unpredictable or directly contribute to the risk of
allergic children being exposed to hazardous allergenic foods.
Such a system of regulation might be regarded as a
triumph of the modern welfare state in that the projects
of moral regulation that have sought to make public
caregivers responsible for the governance of children’s
bodies have thus added new dimensions to the classroom and playground as governable spaces within which public surveillance of children is to be maintained. Yet
this picture is far from satisfactory.
Classroom teachers have become increasingly subject
to control by the bureaucratic-administrative machine.
Classroom and playground alike have become social
spaces simply waiting to be filled by official markers of
the new sanitary and preventive policies. The strategies
of prevention have significantly appropriated the pre-
viously occupied discursive place of health promotion.
The construction of mealtime rules and dietary guide-
lines formerly concerned to endorse ‘healthy eating’, has
been supplanted by much more interventionist policies
aimed at the avoidance of allergenic foods.
Such regulatory ‘markers’ on public spaces have,
perhaps all too predictably, taken the form of the classic
prohibitory slashed circle ‘No peanuts’ or ‘Allergen
free’. Such signing has become the ‘official graffiti’ of the
school that is part of a pervasive transformation in the
forms of regulation of social spaces (Hermer & Hunt,
1996, p. 463). These markers have four main features,
the first and most telling being that they intervene in the
governance of conduct. Second, they invoke an under-
lying discursive framework consisting of an implied
reader, and an implied author, namely, the absent
experts on risks. Third, such markers have a distinctly
public character, mapping out spatial dimensions in
which the dietary regimes of children are monitored.
Fourth, they have a mobile but nonetheless fixed
attachment to entrances, doors, walls, and other spaces
of social passage. The overall effect communicated is
that such signs of governance have come to acquire the
sort of permanence that renders them such a pervasive
part of the modern socialscape.
There is, as Donzelot (1979, p. 25) reminds us, an
important link between ‘the order of families and the
order of the state’. In its most developed form the family–state partnership was enshrined in the social-democratic
or welfarist ideology of ‘from the cradle to the grave’.
The fabrication of a child’s social self involves significant
connections with techniques promoted by the new form
of liberal welfarism. After a child’s birth and the
provision of post-natal services, entry into school is a
key moment in the linkage between family and state, one
which creates potential tensions in the family–state
relation. One significant feature is the anxieties and
moral panics that arise from charges of laxness on the part of public caregivers, as when social
workers fail to follow up on evidence of physical abuse.
Similar conflicts may arise in the policing of children’s
mealtime habits, where teachers or others are accused of
being lax in their surveillance. Such concerns are often
especially sharp because of the emotional energy
invested in parental social anxieties about their chil-
dren’s security in a dangerous world.
Protective strategies aimed at the management of risks
take the form of both community and individual
responsibilization strategies. However, as we have
suggested, there exists a double movement between
collective and individual responsibility. The ever-extend-
ing reach of this new governmental ideology can be both
liberating and constraining since it entails both an
increased space for individuals to acquire the capacities
of self-regulating subjects, and simultaneously legiti-
mates the capacity to govern possessed by the liberal
democratic state and its agencies. This feature is well
captured by Nicholas Fox (1994, p. 33).
Governmentality entails two often conflicting effects:
the reinforcement of the community and increasing
We have shown how allergic children are the objects
of specific moralizing practices around the security of
their social bodies. So also are public caregivers
inscribed with the mark of governmentality through
the diligence with which they approach the task of
managing the risks of exposure to food allergens in that
quintessential public space, the school.
Althusser, L. (1971). Lenin and philosophy (pp. 162–170).
London: New Left Books.
Bauman, Z. (1992). Legislators and interpreters: Culture as the
ideology of intellectuals. In Intimations of postmodernity
(pp. 1–25). London: Routledge.
Beck, U. (1992a). Risk society: Towards a new modernity
[1986]. London: Sage.
Beck, U. (1992b). From industrial society to the risk society:
Questions of survival, social structure and ecological
enlightenment. Theory, Culture & Society, 9, 97–123.
Canadian School Boards Association (CSBA). (2001). Anaphylaxis: A handbook for school boards. Ottawa: CSBA.
Castel, R. (1991). From dangerousness to risk. In G. Burchell, C. Gordon, & P. Miller (Eds.), The Foucault effect: Studies in governmentality (pp. 282–298). London: Harvester Wheatsheaf.
Dean, M. (1991). The constitution of poverty: Toward a
genealogy of liberal governance. London: Routledge.
Dean, M. (1998). Risk, calculable and incalculable. Soziale
Welt, 49(1), 25–42.
Donzelot, J. (1979). The policing of families. New York:
Random House.
Douglas, M. T. (1992). Risk and blame: Essays in cultural
theory. London: Routledge.
Ewald, F. (1991). Insurance and risk. In G. Burchell,
C. Gordon, & P. Miller (Eds.), The Foucault effect: Studies
in governmentality (pp. 197–210). Hemel Hempstead:
Harvester Wheatsheaf.
Foucault, M. (1991). Governmentality [1978]. In G. Burchell,
C. Gordon, & P. Miller (Eds.), The Foucault effect: Studies
in governmentality (pp. 87–104). Hemel Hempstead: Har-
vester Wheatsheaf.
Foucault, M. (1997). The birth of biopolitics. In P. Rabinow
(Ed.), Ethics, subjectivity, and truth, Vol. 1: The essential
works of Michel Foucault (pp. 73–79). New York: New Press.
Fox, N. J. (1994). Postmodernism, sociology and health. Toronto: University of Toronto Press.
Freund, P. S., & McGuire, M. B. (1995). Health, illness, and the
social body: A critical sociology. Englewood Cliffs, NJ: Prentice Hall.
Giddens, A. (1991). Modernity and self-identity: Self and society
in the late modern age. Cambridge: Polity Press.
Goode, E., & Ben-Yehuda, N. (1994). Moral panics: The social
construction of deviance. Oxford: Blackwell.
Grieshaber, S. (1997). Mealtime rituals: Power and resistance in
the construction of mealtime rules. British Journal of
Sociology, 48(4), 649–666.
Hacking, I. (1986). Making up people. In T. C. Heller, M.
Sosna, & D. Wellbery (Eds.), Reconstructing individualism:
Autonomy, individuality, and the self in western thought
(pp. 222–236). Stanford: Stanford University Press.
Herner, J., & Hunt, A. (1996). Official graffiti of the everyday.
Law & Society Review, 30(3), 455–480.
Hunt, A. (1998). The great masturbation panic and the
discourses of moral regulation in nineteenth- and early
twentieth-century Britain. Journal of the History of Sexu-
ality, 8(4), 575–615.
Hunt, A. (1999). Governing morals: A social history of moral
regulation. Cambridge: Cambridge University Press.
Hunt, A. (2003). Risk and moralization in everyday life. In R.
Ericson, & A. Doyle (Eds.), Morality and risk (pp. 165–192).
University of Toronto Press: Toronto.
Lupton, D. (1999). Risk and the ontology of pregnant
embodiment. In D. Lupton (Ed.), Risk and sociocultural
theory: New directions and perspectives (pp. 59–85). Cam-
bridge: Cambridge University Press.
O’Malley, P. (1992). Risk, power and crime prevention.
Economy and Society, 21(3), 252–275.
Osborne, T., & Nose, N. (1997). In the name of society, or three
theses on the history of social thought. History of the human
sciences, 10(3), 87–104.
Ottawa-Carleton District School Board. (1998). Protocol for
creating safe and healthy schools for anaphylactic students.
Petersen, A. (1997). Risk, governance and the new public
health. In A. Petersen, & R. Bunton (Eds.), Foucault, health
and medicine (pp. 189–206). London: Routledge.
Poovey, M. (1995). Making a social body: British cultural
formation. Chicago: University of Chicago Press.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of
American community. New York: Simon & Schuster.
Richardson, J. T., Best, J., & Bromley, D. G. (Eds.). (1991). The
satanism scare. New York: Aldine de Gruyter.
Rose, N. (1990). Governing the soul: The shaping of the private
self. London: Routledge.
Rose, N. (1999). The powers of freedom: Reforming political
thought. Cambridge: Cambridge University Press.
Symes, C., & Meadmore, D. (1996). Force of habit: The school
uniform as a body of knowledge. In E. McWilliam, &
P. G. Taylor (Eds.), Pedagogy, technology and the body
(pp. 171–191). New York: Peter Lang.
Turner, B. S. (1987). Medical power and social knowledge.
London: Sage.
Turner, B. S. (1995). Risk society and the new regime of disease.
In Medical Power and Social Knowledge (2nd ed.). London:
T. Rous, A. Hunt / Social Science & Medicine 58 (2004) 825–836836

Governing peanuts: the regulation of the social bodies of children and the risks of food allergies
The problem of children’s social bodies in a risk society
The social anatomy of a moral regulation project
Children as self-governing subjects: Welfarism and governmentality

Parsing the peanut panic: The social life of a contested food allergy epidemic

Miranda R. Waggoner*

Princeton University, Office of Population Research, 228 Wallace Hall, Princeton, NJ 08544, USA

Article info

Article history:
Available online 6 May 2013

Keywords:
Peanut allergies
Food allergies
New epidemics
Disease classification

Abstract

As medical reports over the last decade indicate that food allergies among children are on the rise,
peanut allergies in particular have become a topic of intense social debate. While peanut allergies are
potentially fatal, they affect very few children at the population level. Yet, peanut allergies are charac-
terized in medical and popular literature as a rising “epidemic,” and myriad and broad-based social
responses have emerged to address peanut allergy risk in public spaces. This analysis compares medical
literature to other textual sources, including media reports, legislation, and advocacy between 1980 and
2010 in order to examine how peanut allergies transformed from a rare medical malady into a
contemporary public health problem. I argue that the peanut allergy epidemic was co-constructed
through interactions between experts, publics, biomedical categories, and institutions, while social re-
actions to the putative epidemic expanded the sphere of surveillance and awareness of peanut allergy
risk. The characterization of the peanut allergy problem as an epidemic was shaped by mobility across
social sites, with both discursive and material effects.

© 2013 Elsevier Ltd. All rights reserved.


Peanut allergies represent charged terrain in medicine and in
society. Deemed a population epidemic by some physicians and a
case of population hysteria by others, peanut allergies have become
the focus of much social activity and controversy. For instance,
during the last decade, schools have banned peanut butter, segre-
gated lunch tables based on the presence of peanuts, and evacuated
school areas when peanuts have been found (Christakis, 2008;
Kalb, 2007). This so-called peanut panic occurs in many educational
or day care settings (Kilanowski, Stalter, & Gottesman, 2006) and
has even extended to higher education in the form of nut-free
dormitories (Ahmed, 2008). Airlines and baseball parks have
instituted peanut-free zones; and, since labeling legislation in the
early 2000s, U.S. consumers can reliably ascertain from package
labels whether a processed food product contains peanuts or came
into contact with peanuts during manufacture. Signage to the same
effect is now regularly posted in food vending spaces.

Peanut allergies are commonly referred to as an “epidemic.” A
simple review of media headlines and medical titles over the past
decade impresses the point that the population suffering from a
peanut allergy has expanded. Contemporary books and articles aim

to alert lay readers to the idea that an allergy to the peanut (a
legume, not a nut) is indeed a troubling epidemic (Fraser, 2011),
highlighting the vexing nature of its rise as a medical and public
problem (Groopman, 2011). Yet, how big is the problem?

The U.S. National Center for Health Statistics states that the
prevalence of reported food allergies among children rose 18% from
1997 to 2007 and that currently four out of every hundred children
have a food allergy (Branum & Lukacs, 2008). Medical experts claim
that cases of peanut allergies, in particular, doubled among children
around the turn of the twenty-first century (Sicherer, Munoz-
Furlong, & Sampson, 2003). However, the peanut allergy affects,
at maximum estimates, a little over 1% of children in North America
and the U.K. (Ben-Shoshan et al., 2010; Sicherer & Sampson, 2007).
Children often outgrow other types of food allergies, but the peanut
allergy appears to remain more stable and more severe than other
food allergies (Sicherer & Sampson, 2010). Furthermore, although
peanut allergies are not medically-contested in their extreme, or
“true,” form (an IgE-mediated allergic, or anaphylactic, reaction is a
clear immunologic response that can lead to shock, difficulty in
breathing, or death without an injection of epinephrine, or adren-
aline), it is difficult to diagnose a true allergy, and this is something
the medical establishment has wrestled with since the peanut al-
lergy phenomenon began its rise.

Undoubtedly, people with peanut allergies or sensitivities have
long existed; yet, the peanut allergy did not comprise a pronounced
medical research agenda prior to the 1980s, nor did it appear in

* Tel.: +1 609 258 5514; fax: +1 609 258 1039.

Social Science & Medicine 90 (2013) 49–55


media headlines with much frequency. At that time, an allergy to
peanuts was considered a rare malady and presumably not infused
with as much social meaning as it is today. Some medical and
cultural commentators call the current public responses to peanut
allergies unnecessary and overstated (Broussard, 2008; Sanghavi,
2006), suggesting a case of “otherwise healthy people in a
cascade of anxiety” (Christakis, 2008: a2880).
This paper examines how a scarce illness became considered a
conspicuous public problem, even an epidemic, and the ways in
which this process inflected the tenor of social responses to peanut
allergies. I look at medical literature on, and social responses to,
peanut allergies both before and after they were considered a sig-
nificant public health issue. By using the characterization of the
peanut allergy “epidemic” as an analytic pivot point, I examine the
aggregation and deployment of new ideas about an emergent
health and social problem. By also analyzing the social activity
around the emergence of the peanut allergy as an epidemic phe-
nomenon, I show how reactions to this putative epidemic
expanded the sphere of surveillance and awareness of peanut al-
lergy risk.
New epidemics and the production of social order
Health fears in developed countries now focus more on chronic
disease than on infectious disease (Rosenberg, 2009). While health
epidemics are still usually thought of in terms of contagious dis-
eases, scholars have recently paid close attention to the social rise
of non-communicable chronic diseases deemed “epidemics,” such
as autism, obesity, or breast cancer (see, e.g., Eyal, Hart, Onculer,
Oren, & Rossi, 2010; King & Bearman, 2011; Lantz & Booth, 1998;
Paradis, Albert, Byrne, & Kuper, n.d.; Saguy & Almeling, 2008).
Paradis et al., in their analysis of the use of “epidemic” in the
medical literature, reveal an “epidemic of epidemics” during the
second half of the twentieth century; they argue that the invocation
of the term “epidemic” has, over time, served as a rhetorical strategy
to unearth symbolic struggles over disease attention (Paradis et al.,
n.d.). Boero (2007) uses the term “post-modern epidemics” for
contemporary medicalized phenomena that take on monikers of
more “traditional epidemics”; as Rosenberg (1992: 278) writes, the
term “epidemic” is today used in a multiplicity of ways, often in a
metaphorical manner, “moving it further and further from its
emotional roots in specific past events.”
Much of this literature on the “new epidemics” focuses on the
emergence of new disease categories and how classificatory schema
are entrenched in institutional and methodological decisions about
relevant criteria and diagnoses. In this paper, I take these insights
from the history and sociology of medicine and blend them with the
rich literature in science and technology studies (STS) that focuses
on the complex interactions among experts, institutions, publics,
and other entities in the emergence of novel disease categories and
spheres of social awareness and surveillance. Taking such a theo-
retical and methodological approach can shed light on the social
processes at play in the emergence of new epidemics, as these ep-
idemics may reflect an intricate social course by which a disease
classification emerges within an interactive relationship among
medical categories, people, institutions, knowledge, and experts
(Hacking, 1999; 2007). The creation of knowledge about epidemi-
ology and the creation of new social practices in conjunction with
this new knowledge may be seen as co-producing (Jasanoff, 2004)
or co-constructing (Taylor, 1995) science and social order. How ex-
perts and publics interact vis-à-vis this new knowledge, and how
scientific knowledge percolates in the public arena, is also of critical
importance in the social life of new diseases or conditions that
impact public health (Epstein, 1996; Wynne, 1996; Yearley, 1999).
Scholars have shown that whenever new population health
imperatives emerge, there are credibility struggles that permeate
science and the public (e.g., Epstein,1996; Hilgartner, 2000). As new
ways of positioning and classifying diseases matter for what we
come to know as “normal” (Bowker & Star, 1999: 326), there are
potential material effects of the ways in which social processes,
social practices, and disease categories interact.
Meanwhile, several social scientists have paid express attention
to the analytical leverage provided by empirical analyses of food
allergies. Nettleton, Woods, Burrows, and Kerr (2009) call for a so-
ciological agenda with reference to food allergies and note that while
the epidemiology concerning food allergies is contested, “what is
certain is that there is growing media, public, scientific, commercial
and policy interest in food allergies and food intolerance” (2009:
648). Due to the debatable, and thus socially contingent, definitions
and categories with regard to food allergies, in addition to the
myriad social responses to them (Nettleton et al., 2009) and lack
of etiologic understanding of them, a high level of uncertainty
surrounds contemporary food allergies, in general, and peanut
allergies in particular (Lauritzen, 2004; Pansare & Kamat, 2009).
One of the only sociological examinations of the rise of peanut
allergies focuses on new regulatory measures in Canadian schools
that have resulted in a type of morality governance invading the
public space of the school system (Rous & Hunt, 2004). More
empirical and comprehensive work is necessary to unpack the so-
cial problem of peanut allergies. In this article, I am interested in
examining how a relatively rare ailment emerged as a conspicuous
public problem and how it sparked such social responses in the first
place. In doing so, I will highlight the evolution in characterization
of the peanut allergy as an “epidemic” and examine the complex
interactions between experts, publics, biomedical categories, and
institutions in the shaping of a population health problem.
In what follows, I focus both on the moment of emergence of the
peanut allergy phenomenon and on the subsequent or co-occurring
social reactions. I show when the peanut allergy phenomenon
emerged in the medical literature and how public, expert, and
institutional reactions to the emergent epidemic expanded the
sphere of social awareness and surveillance of peanut allergy risk. I
will argue that the category of the peanut allergy “epidemic” was
co-constructed and deployed through interactions among various
social worlds. Highlighting the social mobility around this con-
tested epidemic, including the calibration of public discourse and
the reorganization of social space, I consider the discursive and
material effects of the new phenomenon.
Data and methods
Focusing here principally on the period 1980–2010, I report on a
multi-site analysis of print materials, in which I follow the object of
the peanut allergy in salient social worlds (see Clarke, 2005). A key
component to this analysis is to examine the emergence and
meaning of responses to peanut allergies as a medical and public
problem, as revealed by medicine, media, advocates, parents, and
institutions (Nettleton et al., 2009). I began with a targeted litera-
ture search in the PubMed database for medical and clinical journal
articles with keywords of peanut* and anaphyl*, or peanut* and
allerg*, or peanut* and hypersens* for all years through 2010
(n = 1345). I read article titles and abstracts of these results and
then conducted a LexisNexis Academic search for English-language
news with peanut* and allerg* in the headline between 1980 and
2010. I read headlines and lead paragraphs of newspaper reports
(n = 779) and news broadcast transcripts (n = 64). For recent social
discourse on peanut allergies, I analyzed the website of a major
trade association of the peanut industry, the American Peanut
Council, as well as the online materials of arguably the highest
profile food allergy organization in the U.S., the Food Allergy &
Anaphylaxis Network (known as Food Allergy Research & Educa-
tion since 2012).
Employing a specific case of a debate over the implications of
the rise in peanut allergies, I examined the U.S. federal Department
of Transportation’s 2010 proposed rule (DOT, 2010) on “Enhancing
Airline Passenger Protections,” which included an express objective
to increase access to commercial air travel for those passengers
who suffer from peanut allergies. On the site < > I
searched “peanut allergies” limited to the Department of Trans-
portation. This list returned comments from the airline industry
(n = 17), advocacy groups (n = 4), consumer groups (n = 2), the
peanut industry (n = 1), and individual citizens (n = 1013). I
analyzed the small population of comments from groups and in-
dustry; I then sampled the comments from individual citizens by
taking every third comment (n = 337), resulting in analysis of 361
public comments to the DOT proposed airline peanut regulations. I
also reviewed the Massachusetts Department of Early Education
and Care guidelines for food allergies, since Massachusetts was the
first state to publish statewide school guidelines for food allergies.
Finally, I reviewed an NIAID (National Institute of Allergy and In-
fectious Diseases, National Institutes of Health) report on food al-
lergies that was released at the end of this study period (Boyce
et al., 2010).
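The sampling arithmetic described above can be sketched as follows. This is a minimal illustration only: the group and individual counts come from the text, while the comment strings are placeholders standing in for the actual DOT submissions, and the "every third comment" rule is implemented as a simple systematic sample.

```python
# Comment counts reported in the text; the comment texts themselves are
# hypothetical placeholders, not real DOT submissions.
group_comments = {"airline industry": 17, "advocacy groups": 4,
                  "consumer groups": 2, "peanut industry": 1}
individual_comments = [f"comment {i}" for i in range(1, 1014)]  # n = 1013

# Systematic sample: keep every third individual comment.
sampled_individuals = individual_comments[2::3]

n_groups = sum(group_comments.values())   # 24 group/industry comments, all analyzed
n_sampled = len(sampled_individuals)      # 337 sampled individual comments
total_analyzed = n_groups + n_sampled     # 361 comments analyzed in total

print(n_sampled, total_analyzed)  # prints: 337 361
```

As the sketch shows, analyzing the full set of organizational comments alongside a one-in-three systematic sample of the 1013 individual comments yields the 361 analyzed comments reported above.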
For all scripts, I analyze when, if, and how these print materials
framed a narrative around peanut allergy in social life. Because I am
concerned with the interaction of these sets of data, I paid attention
to overlap; for example, I took note when an author of a medical
article was affiliated with an advocacy organization and when a
medical article received attention in numerous media outlets. The
materials were also coded inductively to examine discursive
themes in the arena of peanut allergies as a social and medical
phenomenon (Bryant & Charmaz, 2010; Charmaz, 2006). Generated
themes included discourse about risk, responsibility, and disease
prevalence, which are central to the analysis in this paper.
The emergent epidemic
Prior to 1980, articles that mentioned peanut allergies were
mostly part of a broader discussion of food anaphylaxis. In 1976,
peanuts were listed in a physician’s journal as one of the common
food “offenders” and one that may sometimes cause a remarkably
severe reaction; but, food allergy in general was characterized not
as an imminent problem but rather as a source of “low-grade,
chronic illness” and something that is “seldom a threat to life”
(Speer, 1976: 106). Anaphylactic deaths due to peanuts were
formally reported in the medical literature in the late 1980s, and
one article warned that peanut allergies are “probably the most
common cause of death by food anaphylaxis in the United States”
(Settipane, 1989: 271). As an example of the rising clinical attention
paid to peanut allergies, the British Medical Journal devoted several
pages for letters to the editor regarding the seriousness of peanut
allergies in 1990, and medical journal articles detailed more severe
and anaphylactic reactions to peanuts during this time period. Pe-
diatric Annals warned that “peanut allergy is the most worrisome
food allergy issue confronting the pediatrician today because of all
the potentially allergenic foods, peanut appears to be the most
dangerous” (Schwartz, 1992: 656).
Co-occurring during this time period was the formation of
advocacy organizations as more cases of severe allergic reactions to
foods were reported. Anne Munoz-Furlong started the U.S. advo-
cacy group Food Allergy & Anaphylaxis Network, regarded as one of
the leading food allergy advocacy organizations, in 1991 after her
daughter was diagnosed with egg and milk allergy. Anaphylaxis
Australia was launched in 1993, and The Anaphylaxis Campaign in
the U.K. was founded by David Reading in 1994 following the
deaths of four people, including his daughter, from allergic re-
actions to nuts (Jackson, 2006). Organizations continued to sprout
as food and peanut allergies became an increasing concern in the
developed world. Parent advocacy in New Zealand became official
in 1999 with Allergy New Zealand. Anaphylaxis Canada was foun-
ded in 2001.
The origin of these collectives occurred along with rising public
awareness of food allergies and their potential deadly reactions.
While not a popular topic of media coverage in the 1980s, by the
mid-1990s newspapers were not only reporting fatalities to peanut
allergies but were also reporting “almost deaths” to peanut al-
lergies. One article’s headline read “Nut Allergy Girl’s Terror; Girl
Almost Dies from Peanut Allergy” (Daily Mirror, 1995). The Wall
Street Journal ran a story in 1995 with the headline “Peanut Al-
lergies Have Put Sufferers on Constant Alert” (Chase, 1995). This
amplification of risk continued over the next decade, positioning
the risk of (deadly) peanuts in public spaces as quite pronounced.
During what I designate as the “pre-epidemic” phase of the
peanut allergy problem (i.e., the idea of the peanut allergy problem
as an epidemic had not yet gained salience in medicine, media, or
the public imagination), some researchers and clinicians remained
wary of the growing attention to peanut allergies and the potential
conflation of “intolerance” and “allergy.” Notwithstanding contes-
tation in the medical literature, it is clear that lay people were
attuned to the potential risk of peanut allergies. For example, one
media piece in The Times (U.K.) in 1994 covered the story of a
mother who saved her baby after she “guessed” that the baby was
allergic to peanuts (Milton, 1994). Given the emergence of report-
ing in the U.K. press of peanut allergy deaths and the proliferation
of advocacy groups, it may not be surprising that this mother was
aware of the possibility of the category of the peanut-allergic child.
The contemporaneous occurrences across these social worlds
affected discourse and material reality regarding children’s health
and the risk of peanut allergies.
Moving toward an epidemic
While in the early 1990s medical articles and media stories were
speculating about rising prevalence of peanut allergies, the first
confident statements from medical experts appeared in 1996.
Based on an analysis of 62 patients at one clinic in the U.K., an
article in the British Medical Journal made an epidemiologic claim
that the prevalence of peanut allergies was increasing (Ewan,
1996a). Hugh A. Sampson, a prominent food allergy researcher in
the U.S., concurred in an accompanying editorial that this state-
ment corresponded with American data and that “with this rising
number of individuals at risk for potentially lethal reactions,
aggressive intervention in both prevention and treatment is
essential” (Sampson, 1996: 1050). Sampson called for more infants
to be identified as “at risk” for peanut or nut allergy and for more
pressure to be put on government agencies to regulate food la-
beling, a clear example of discursive interaction among social
worlds. As food labeling laws did indeed materialize in the U.S.
(with the Food Allergen Labeling and Consumer Protection Act of
2004), the presence of peanuts and their associated risk entered the
public sphere to a greater degree, altering how individuals interact
with institutions, spaces, and products.
At the same time, professional discourse in the medical litera-
ture brought into focus the contested meaning of measurements
leading to the proclamation of the peanut allergy as a growing
problem. The British Medical Journal published claims-clashing
correspondence among physicians, in which some practitioners
expressed skepticism regarding the increased prevalence (Jones &
Jones, 1996; Wilson, 1996). One letter noted that the “sup-
posed” evidence given in support of the claim of increasing prev-
alence was faulty, resting on the author’s “impression that the
increased incidence of peanut or nut allergy is real” (Jones & Jones,
1996: 299e300). In a formal reply, the author of the initial report
conceded that heightened public awareness may have played a role
in the rise of clinic referrals during the early 1990s (Ewan, 1996b:
300). Indicating interaction among experts, publics, and disease
categories in the growing awareness of peanut allergies, this rise in
referrals occurred at the same time that the topic was gaining
traction among parent advocates and within the media and medical
literature. Published in the British Medical Journal soon after,
another paper on peanut allergy prevalence within families did not
mention the term “epidemic” in the abstract or body of the paper;
however, the language of “apparent epidemic” was included as a
key message of the paper in a sidebar text box (Hourihane, Dean, &
Warner, 1996: 521). The marginal marking of articles with the term
“epidemic” signals how discursive work may burnish the public
and medical idea of an epidemic.
The epidemic catches on
Amid debate in the medical community, more studies were
being conducted on the prevalence of peanut allergies. In publica-
tions, noticeable agreement emerged over the peanut allergy in-
crease, including recognition of a troubling decrease in the age of
onset in small children. The U.S. was considered to have an
“epidemic problem” of peanut allergies, according to some re-
searchers (Senti, Ballmer-Weber, & Wüthrich, 2000). In 1999, the
Journal of Allergy and Clinical Immunology issued a rapid publication
of a study of self-reported peanut and tree nut allergies in the U.S., in
which the authors estimated the prevalence at 1.1% of the general
population (Sicherer, Munoz-Furlong, Burks, & Sampson, 1999).
Another study out of the U.K. (Grundy, Matthews, Bateman, Dean, &
Arshad, 2002) showed an increase in peanut sensitization over time
and a strong trend, though statistically insignificant, in reported
peanut allergies over time. The allergy researcher Hugh A. Sampson
gestured to the “relative epidemic of peanut allergy” in a New En-
gland Journal of Medicine featured clinical practice article (2002:
1294). In March, 2003, that journal declared that the “prevalence of
peanut allergy is increasing” in an issue that included articles and
editorials on the phenomenon, indicating that legitimate medical
attention was being paid to the subject. Additionally, a 2003 study
by Scott Sicherer and colleagues found an increase in reported al-
lergy to peanut among U.S. children, from 0.4% in 1997 to 0.8% in
2002. While this finding received significant public play in the
media and elsewhere as evidence that peanut allergies recently
doubled among children, the data actually point to the doubling of
self-reported peanut allergies rather than clinical presentations of
true peanut allergies. By 2007, medical articles were using the term
“epidemic” in the title to refer to the rising prevalence of peanut
allergies (de Leon, Rolland, & O’Hehir, 2007; Sicherer & Sampson, 2007).
In short, in the 2000s, a set of academic physicians tended to
believe that “the rise in peanut allergy [had] been well docu-
mented” (Burks, 2008: 1538), thus lending expert knowledge to the
mounting belief of this epidemiologic “fact.” Medical reports of
rising prevalence were based on lay people’s reporting of their
reactions to peanuts; the ability to report identification as a peanut-
allergic person was perhaps based on social knowledge of a prob-
lem that was growing in popularity over this short time period. In a
move that put peanut allergies on the national research map, the
National Institutes of Health released a statement in 2005 that its
new food allergy consortium would focus on peanut allergies
(NIAID, 2005). The consortium would be led by Dr. Hugh A.
Sampson, and one of the main studies would be led by Dr. Scott A.
Sicherer, both prolific publishers on peanut allergies and whose
studies were and continue to be regularly cited in the media.
Feeding and flouting the fear
Amplifying and attenuating risk
Social responses to the rising problem were myriad. Experts in
the medical literature fueled knowledge and raised consciousness
about social situations deemed risky, such as accidental exposures
to peanut butter craft projects in classrooms (Sicherer, Furlong,
DeSimone, & Sampson, 2001), hidden peanut allergens in food
products (Schäppi, Konrad, Imhof, Etter, & Wüthrich, 2001), and the
problem of peanut residue as it relates to any social event like
playing cards (Lepp, Zabel, & Schocker, 2002). While the media did
not use the specific term “epidemic” often, they did use tactics to
signify a rising health problem. Representative is a headline such as
“Peanut Allergies Soar,” citing a study which claimed that the
number of children with peanut allergies tripled in the past decade
(CNN, 2010). During the 2000s, media also clearly amplified
coverage of the escalating risk posed by peanuts through employ-
ing provocative language and imagery; headlines included trigger
words like “lethal” and “scary” in depictions of peanuts and anal-
ogized peanuts to “bombs” in social spaces.
The broadcast media, in particular, used risk amplification
(Hooker, 2010; Pidgeon, Kasperson, & Slovic, 2003) strategies to a
large degree. For example, one ABC World News broadcast started
off a story on peanut allergies in this way: “There was a story that
caught our eye about peanuts, a nutritious snack for some, a po-
tential death sentence for others.” During the segment, the narrator
offered the sensationalist analogy that “living with peanut allergies
is like living in a minefield” and ended with a family’s wish for a day
“when their daughter no longer had to eat in fear” (ABC, 2007). One
prominent example of media hype over the risk of peanut allergies
occurred in November, 2005, when it was widely reported that a
Canadian teenager had died after kissing her boyfriend. She was
allergic to peanuts; he had just eaten a peanut butter snack. Despite
the subsequent autopsy report that revealed no connection be-
tween the young woman’s peanut allergy and her tragic passing,
the “kiss of death” story initially filled all major news outlets. The
discourse reverberating within and beyond media reports of pea-
nut allergies was filled with anxiety and fear, and this coexisted
with the activities of parent groups and the percolation of medical
studies documenting the rise of the problem.
By contrast, the peanut industry carefully worked to attenuate
the risk posed by the peanut allergy epidemic. As just one example,
the American Peanut Council, the trade association for the peanut
industry, has for several years devoted a full web page to allergy,
indicating that the Council works closely with consumers and
other organizations to address the growing concern:
Research indicates that all allergies, not just to food, are
increasing. It is difficult to determine, however, if the increased
reports of food allergies in general and peanut allergy in
particular are due more to actual increases in incidence or
reflect increased awareness among consumers and health pro-
fessionals. It is likely a combination of the two. Self-reporting
studies are the basis for the current high American prevalence
figures and these are inherently biased to over reporting (APC).
This statement invigorates the contested nature of the peanut
allergy phenomenon, pointing to whether the “epidemic” is actu-
ally because of increased prevalence or increased fear and aware-
ness. Certainly, there is a commercial interest present here in
allaying the fears of consumers, and fervent social debate about the
meaning of the epidemic and the risk of peanut allergies in public
spaces has taken place in other institutional settings, such as air-
lines and schools.
For example, the U.S. Department of Transportation (DOT) first
alerted airlines in 1998 to consider peanut-free zones on airplanes.
After pushback from lawmakers from peanut-producing states,
Congress nullified the measure (James, 1999). In 2010, citing
persistent public advocacy and awareness at the national level, the
DOT revisited the peanut problem within its proposed “Enhancing
Airline Passenger Protections” rule. Part of this proposed rule aimed
to address “greater access to air travel for the significant number of
individuals with peanut allergies” (DOT, 2010). On June 8, 2010, the
DOT formally requested public comments on the new rule to either
a) ban peanuts completely; b) ban peanuts when an allergic person
is on board; or, c) require buffer zones for medically-documented
allergic persons. A review of the formal open comments reveals
that two distinct groups emerged in the comment sample: those for
and those against airline accommodation of peanut allergies. Peo-
ple, mostly individual parents, advocating for airline accommoda-
tion described peanuts on planes in dramatic terms, appealing to
the sentiment of a spreading epidemic. Here is one example: “It’s
necessary for our family to travel by airplane sometimes; and it is
not without great fear. Please consider the growing number of
children who suffer from peanut allergies when voting on this ban.”
Other responses downplayed the risk to children and population
prevalence: “I don’t think it’s reasonable for an allergy that affects
so few in the population to result in the complete ban of a common
and popular food from all airplanes at all times.” Using the pro-
posed airline rule as one proxy for public debate about peanut al-
lergies, it is clear that both the notion of the peanut allergy
epidemic and its acute risk were contested, and contestable, topics.
By June 25, 2010, the DOT’s proposed rule was amended to clarify
that no action can be taken without a peer-reviewed scientific
study substantiating the risks of peanut allergies on airplanes,
formally delegitimizing the current population risk of peanut
products in shared spaces.
States in the U.S. also responded to fears about food products in
social spaces. For example, in 2002 Massachusetts became the first
state to enact guidelines for the management of food allergies in
schools through the Commonwealth’s Department of Education
guide for “Managing Life Threatening Allergies in Schools.” The
document advises that in some situations a “peanut-free” table
should be given as an option to students because the peanut is “an
extremely potent allergen and often a hidden ingredient”
(Massachusetts DOE, 2002: 16). Numerous schools and day cares
now have specific policies pertaining to the presence of
peanuts. Rousing “parent wars” (Warner, 2007) have stemmed
from food bans, particularly on peanut products, in schools and day
cares in the last decade. These clashes are not only occurring among
parents; other recent media reports and medical studies have cited
cases of children in schools sabotaging or ridiculing the lunches of
their peanut-allergic peers (Landau, 2010; Lieberman, Weiss,
Furlong, Sicherer, & Sicherer, 2010). The rise of the putative
epidemic, and its corresponding association with public risk, has
initiated both discursive and material changes in social dialogue
and social spaces with regard to the presence of peanuts.
Reining in the contested epidemic
The early medical literature on peanut allergies focused on
anaphylactic and serious reactions to peanuts, not basic aversions.
But as experts sought to define population rates of peanut allergies,
they relied on self-reporting. In 2010, systematic clinical guidelines
for food allergies were organized and distributed to the medical
community (Voelker, 2011), bringing forward the debatable nature
of changes in reporting and criteria of food allergies. There was
much conflation of the terms “intolerance” and “allergy” within
both the medical community and the lay public in reported al-
lergies. RAND Health conducted a systematic review of all food
allergy literature as part of a commissioned report for the National
Institute of Allergy and Infectious Diseases. The report, which never
uses the word “epidemic,” revealed that confusion over the
prevalence and severity of food allergies is compounded by the
problem of “anecdotal self-reporting” (RAND, 2010: 15). It seemed
that in order to properly establish prevalence, the discursive effects
of the peanut allergy phenomenon had to be addressed. The report
found only two U.S. studies of peanut-allergy prevalence; these
were cross-sectional, not longitudinal, studies. The report also
identified two studies of prevalence changes in peanut allergies
over time, neither of which presented conclusive findings (RAND,
2010: 70). One of these studies (Grundy et al., 2002) has been
used repeatedly as evidence of the “epidemic” of peanut allergies
(Sampson, 2002; Sicherer & Sampson, 2007). The NIAID expert
panel thus recommended new clinical guidelines that sought to
objectively confirm “reports” of parents and patients of food al-
lergies because “50% to 90% of presumed [food allergies] are not
allergies” (Boyce et al., 2010: 1111).
Following the social activity around the characterization of the
peanut allergy problem, those with food intolerances and those
with true allergies but with no reactivity were influencing allergy
prevalence numbers and fears of risk in public places, especially
when it came to peanut allergies. Experts here were attempting to
rein in the classification that they suspected had gone awry and
clinically standardize the confirmation of food allergy diagnosis (cf.
Timmermans & Almeling, 2009; Timmermans & Berg, 1997),
potentially downplaying the presence of an “epidemic” and its risk
to individuals and publics.
Discussion and conclusion
In this paper, I take up Nettleton et al.’s (2009) call for increased
scholarly attention to food allergies as a social phenomenon and
examine the intricacies of the emergence of the peanut allergy as a
contested epidemic. I find that the interaction among social worlds
in this arena eased the emergence of the very classification
“epidemic” and precipitated the subsequent social responses to the
problem. Indeed, the problem emerged and spread in a range of
interactive ways and materialized as a salient social problem given
the myriad routes through which it affected small parts of people’s
lives, from airline policies to segregated school lunch tables. The
idea of the peanut allergy as a population health risk and the social
organization of the response to this risk were co-produced
(Jasanoff, 2004), changing the way that people, particularly chil-
dren, interacted with, and were governed within, the familiar social
spaces of schools, airlines, and medicine. Moreover, experts, pub-
lics, and institutions interacted to a great degree, influencing the
evolution of definitions and classifications with regard to a specific
population health problem (cf. Epstein, 1996). The infusion of the
epidemic and risk discourses in various social worlds produced
new ways of interfacing with, and debating about, the condition’s
actual prevalence and attendant risks.
Although this paper’s narrative is presented in chronological
order, it is not in fact linear; rather, all of the activity was co-
occurring and interacting within social spaces and with other so-
cial discourses during this time. While media used risk amplifica-
tion tactics to promote the story of the peanut allergy phenomenon,
medical researchers hedged but still advanced the prevailing
narrative by using phrases like “apparent epidemic.” And while
airlines and schools, nudged by fearful parents, scrambled for new
policies regarding peanuts in public spaces, corporations down-
played popular perceptions of risk. By 2010, the purported
epidemic and its responses seemed out of proportion, as experts
reined in the population definition and prevalence numbers of all
food allergies.
In highlighting the discursive mobility of a medical category, I
show that the peanut allergy phenomenon reveals the significance
of interactions across social sites over time in amplifying risk and
reconfiguring social worlds. For example, while previous scholar-
ship unearths how media magnify health risks and influence public
discourse about social problems (see, e.g., Boero, 2007; Conrad,
1997, 2001; Saguy & Almeling, 2008; Saguy & Gruys, 2010), the
current study demonstrates that media outlets represent but one
location where risk amplification takes place in the social life of
new epidemics. As an analytic angle, discursive mobility treats one
kind of site not in isolation but rather in dynamic conversation with
other significant sites. This interactional lens may help to reveal
why and how discourse about certain disease classifications, and
not others, becomes portable and mutable within and among
various social realms, garnering the attention of medicine, media,
legislation, lay advocacy, and other spheres. Thinking in terms of
discursive mobility – how meanings shift and interact across
multiple sites and over time – as a methodological approach may at
the same time yield new theoretical and comparative insights in
medical sociology about the social purpose of disease categories
that can fruitfully be applied to related phenomena, such as addi-
tional allergies or celiac disease, among other new epidemics.
A pressing and important question for future empirical inquiry
remains: why peanuts? While eight foods (milk, eggs, peanuts, tree
nuts, fish, shellfish, soy, and wheat) account for over 90% of food
allergy reactions, the peanut allergy has arguably received
the largest share of medical and social attention. For example, the
number of seafood allergies in America is almost double that of
peanut allergies (Christakis, 2008). Culturally, peanuts and peanut
butter have long served as a staple snack for kids, especially in the
U.S., and have been served in public spaces (e.g., on
flights, in schools, and at baseball games) on a regular basis. With
newfound awareness of peanuts as a health risk and problem,
particularly for children, the social characteristics of this “normal”
food are changing. A mundane food substance such as the peanut
garnering this much social attention may speak to broader anxieties
about food safety and risk in contemporary culture (Nestle, 2003).
Future studies should also examine whether individuals suffer
from stigma as a result of the social evolution of the peanut allergy
epidemic. In a recent study of families of a child with a peanut al-
lergy, researchers found that parents report being treated as “faddy”
or “neurotic” (Pitchforth et al., 2011). In one study that distinguished
those with food allergies and those with food intolerances,
Nettleton, Woods, Burrows, and Kerr (2010) found that respondents
without medically-defined symptoms experience their condition as
more of a social problem than those with a medically-conferred
diagnosis of allergy. Recent technological advances have been
made in molecular testing for peanut allergies; and, as availability of
diagnostic screening becomes more pervasive, especially for young
children, studies should scrutinize the uncertainty that this type of
screening presents for families (Timmermans & Buchbinder, 2010)
in a world in which the category of peanut-allergic child has gained
social purchase partly via its characterization as a category of
epidemic proportions in a risky world.
Moreover, social scientists could further investigate whether
and how the illness label of peanut-allergic, now infused with so
much discursive and social meaning, has material individual and
social consequences. For instance, the increase in reported peanut
allergies could be the result of what Christakis (2008) calls a
“feedback loop” or what Hacking (1999) calls “biolooping,” in which
the classification feeds back to change not only how individuals
identify with a category but their biological sensitivity to the
particular condition. Christakis (2008) argues that new social pol-
icies of peanut avoidance may have a counterproductive bio-effect,
in which more actual and reported cases of peanut allergies emerge
among children because widespread avoidance leads to greater
allergen sensitization at the population level. Social scientists could
examine more directly how social life interacts with this disease
and vice versa (cf. Timmermans & Haas, 2008).
In many ways, the peanut allergy phenomenon is an exemplar of
how an individual medical problem becomes a public problem. No
doubt a severe and serious individual health crisis when anaphylaxis
occurs, the peanut allergy as a population health problem has
become contested ground. More than a story of panicked parents and
sensationalist media, peanut allergy discourse was co-constructed
by multiple actors and institutions over time, with a range of social
consequences. This discursive mobility serves as an illustrative case
for apprehending the evolution and application of disease categories
and perceptions of health and illness in the social sphere.
Acknowledgments
I wish to thank Peter Conrad for the support, advice, and
invaluable comments he provided throughout this project. I also
received extraordinarily helpful feedback on previous versions of
this manuscript from Janet Vertesi, Michaela DeSoucey, Norah
MacKendrick, Susan Markens, Vanessa Munoz, and Elana Broch.
Finally, I am grateful for the thoughtful questions and insights from
anonymous reviewers and Stefan Timmermans. Partial support for
this research was provided by a grant from the National Institutes of
Health (#5T32HD007163).
References
ABC News. (2007). Peanut allergies: Allergen-free peanuts in the works. In World News with Charles Gibson, July 25.
Ahmed, F. (2008). Cazenove nut-buster Hilary Allen ’11 discusses the job, caz and food. Wellesley, MA: The Wellesley News.
APC (American Peanut Council). (2013). Food allergy FAQs. Retrieved February 27, 2013.
Ben-Shoshan, M., Harrington, D. W., Soller, L., Fragapane, J., Joseph, L., St Pierre, Y., et al. (2010). A population-based study on peanut, tree nut, fish, shellfish, and sesame allergy prevalence in Canada. Journal of Allergy and Clinical Immunology, 125(6), 1327–1335.
Boero, N. (2007). All the news that’s fat to print: the American “obesity epidemic” and the media. Qualitative Sociology, 30, 41–60.
Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. Cambridge, MA: The MIT Press.
Boyce, J. A., et al. (2010). Guidelines for the diagnosis and management of food allergy in the United States: summary of the NIAID-sponsored expert panel report. Journal of Allergy and Clinical Immunology, 126(6), 1105–1118.
Branum, A. M., & Lukacs, S. L. (2008). Food allergy among U.S. children: Trends in prevalence and hospitalizations. NCHS Data Brief, No. 10. Hyattsville, MD: National Center for Health Statistics.
Broussard, M. (2008). Everyone’s gone nuts: the exaggerated threat of food allergies. Harper’s Magazine, 64–65. January.
Bryant, A., & Charmaz, K. (Eds.). (2010). The SAGE handbook of grounded theory. London: SAGE Publications.
Burks, A. W. (2008). Peanut allergy. Lancet, 371, 1538–1546.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London: Sage Publications.
Chase, M. (1995). Peanut allergies have put sufferers on constant alert. Wall Street Journal. March 24.
Christakis, N. A. (2008). This allergies hysteria is just nuts. British Medical Journal, 337(a2880), 1384.
Clarke, A. E. (2005). Situational analysis: Grounded theory after the postmodern turn. Thousand Oaks, CA: Sage Publications.
CNN. (2010). Peanut allergies soar. CNN Newsroom. May 14.
Conrad, P. (1997). Public eyes and private genes: historical frames, news constructions, and social problems. Social Problems, 44(2), 139–154.
Conrad, P. (2001). Genetic optimism: framing genes and mental illness in the news. Culture, Medicine & Psychiatry, 25(2), 225–247.
Daily Mirror. (1995). Nut allergy girl’s terror; Girl almost dies from peanut allergy. November 18.
de Leon, M. P., Rolland, J. M., & O’Hehir, R. E. (2007). The peanut allergy epidemic: allergen molecular characterisation and prospects for specific therapy. Expert Reviews in Molecular Medicine, 9(1), 1–18.
DOT. (2010). Notice of proposed rulemaking: enhancing airline passenger protections. Federal Register, 75(109). June 8.
Epstein, S. (1996). Impure science: AIDS, activism, and the politics of knowledge. Berkeley: University of California Press.
Ewan, P. W. (1996a). Clinical study of peanut and nut allergy in 62 consecutive patients: new features and associations. British Medical Journal, 312(7038).
Ewan, P. W. (1996b). Peanut and nut allergy: author’s reply. British Medical Journal, 313(7052), 300.
Eyal, G., Hart, B., Onculer, E., Oren, N., & Rossi, N. (2010). The autism matrix. Cambridge, UK: Polity Press.
Fraser, H. (2011). The peanut allergy epidemic: What’s causing it and how to stop it. New York, NY: Skyhorse Publishing.
Groopman, J. (2011). The peanut puzzle (p. 26). The New Yorker. February 7.
Grundy, J., Matthews, S., Bateman, B., Dean, T., & Arshad, S. H. (2002). Rising prevalence of allergy to peanut in children: data from 2 sequential cohorts. Journal of Allergy and Clinical Immunology, 110(5), 784–789.
Hacking, I. (1999). The social construction of what? Cambridge, MA: Harvard University Press.
Hacking, I. (2007). Kinds of people: moving targets (British Academy Lecture). Proceedings of the British Academy, 151, 285–318.
Hilgartner, S. (2000). Science on stage: Expert advice as public drama. Stanford: Stanford University Press.
Hooker, C. (2010). Health scares: professional priorities. Health, 14(1), 3–21.
Hourihane, J. O. B., Dean, T. P., & Warner, J. O. (1996). Peanut allergy in relation to heredity, maternal diet, and other atopic diseases: results of a questionnaire survey, skin prick testing, and food challenges. British Medical Journal, 313, 518–521.
Jackson, M. (2006). Allergy: The history of a modern malady. London: Reaktion Books.
James, J. M. (1999). Airline snack foods: tension in the peanut gallery. Journal of Allergy and Clinical Immunology, 104, 25–27.
Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and social order. London: Routledge.
Jones, S., & Jones, I. (1996). Peanut and nut allergy: study was not designed to measure prevalence. British Medical Journal, 313(7052), 299–300.
Kalb, C. (2007). Fear and allergies in the lunchroom. Newsweek. November 5.
Kilanowski, J., Stalter, A. M., & Gottesman, M. M. (2006). Preventing peanut panic. Journal of Pediatric Health Care, 20(1), 61–66.
King, M., & Bearman, P. (2011). Socioeconomic status and the increased prevalence of autism in California. American Sociological Review, 76(2), 320–346.
Landau, E. (2010). Food allergies make kids a target of bullies. September.
Lantz, P. M., & Booth, K. M. (1998). The social construction of the breast cancer epidemic. Social Science & Medicine, 46(7), 907–918.
Lauritzen, S. O. (2004). Lay voices on allergic conditions in children: parents’ narratives and the negotiation of a diagnosis. Social Science & Medicine, 58(7).
Lepp, U., Zabel, P., & Schocker, F. (2002). Playing cards as a carrier for peanut allergens. Allergy, 57(9), 864.
Lieberman, J. A., Weiss, C., Furlong, T. J., Sicherer, M., & Sicherer, S. H. (2010). Bullying among pediatric patients with food allergy. Annals of Allergy, Asthma & Immunology, 105(4), 282–286.
Massachusetts DOE. (2002). Managing life threatening allergies in schools. Retrieved May, 2011.
Milton, C. (1994). Mother saves baby with peanut allergy. The Times. June 14.
Nestle, M. (2003). Safe food: Bacteria, biotechnology, and bioterrorism. Berkeley: University of California Press.
Nettleton, S., Woods, B., Burrows, R., & Kerr, A. (2009). Food allergy and food intolerance: towards a sociological agenda. Health, 13(6), 647–664.
Nettleton, S., Woods, B., Burrows, R., & Kerr, A. (2010). Experiencing food allergy and food intolerance: an analysis of lay accounts. Sociology, 44(2), 289–305.
NIAID. (2005). New food allergy research consortium focuses on peanut allergy. News Release, June 24, 2005.
Pansare, M., & Kamat, D. (2009). Peanut allergies in children – a review. Clinical Pediatrics, 48(7), 709–714.
Paradis, E., Albert, M., Byrne, N., & Kuper, A. (n.d.). An Epidemic of Epidemics? A Systematic History of the Term “Epidemic” in the Medical Literature, 1900–2010. Author’s files.
Pidgeon, N., Kasperson, R. E., & Slovic, P. (Eds.). (2003). The social amplification of risk. Cambridge, UK: Cambridge University Press.
Pitchforth, E., Weaver, S., Willars, J., Wawrzkowicz, E., Luyt, D., & Dixon-Woods, M. (2011). A qualitative study of families of a child with a nut allergy. Chronic Illness, 7(4), 255–266.
RAND. (2010). Prevalence, natural history, diagnosis, and treatment of food allergy: A systematic review of the evidence (Working Paper prepared for the National Institute of Allergy and Infectious Diseases).
Rosenberg, C. (1992). Explaining epidemics and other studies in the history of medicine. Cambridge University Press.
Rosenberg, C. (2009). The art of medicine: managed fear. Lancet, 373, 802–803.
Rous, T., & Hunt, A. (2004). Governing peanuts: the regulation of the social bodies of children and the risks of food allergies. Social Science & Medicine, 58, 825–836.
Saguy, A. C., & Almeling, R. (2008). Fat in the fire? Science, the news media, and the “obesity epidemic”. Sociological Forum, 23(1), 53–83.
Saguy, A. C., & Gruys, K. (2010). Morality and health: news media constructions of overweight and eating disorders. Social Problems, 57(2), 231–250.
Sampson, H. A. (1996). Managing peanut allergy: demands aggressive intervention in prevention and treatment. British Medical Journal, 312(7038), 1050–1051.
Sampson, H. A. (2002). Peanut allergy. The New England Journal of Medicine, 346(17).
Sanghavi, D. (2006). Peanut allergy epidemic may be overstated. The Boston Globe. January 30.
Schäppi, G. F., Konrad, V., Imhof, D., Etter, R., & Wüthrich, B. (2001). Hidden peanut allergens detected in various foods: findings and legal measures. Allergy, 56(12).
Schwartz, R. H. (1992). Allergy, intolerance, and other adverse reactions to foods. Pediatric Annals, 21(10), 654.
Senti, G., Ballmer-Weber, B. K., & Wüthrich, B. (2000). Nuts, seeds and grains from an allergist’s point of view. Schweizerische Medizinische Wochenschrift, 130(47).
Settipane, G. A. (1989). Anaphylactic deaths in asthmatic patients. Allergy Proceedings, 10(4), 271–274.
Sicherer, S. H., Furlong, T. J., DeSimone, J., & Sampson, H. A. (2001). The US peanut and tree nut allergy registry: characteristics of reactions in schools and day care. The Journal of Pediatrics, 138(4), 560–565.
Sicherer, S. H., Munoz-Furlong, A., Burks, A. W., & Sampson, H. A. (1999). Prevalence of peanut and tree nut allergy in the US determined by a random digit dial telephone survey. Journal of Allergy and Clinical Immunology, 103(4), 559–562.
Sicherer, S. H., Munoz-Furlong, A., & Sampson, H. A. (2003). Prevalence of peanut and tree nut allergy in the United States determined by means of a random digit dial telephone survey: a 5-year follow-up study. Journal of Allergy and Clinical Immunology, 112(6), 1203–1207.
Sicherer, S. H., & Sampson, H. A. (2007). Peanut allergy: emerging concepts and approaches for an apparent epidemic. Journal of Allergy and Clinical Immunology, 120(3), 491–503.
Sicherer, S. H., & Sampson, H. A. (2010). Food allergy. Journal of Allergy and Clinical Immunology, 125(2 Suppl 2), S116–S125.
Speer, F. (1976). Food allergy: the 10 common offenders. American Family Physician, 13(2), 106–112.
Taylor, P. (1995). Building on construction: an exploration of heterogeneous constructionism, using an analogy from psychology and a sketch from socioeconomic modeling. Perspectives on Science, 3, 66–98.
Timmermans, S., & Almeling, R. (2009). Objectification, standardization, and commodification in health care: a conceptual readjustment. Social Science & Medicine, 69(1), 21–27.
Timmermans, S., & Berg, M. (1997). Standardization in action: achieving local universality through medical protocols. Social Studies of Science, 27(2), 273–305.
Timmermans, S., & Buchbinder, M. (2010). Patients-in-waiting: Living between sickness and health in the genomics era. Journal of Health and Social Behavior, 51(4), 408–423.
Timmermans, S., & Haas, S. (2008). Towards a sociology of disease. Sociology of Health & Illness, 30(5), 659–676.
Voelker, R. (2011). Experts hope to clear confusion with first guidelines to tackle food allergy. Journal of the American Medical Association, 305(5), 457.
Warner, J. (2007). Mean grown-ups. The New York Times. April 19.
Wilson, J. A. (1996). Peanut and nut allergy: serious adverse reactions to adrenaline are becoming more likely. British Medical Journal, 313(7052), 299.
Wynne, B. (1996). Misunderstood misunderstandings: social identities and public uptake of science. In A. Irwin, & B. Wynne (Eds.), Misunderstanding science? The public reconstruction of science and technology. Cambridge University Press.
Yearley, S. (1999). Computer models and the public’s understanding of science: a case-study analysis. Social Studies of Science, 29, 845–866.
Parsing the peanut panic: The social life of a contested food allergy epidemic

Fast, Feast, and Flesh: The Religious Significance of Food to Medieval Women

Caroline Walker Bynum

Representations, No. 11 (Summer, 1985), pp. 1–25.

Representations is currently published by University of California Press.


Was the Taco Invented in Southern California?

Author(s): Jeffrey M. Pilcher

Source: Gastronomica, Vol. 8, No. 1 (Winter 2008), pp. 26–38


Published by: University of California Press

Taco Bell provides a striking vision of the future transformation
of ethnic and national cuisines into corporate
fast food. This process, dubbed “McDonaldization” by
sociologist George Ritzer, entails technological rationaliza-
tion to standardize food and make it more efficient.1 Or as
company founder Glen Bell explained, “If you wanted a
dozen [tacos]…you were in for a wait. They stuffed them
first, quickly fried them and stuck them together with a
toothpick. I thought they were delicious, but something
had to be done about the method of preparation.”2 That
something was the creation of the “taco shell,” a pre-fried
tortilla that could be quickly stuffed with fillings and served
to waiting customers. Yet there are problems with this
interpretation of Yankee ingenuity transforming a Mexican
peasant tradition. As connoisseurs of global street cuisine
can readily attest, North American fast food is by no means
fast. Street vendors in the least developed of countries can
prepare elaborate dumplings, noodles, sandwiches, or tacos
as quickly as any U.S. chain restaurant can serve a nondescript
hamburger, never mind the time spent waiting in line
at the drive-through window.

Moreover, this contrast between North American
modernity and non-western tradition assumes that the taco
has existed unchanged since time immemorial—a dubious
historical claim. In contemporary Mexico, the soft taco
is simply a fresh maize tortilla wrapped around morsels
of meat or beans. The tortilla has surely been used in this
fashion since it was invented thousands of years ago. By
contrast, the hard taco, a soft taco fried in pork fat, must be
a comparatively recent invention, because Spanish conquis-
tadors brought the pigs. Yet the puzzle remains of why an
everyday food with deep pre-Hispanic roots is called by a
Spanish name, in contrast to other Mexican dishes clearly
derived from indigenous words.3 An examination of diction-
aries, cookbooks, archives, and literary sources reveals that
the word “taco” has a surprisingly recent provenance, enter-
ing regular usage only at the end of the nineteenth century
in Mexico City. As cultural historians have shown, words
literally shape social reality, and this new phenomenon
that the taco signified was not the practice of wrapping
a tortilla around morsels of food but rather the informal
restaurants, called taquerías, where they were consumed.
In another essay I have described how the proletarian taco
shop emerged as a gathering place for migrant workers from
throughout Mexico, who shared their diverse regional spe-
cialties, conveniently wrapped up in tortillas, and thereby
helped to form a national cuisine.4

Here I wish to follow the taco’s travels to the United
States, where Mexican migrants had already begun to create
a distinctive ethnic snack long before Taco Bell entered the
scene. I begin by briefly summarizing the history of this food
in Mexico to emphasize that the taco was itself a product
of modernity rather than some folkloric dish transformed
by corporate formulators. After describing the reinvention
of the taco by migrants in early-twentieth-century Los
Angeles, I examine how tacos gained a following among
mainstream audiences, with particular attention to the
geographical distribution of restaurants. From this evidence
I conclude that the first taco franchises succeeded not
by selling fast food per se but rather by marketing a form
of exoticism that allowed nonethnics to sample Mexican
cuisine without crossing lines of segregation in 1950s
southern California.


Before examining the taco’s migration northward, we must
first locate its origins in Mexico, which is no easy task.
Linguistic evidence of the edible taco is most notable for its
absence. The Spanish word “taco,” like the English “tack,”
is common to most Romance and Germanic languages,
although its origins remain unclear. The first known refer-
ence, from 1607, appeared in French and signified a plug

Right: Paperboys eating tacos for brunch, ca. 1920.
col. sinafo-inah, inventory number 155025. courtesy of the instituto nacional de antropología e historia, méxico.

gastronomica: the journal of food and culture, vol. 8, no. 1, pp. 26–38, issn 1529-3262. © 2008 by the Regents of the University of California. All rights reserved. Please direct all requests for permission to photocopy or reproduce article content through the University of California Press's Rights and Permissions website, http://www.ucpressjournals.com/reprintinfo.asp. doi: 10.1525/gfc.2008.8.1.26.


used to hold the ball of an arquebus in place.5 Eighteenth-
century Spanish dictionaries also defined “taco” as a
ramrod, a billiard cue, a carpenter’s hammer, and a gulp of
wine—a combination recalling the English colloquialism, a
"shot" of liquor. Only in the mid-nineteenth century did the
Spanish Royal Academy expand the meaning to encompass
a snack of food, and the specific Mexican version was not
acknowledged until well into the twentieth century.6 Of
course, European definitions must be used with caution in
referring to a Mexican reality. Nevertheless, taco did not
appear in early Mexican dictionaries either, most notably
Melchor Ocampo’s vernacular volume, published in 1844
under the wry title “Idiotismos Hispano-Mexicanos.”7

Nineteenth-century cookbooks provide no more help
than dictionaries, which may come as no surprise given
the elite preference for Spanish and French cuisine over
indigenous dishes.8 The first and most influential cookbook
published in the nineteenth century, El cocinero mexicano
(The Mexican Chef, 1831), provided a long list of popular
dishes including quesadillas, chalupas, enchiladas, chila-
quiles, and envueltos. The envuelto (Spanish for “wrap,”
appropriately) comes closest to what would now be called
a taco, although it was something of a cross between a taco
and an enchilada, with chile sauce poured over the fried
tortilla. Most elaborate were the envueltos de Nana Rosa
(Granny Rosa’s wraps), stuffed with picadillo (chopped
meat) and garnished with “onion rings, little chiles, olives,
almonds, raisins, pine nuts, and bits of candied fruit.”9

Nineteenth-century costumbrista (local color) literature
provides further detail about Mexico’s rich tradition of
street foods. The first national novel, José Joaquín
Fernández de Lizardi’s El periquillo sarniento (The Mangy
Parrot, 1816), mentioned a lunch cooked by Nana Rosa
"consisting of envueltos, chicken stew, adobo (marinated
meat), and pulque (fermented agave sap) flavored with
prickly pears and pineapple.” In a footnote to the 1842
edition, the editor lovingly evoked the scene. “On the banks
of the irrigation canal on the Paseo de la Viga, there was a
little garden park where Nana Rosa, who lived to be nearly
a hundred, attracted the people of Mexico…charging them
stiffly for the good luncheon spreads she prepared; and even
today, the envueltos de Nana Rosa still figure in the cook-
books.”10 Another formidable gourmet and man of letters,
Guillermo Prieto, recalled plebeian restaurants at mid-
century serving enchiladas, gorditas, and frijoles refritos,
while the renowned geographer Antonio García Cubas
compiled an exhaustive zoology of ambulant vendors.11 Yet
none of these acute observers of Mexican popular culture
recorded a gastronomical usage of taco.

Perhaps the first unequivocal reference to the Mexican
taco appears in Manuel Payno’s 1891 novel, Los bandidos
de Río Frío (The Bandits of Rio Frio). During the festival of
the Virgin of Guadalupe, the indigenous classes danced in
honor of the national saint, while feasting on “chito (fried
goat) with tortillas, drunken salsa, and very good pulque…
and the children skipping, with tacos of tortillas and avo-
cado in their hand.”12 Although this culinary meaning of
taco had no doubt been in common usage by the popular
classes for some time, with Payno’s benediction, it quickly
received official recognition in Feliz Ramos I. Duarte’s 1895
Diccionario de mejicanismos, which also attributed the geo-
graphical origin of the term to Mexico City.13

Unfortunately, these literary sources do not indicate
how this Spanish word, newly used for a generic snack,
became associated in Mexico City with a particular form
of rolled tortilla. Peasant women have long used such torti-
llas as a convenient package to send food to male relatives
working in the field or elsewhere, even if they called it
something other than a taco. Some speculation is necessary
to make the precise connection, but one possibility lies in
a peculiar eighteenth-century usage among the silver min-
ers of Real del Monte, near Pachuca, Hidalgo, to refer to
explosive charges of gunpowder wrapped in paper. While
this particular variant does not seem to have been recorded
in any dictionary, it derives from both the specific usage of
a powder charge for a firearm and from the more general
meaning of plug, since the silver miners prepared the blast
by carving a hole in the rock before inserting the explosive
"taco."14 And with a good hot sauce, it is easy to see the simi-
larity between a chicken taquito and a stick of dynamite.

We cannot know exactly when the miners might have
brought their tacos to Mexico City, but nineteenth-century
civil wars and economic turmoil struck the silver districts
particularly hard, forcing many to migrate in search of work.
One of the first visual records of the taco, a photo from the
early 1920s, shows a woman selling tacos sudados (“sweaty
tacos”) to a group of paperboys (see photograph on p.27).
These treats were made by frying tortillas briefly, stuffing
them with a simple mixture, often just potatoes and salsa,
and wrapping them in a basket to stay warm, hence an
alternative name, tacos de canasta (“tacos from a basket”).
Both chronicler Jesús Flores y Escalante and early archival
sources confirm this connection with miners by pointing
out that tacos sudados originally carried the sobriquet
tacos de minero.15

However appealing this lineage may be, it is by no means
exclusive. The Mexican practice of wrapping bits of food
in tortillas is far too common, and the word "taco" has far
too many meanings in Spanish, and perhaps indigenous
languages as well, to allow for any definitive etymological
origin.16 Heaven knows there are already enough culinary
"just so" stories without adding another. At least this derivation
avoids the usual fallacy of attributing a popular-sector food
to an elite, male personage such as the Earl of Sandwich.

National Tacos

A brief survey of the emergence of the taco in Mexico
shows the contingent nature and constant innovation that
characterized this food. Informal taco shops offered a new
social space for the lower classes at the end of the Porfirio
Díaz dictatorship (1876–1911) and during the subsequent
Revolution of 1910. Although street foods have long been
popular in Mexico City, they acquired particular impor-
tance around the turn of the twentieth century with the
arrival of large numbers of labor migrants attracted by
incipient industrialization. The advent of revolutionary
fighting, in turn, brought soldiers, soldaderas (camp fol-
lowers), and refugees to the capital. As the colonial city
grew into a modern metropolis, overcrowded tenements
with inadequate kitchen facilities became a fact of life for
the masses, who numbered nearly half a million by 1910.17
Taquerías, whether an actual restaurant with kitchen and
tables or a poor woman standing on a street corner with a
basket of tacos, offered a space for newcomers to assuage
their nostalgia for the particular foods of their home towns.
A critical mass of these shops, serving up countless distinct
regional specialties in convenient and inexpensive mouth-
fuls, allowed the Mexican working classes to experience
directly an incipient national cuisine without the inter-
vention of elite cookbook authors. Culinary intellectuals
quickly discovered this trend and sought to appropriate
it for themselves, sanitizing the taco of its plebeian roots.
Through the clash of these rival cooking traditions, the taco
gradually acquired its modern forms—hard and soft, elite
and popular—even as it spread throughout the country and
became a truly national icon.

An archival sample of early taco shops, drawn from
citations issued by municipal inspectors from 1918 to 1920,
indicates the diversity of foods available in proletarian
neighborhoods of Mexico City. Unfortunately, most were
written up as anonymous taco and torta (sandwich) stands,
but a few notations hint at the variety of regional dishes. For
example, at least two shops specialized in pozole, a hominy
stew typical of Guadalajara, Jalisco, and another restaurant
called “La Jalisciense” also presumably offered dishes from
this state, whether pozole, birria (goat stew), or something
else. Plentiful seafood was likewise available to the work-
ing classes thanks to improved railroad links with the coast.
Thus, diners had their choice of several oyster shops, two
fried fish stands (one inexplicably called “El Torito,” the
little bull), a place called “Pescadería Veracruz” (Veracruz
Fish Restaurant) and another “Pesca de Alvarado” after the
Veracruz port town famed for its fresh seafood and sharp-
talking women. We can only wonder what other culinary
delights might have been recorded if the municipal govern-
ment had employed ethnographers instead of tax collectors.18

Elite culinary intellectuals quickly imitated this popular
innovation, but the upscale tacos they produced were care-
fully distinguished from the foods of the street. Newspapers,
with their daily deadlines and competitive demand for
novelty, may have been the first to print recipes for tacos.
Filomeno Mata, editor of El Diario del Hogar (The
Daily of the Home) and pioneer of the women’s section
in Mexico, published a version of tacos de crema (cream
tacos) on June 2, 1908. This recipe began with directions for
making French-style crêpes, perhaps the first use of a now-
common tactic for gentrifying plebeian foods, for example,
crepas de cuitlacoche (crêpes with corn smut doused in
béchamel sauce). The author continued: “stuff them with
pastry cream or some dry conserves and roll like a taco. In
the same fashion make all of the tacos that you like; arrange
on a platter in the form of a pyramid, cover with meringue
and adorn with strawberries, orange blossoms, and violets.”19
Such an elaborate concoction clearly invoked the latest
street foods, but transformed them into socially acceptable
dishes through the use of European ingredients and cook-
ing techniques as well as perishable and expensive fruits
and edible flowers. More recognizable versions of tacos
followed in succeeding decades, but the quality of ingre-
dients helped to maintain social distinctions. By the 1960s,
restaurants in affluent neighborhoods served tacos al carbon
(grilled tacos) using expensive cuts of meat such as bifstek
(beefsteak) and chuletas (pork chops).

The tacos of the working classes likewise continued to
evolve, using whatever castoff bits of meat that cooks could
afford. Cookbook author Josefina Velázquez de León traveled
throughout Mexico at mid-century collecting such humble
dishes as San Luis Potosí tacos made with pork trotters
and potato, while also helping to diffuse such Mexico City
classics as taquitos de crema, tortillas rolled around strips of
green chile, deep fried, and topped with a spoonful of thick
cream and crumbled fresh cheese.20 Meanwhile, Ana María
Hernández’s pioneering home economics manual included
recipes for barbacoa and carnitas (barbecued haunches and
random chopped meat, both from an unspecified animal),
brains (preferably sheep), and maguey worms (now an
expensive delicacy, but considered beyond the pale in
the 1930s). Hernández also explained that tacos should be
fried a deep golden color or left very smooth, depending
on whether they were hard or soft; both were served
with a lettuce salad and salsa.21 Shredded lettuce eventu-
ally became the usual accompaniment of tacos dorados
(“golden” fried), while soft tacos in contemporary Mexico
are generally garnished with chopped onions and cilantro.

Although cookbooks serve as important historical
records, the spread of proletarian taco culture took place
quite independently of these texts as workers traversed
the country, adopting recipes from their new neighbors.
Moreover, these migrants included many foreigners such
as the Lebanese who settled in Puebla in the 1920s. Their
gyros, cooked on a rotating vertical spit and served with pita
bread or its local counterpart, a wheat tortilla, came to be
known as tacos árabes. Mexicans borrowed the technique,
using the more abundant pork, flavored with a slice of
pineapple, and eaten with corn tortillas. This new innova-
tion, called tacos al pastor (shepherd’s tacos), quickly spread
throughout the country and beyond.22 The first decades
of the twentieth century witnessed not only widespread
internal movement, but also the beginnings of large-scale
Mexican migration to the United States, and these travelers
carried their new taste for tacos across the border.

Migrant Tacos

Foods provide an important example of “ethnic and racial
borderlands,” as historian Albert Camarillo has character-
ized the points of contact in pluralistic societies.23 They
police the boundaries between groups through dietary laws
and stereotypes and simultaneously offer an inviting port
of entry for those who wish to taste the unfamiliar. These
culinary borderlands become fertile sites of innovation, as
cooks borrow recipes and ingredients from their neighbors,
transforming them to produce a constant stream of “fusion”
cuisine.24 Yet they also express ethnic and racial conflicts
in a visceral fashion, when the vague threat of an outsider
suddenly assumes the physical force of food poisoning.
These ambivalent culinary encounters have been played
out repeatedly in California as alternating waves of migrants
came west from the United States and north from Mexico.

Newcomers of the nineteenth century were primar-
ily Anglos, drawn by the gold rush and the prospect of
easy land. Negative stereotypes predominated in the first
decades after the U.S. invasion of 1846, as Mexican residents,
called Californios, were seen as lazy, dirty, and devious,

Above: Eating tacos during the festival of the Virgin of
Guadalupe, ca. 1950.
photo by nacho lópez. col. sinafo-inah, inventory number 374177. courtesy of the instituto nacional de antropología e historia, méxico.





unfit for the land that they possessed and that Anglos
coveted. Tamales provided an obvious culinary metaphor,
both potentially unsanitary and dangerously hot to the
taste of New England merchants and settlers. Yet palates
adjusted to chile peppers, and the growing predominance
of Anglos in California helped fears of Mexicans to recede.
Sensationalist charges of food poisoning remained a staple
of mass-market newspapers, but that did not stop all sectors
of society from buying tamales from street vendors.25 These
pushcarts were ethnic borderlands in more ways than one,
as Anglos, African Americans, and even Japanese began sell-
ing tamales alongside Mexicans.26 The pre-Hispanic pastry
eventually blended into the “fantasy heritage” of pastoral
life in the Southwest, which as Carey McWilliams pointed
out, provided a way of incorporating Spanish colonialism
into the national history while subordinating the Mexican
population. Helen Hunt Jackson’s 1884 novel, Ramona,
popularized this romantic vision of Old California, which
Anglos duly reenacted in tamalada picnics and luncheons.27

In the early twentieth century, when the old Californios
had been reduced to an insignificant underclass, a new
influx of Mexicans began to arrive in response to industri-
alization and revolution. These migrants included a few
wealthy Porfirian exiles, but most were ordinary folk in
search of jobs in agriculture, industry, and railroads, which
Los Angeles provided in abundance. Equally important,
the city’s Mexican population of perhaps 100,000 by 1930
offered the social comforts of an established ethnic com-
munity, including familiar foods. In migrating across the
border, tacos seem to have lost some of their lower-class
stigma. At El Veracruzano, a restaurant owned by the
Merino brothers, an order of meat, chicken, or brain tacos
cost fifty cents, which was more expensive than a plate of
chicken with mole or pipián (chile or pumpkin seed sauces),
a rib steak, or a shrimp salad.28 Early descriptions from
English-language cookbooks basically resembled Mexico
City tacos. In 1914, Bertha Haffner-Ginger, a domestic
columnist for the Los Angeles Times, included a recipe for
tacos dorados as an afterthought to some rather impractical
instructions for making tortillas at home.29 In 1929, Ramona’s
Spanish-Mexican Cookery, by home economist Pauline
Wiley Kleeman, juxtaposed plebeian tacos of pork snout,
ears, and jowls with more upscale cream cheese tacos.30

These resemblances notwithstanding, tacos were already
evolving in a distinctive fashion north of the border. Vicki
Ruiz and other historians have uncovered the innovative
strategies that Mexican American women used to mediate
the ethnic borderlands between Mexican family traditions
and U.S. citizenship and consumer culture.31 The tacos these

women produced were soon as distinctive as their identity.
Although it is difficult to generalize about such a diverse
population, an ethnographic study directed in the mid-
1920s by anthropologist Manuel Gamio has left a wealth of
information about the life of migrants. In particular, most
informants felt that they could reproduce a Mexican diet
with foods available in the Southwest. Hasia Diner’s study
of European migrants around the turn of the century sug-
gests that newcomers ate far more abundantly in the United
States than they could in the homeland.32 Large numbers
of Mexicans would doubtless have agreed, especially those
who fled the ravages of revolutionary fighting. Moreover,
many adapted a North American diet or sampled different
ethnic foods—Italian, Chinese, Mexican, or Anglo—as the
mood took them. One assimilated youth, Carlos B. Aguilar,
complained that he got sick whenever he visited his parents
and they cooked Mexican. Yet most seem to have retained
their basic dietary preferences, and many cited the high
cost of living in the United States compared with Mexico.33

Important dietary changes resulted from the late-
nineteenth-century industrial revolution in food, but cul-
tural preferences mediated reactions to mass-produced
foods. We might suppose that abundant meat made pos-
sible by industrial slaughter and refrigerated transport
constituted a significant benefit of migration. In fact,
Mexicans often complained about the poor quality of meat
in the United States and were nostalgic for the taste of
freshly slaughtered meat from a local abattoir. By contrast,
many migrants added eggs and dairy products to their diet,
including fresh milk and local cheddar cheese. Another
surprise for newcomers from central Mexico was the preva-
lence of wheat flour tortillas, a regional variant common
only in the north. Industrial flour production in the United
States, combined with a scarcity of corn mills, made flour
tortillas cheaper than corn, a reversal of prices in Mexico.
One other notable change resulting from industrialization
was the increased availability of produce, whether fresh
iceberg lettuce or canned green chiles. Ramón Fernández
explained that local food “has no taste; the only thing I like
of the Americans are the salads, those they know how to
prepare well.” José Rocha acquired the nickname “panza
verde” (green belly) because he ate so many vegetables.34
Thus, many of the distinctive elements of the Mexican
American taco, including cheddar cheese, shredded lettuce,
flour tortillas, and anonymous ground beef rather than
distinctive pork products, were adaptations to foods avail-
able in the United States.

Other changes may have resulted from the interactions
of regional Mexican cuisines; indeed, the ethnic borderlands
extended to new migrants from the central part of the coun-
try who came in contact with Mexican American traditions
of norteño (northern) origin. Little evidence remains of
such culinary exchange in Los Angeles, but New Mexico
provides a revealing point of comparison. The Spanish first
settled Santa Fe in 1598, and with fewer Anglo interlopers,
the local elite were better able to maintain their wealth and
culture. Hispanic doyenne and home economist Fabiola
Cabeza de Vaca Gilbert published a cookbook in 1949
revealing both the influx of new migrants and the process of
culinary innovation. “Tacos are definitely a Mexican impor-
tation,” she observed, “but the recipe given below is a New
Mexico adaptation.” Her basic formula of meat and pota-
toes had been published a decade earlier in Albuquerque
by Margarita C. de Vaca, but the New Mexico College of
Agriculture graduate suggested a novel twist, which became
almost universal north of the border, pre-frying the tortillas
into the characteristic “U” shape of a taco shell before add-
ing the filling.35

Mexican American inventors likewise began to experi-
ment with industrializing their own foods. In 1949, Joseph
Pompa of Glendale, Arizona, filed an application with the
United States Patent Office explaining that “heretofore tor-
tillas were fried by hand in deep fat and held in position by
hand as they hardened and turned crisp until they assumed
the folded position desired.” Pompa planned to increase the
efficiency of taco production by creating a deep fry basket
with horizontal rows of tortilla holders and a parallel frame
that could be folded down to hold them in place under
the oil. However, two years earlier New York restaurateur
Juvencio Maldonado had proposed a similar “form for fry-
ing tortillas to make fried tacos.” The invention consisted
of vertically stacked tortilla holders in a metal frame that
could be immersed in oil then unfolded to release the fried
tortillas (see illustration on p.33). References cited in the
applications indicate that both drew inspiration from other
ethnic food technology including donut fryers and sau-
sage makers. Maldonado, who received his patent in 1950,
proudly explained that his invention restored “peace after
open mutiny among his own cooks, who dreaded handling
the fried taco orders.”


As the example of Maldonado’s New York restaurant
indicates, tacos soon gained a following beyond the ethnic
community. This snack food presented mid-century diners
a new and seemingly more authentic version of Mexican
food, replacing chili con carne and tamales, whose novelty
and appeal had been eroded by fifty years of canned mer-
chandise. Moreover, the fried taco shell offered newcomers
a relatively easy introduction to that peculiarly Mexican

performance of eating with a tortilla. One guidebook
explained: “The Mexican’s dexterity with the tortilla is as
amusing to watch as the Italian’s business-like disposal of
spaghetti and the chop sticks of the Oriental.”37 Nevertheless,
ethnic restaurateurs seeking to build a mainstream clientele
in the postwar era found themselves on the wrong side of
sharpening lines of segregation.

Segregated Tacos

Dramatic new migrations reconfigured the ethnic and
racial borderlands in mid-twentieth-century Los Angeles.
Midwesterners came in great numbers, attracted by a
combination of industrial jobs, favorable climate, and
the enduring romance of the Spanish fantasy heritage.
Southerners likewise contributed to the city’s massive
growth, whether they were African Americans fleeing Jim
Crow discrimination or “Okies” and “Arkies” thrown off
the land by the Great Depression. Mexican numbers
declined briefly in the 1930s, when officials expelled
unwanted workers, including many who held U.S. citizenship,
but greater numbers returned to find jobs in the wartime
economy, either informally or through the “bracero” guest-
worker program, founded in 1942. Los Angeles acquired
its sprawling suburban geography and combative racial
politics in the postwar era, as whites, blacks, and Mexicans
interacted within these social spaces. Although largely
unknown to migrants from the east, tacos quickly caught
on across the social spectrum. Taquerías might well have
become an open borderland, that is, a space that encour-
aged cross-ethnic proletarian alliances, as they had in
Mexico City. Instead, competition between Mexican
American and nonethnic restaurateurs to market the taco
closed off such opportunities and reinforced emerging
patterns of segregation.

The modern taco took shape at precisely the moment
when San Bernardino, California, restaurateurs Richard
and Maurice McDonald were transforming their carhop
from a teenage hangout into the prototype of the fast-food
industry. The origin of the McDonald’s system for selling
standardized, low-cost food in large quantities is a well-
known story. They started with the menu, eliminating all
but a handful of items, hamburgers, fries, and shakes, which
could be eaten without utensils. Next, they redesigned the
kitchen to produce these items efficiently and installed heat

Right: The original fast-food taco form, a patent issued to New York
restaurateur Juvencio Maldonado in 1950, when Glen Bell was still
flipping hamburgers in San Bernardino, California.
courtesy of united states patent and trademark office


lamps to keep the burgers warm so they could be made
ahead of time. A standardized garnish of catsup, onions,
and two slices of pickle eliminated the inconvenience of
special orders, while the use of disposable paper bags, wrap-
pings, and cups allowed further economies. Lines formed
out the door when the McDonald brothers reopened the
restaurant in 1948, selling hamburgers for just fifteen cents,
or half their former price. Ray Kroc purchased the franchise
rights in 1954, and the fast food empire was born.


Glen Bell, in asserting his claim as the inventor of the
fast food taco, drew explicit connections with this mod-
ern-day creation myth. In 1948, he opened a hamburger
and hotdog stand in a Mexican neighborhood of San
Bernardino, across the tracks from McDonald’s original

restaurant. Rather than compete directly, he sought to
apply their industrial techniques to a new market niche,
the taco stand. Bell devised a taco fryer in 1951—unaware
that Maldonado had beaten him to the patent office—and
modified his chili-dog sauce to use as salsa. He then began
selling tacos and orders of refried beans for nineteen cents
each. Following a divorce, he opened two new restaurants,
called Taco Tia, in the western suburbs of Barstow and
Redlands. He sold those stores to a partner in 1957 and

Above: Taco shops, as indicated by 1950s telephone directories,
expanded from Mexican neighborhoods in central and eastern Los
Angeles into predominantly Anglo and African American suburbs to
the north and south.
illustration by jeffrey pilcher © 2007





went into business with Los Angeles Rams football players
Charley Toogood and Harland Svare, who had sampled
Bell’s tacos near their Redlands training camp. From 1957
to 1961, they opened El Taco restaurants in downtown
Los Angeles and Hollywood, northeast in Pasadena and
Monrovia, and on the bay in San Pedro, Long Beach, and
Wilmington. Finally, with the 1962 opening of the first
Taco Bell in Downey, he established the chain that came
to dominate the Mexican fast food market.39

Although Bell trumpeted his technological innovations,
geographers have recently emphasized the importance
of spatial analysis in examining the development of fast
food.40 To follow the spread of taco restaurants through
Los Angeles County from the 1940s to the 1960s, it may be
helpful to begin with a map of Mexican-owned restaurants
(see illustration on p.34). The dots represent establishments
under Spanish surnames in the 1941 city directory and
correspond to patterns of Mexican residence at this time.
Large numbers of Mexicans lived in the downtown area,
including Sonoratown, as Anglos dubbed the site of the
original founding of Los Angeles in 1781. This district had
been restored about 1930 as the tourist center Olvera Street
and had no fewer than ten Mexican restaurants, including
the city’s most famous, La Golondrina (swallow). Others
were located across the river in Boyle Heights, and a string
of eateries ran down Brooklyn Avenue (now East César E.
Chávez Avenue) in Belvedere. Yet there were surprisingly
few such places given East Los Angeles’s future as the cen-
ter of the second largest Mexican community in the world.
George Sánchez has noted that Mexicans lived in almost
all parts of the city on the eve of the Second World War,
and their restaurants were scattered from West Hollywood
to the largely Jewish West Side and African American
neighborhoods of Southeast Los Angeles.41

Wartime racial tensions such as the Zoot Suit Riots, in
which servicemen clashed with Mexican youth, accelerated
a process of resegregation in Los Angeles.42 Taking advan-
tage of the housing boom of the 1950s, Anglos abandoned
the integrated neighborhoods of downtown and East Los
Angeles for distant suburbs around the periphery of the city,
ranging from the San Fernando Valley and Whittier in the
north to Lakewood and Orange County to the south. The
city’s minority populations grew even more rapidly during
this period, and African American and Mexican enclaves
likewise appeared in suburbs from Pasadena to Long Beach.
Yet despite a 1948 Supreme Court ruling against restric-
tive covenants, zoning laws and homeowners’ associations
helped ensure that these neighborhoods remained sepa-
rate, creating a pattern that historian Philip Ethington has

described as “segregated diversity,” and which persists in
present-day gated communities.43

The concept of segregated diversity can be useful for
analyzing not just housing but other forms of social interac-
tion such as dining. The rise of the taco shop in 1950s Los
Angeles exemplified this process, as nonethnics were able
to satisfy their tastes for Mexican cuisine or sample these
dishes for the first time without venturing into segregated
ethnic communities. A map of restaurants listed in the Los
Angeles Yellow Pages with the word “taco” in their names
shows a very different spatial distribution (indicated by
triangles in the illustration on p.34) from 1940. The strik-
ing absence of such shops in East Los Angeles—just two
out of fifty establishments—during this later period sug-
gests that Mexican restaurateurs avoided the word “taco”
when seeking to attract customers within the ethnic com-
munity. While some small restaurants may not appear for
lack of telephone service, only a few from the sample such
as Tacos de Oro (golden tacos) made proper use of the
Spanish language. El Taquitos, although grammatically
questionable, clearly appealed to Mexican migrants with
a specialty of “tortas estilo la Capital” (Mexico City-style
tortas) and may also have had considerable crossover
business from students at the neighboring campus of the
University of Southern California.

The most significant geographic shift in the two decades
around mid-century came from the expansion of taco
shops north into the white suburbs of Glendale, Pasadena,
and the San Fernando Valley, and south into the African
American community of Watts. Care must be taken, however,
not to exaggerate the degree of racial separation in Los
Angeles during the 1950s when interpreting this map. Watts, in par-
ticular, had a substantial minority of Mexicans living among
African Americans. Nevertheless, using NHGIS technology
to correlate these taco shops with tract-level racial profiles
from the 1960 census reveals a striking degree of segregation.
Of the fifty restaurants, twenty-seven were in majority white
neighborhoods, twelve in majority black neighborhoods,
and eight in majority Mexican neighborhoods. Only a third
of the restaurants operated in even the broadest definition
of a “racial borderland,” a neighborhood in which two or
more groups each constituted a minimum of 20 percent
of the population. These tended to be in the near north
or south, areas such as Lincoln Heights or Watts, or in the
business district downtown, where new taco shops opened
alongside or in place of existing Mexican restaurants, which
had a long tradition of serving a mixed-ethnic clientele.44
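The “racial borderland” criterion used above is simple enough to state as a classification rule. The following sketch is purely illustrative: the function name, group labels, and population shares are invented for demonstration and are not drawn from the NHGIS census data analyzed here.

```python
# A minimal sketch of the tract-classification rule described in the text:
# a tract counts as a "racial borderland" when two or more groups each make
# up at least 20 percent of its population; independently, it can be labeled
# by its majority group, if one exists. Note that a majority-white tract can
# still qualify as a borderland under the 20 percent threshold.

def classify_tract(shares):
    """shares: dict mapping group name -> fraction of tract population."""
    borderland = sum(1 for s in shares.values() if s >= 0.20) >= 2
    majority = next((g for g, s in shares.items() if s > 0.50), None)
    return borderland, majority

# A hypothetical majority-white tract that also qualifies as a borderland:
print(classify_tract({"anglo": 0.55, "black": 0.25, "mexican": 0.20}))
# (True, 'anglo')
```

Because majority status and borderland status are computed independently, the rule reproduces the pattern noted in the text: most tracts have a clear majority group, yet only a minority of them meet even this broad borderland definition.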

Moreover, the process of assimilating tacos should not be
oversimplified to the industrial logic of McDonaldization.

Many taco shops doubtless had Mexicans in the kitchen
preparing the same foods they served at home. Bill’s Taco
House employed African-American hostesses Ann Hilliard,
Ozella Millner, and Willie Mae Stinson to greet their
Watts clientele, even though Hank Silva may have overseen
the kitchen.45 Other culinary border crossings may have
resulted from the local particularities of markets and mar-
riages. Lalo’s Tacos of El Sereno, for example, specialized
in pastrami tacos and burritos, a kosher alternative to the
usual pork carnitas and chorizo. Lalo’s personal story—
whether a Jewish-Mexican mixed marriage or a restaurateur
seeking to expand his clientele in this multiethnic neigh-
borhood—may be lost to history. Nevertheless, an offbeat
Los Angeles institution called the Kosher Burrito, founded
in 1946 by a Jewish man who married a Sonoran woman,
still exists on the corner of First and Main, although under
new management.46

These caveats notwithstanding, the taco clearly
offers a culinary example of segregated diversity.
Anglicized names, appealing to customers who could not
speak Spanish, provide one indication of the taco’s growing
distance from its ethnic origins. Perhaps the first
such restaurant, called simply the Taco House, opened as
early as 1946 on Broadway downtown. This restaurant
inspired a variety of imitators, including Ernie’s Taco
House, “specializing in Mexican food orders to go,” which
had branch outlets in North Hollywood, Glendale, and
on Broadway north of downtown by 1953. Minor variations
on the Taco House theme included Alice’s Taco Terrace,
Bert’s Taco Junction, and Frank’s Taco Inn. More creative
names, Taco Kid and Taco Th’ Town, arose from the
African American neighborhood of Watts.47

The most detailed account of this process of assimilation
can be found in Glen Bell’s authorized biography, Taco
Titan. Having grown up in a family devoted to Ramona,
Bell marketed his restaurants on the fantasy heritage, care-
fully sanitized for Anglo sensibilities. When a consultant
suggested the name, “La Tapatia,” he changed it to a
nonsense Spanish phrase “Taco Tia” (Snack Aunt) in defer-
ence to English-speaking customers. Each new restaurant
celebrated its grand opening with an ethnic amalgam of
Mexican mariachi bands and straw sombreros juxtaposed
against dancing women wearing Spanish castanets. With
the founding of Taco Bell, he elaborated this Mexican
theme park image using faux adobe walls, a mission-style
bell tower, and an elaborate courtyard fountain, later dis-
carded. This strategy worked well for Anglo customers in
Southern California, but when the company expanded into
more established Mexican markets such as El Paso, Texas,

the modified chili-dog sauce had few takers. To satisfy more
knowledgeable customers, the franchisees began shopping
across the border in Ciudad Juárez. Yet such authenticity
has been the exception for the chain, and not until 1997 did
executives begin a major marketing campaign directed at
Hispanic consumers. More often, the company has alien-
ated the ethnic community with advertisements such as the
talking Chihuahua.48

The mass marketing of packaged Mexican foods in
grocery stores followed a similar pattern of nonethnic
corporations dominating the industry. Although Juvencio
Maldonado did a good business selling take-out taco shells
from his restaurant near Times Square in New York, Anglo
firms such as Patio Foods and Old El Paso brought the taco
shell to a national market in the 1960s. Historian Donna
Gabaccia has attributed the predominance of outsiders in
marketing ethnic foods to U.S. corporations’ longstanding
hostility toward ethnics and to their skill at adapting
foods to mainstream tastes.49 Nevertheless, we should not
underestimate the importance of segregation, not only in
determining the success or failure of individual restaurants,
but also in shaping the social hierarchies that place conti-
nental cuisine above Mexican taco shops or Chinese take-out.

Malinche’s Tacos

Taco Bell has evolved so far from the contemporary
Mexican taco that it seems hard to believe that the two
share such a recent common ancestor, but in fact they
developed through a form of parallel evolution, being
invented and reinvented almost simultaneously in Mexico
City and Southern California. The story of the moderniza-
tion of Mexican food is as much about the movement of
people as about technological change. This is not to deny
the importance of McDonaldization, the logic of industrial
efficiency, in shaping the corporate taco. An anonymous
employee recently explained: “My job is I, like, basically
make the tacos! The meat comes in boxes that have bags
inside, and those bags you boil to heat up the meat. That’s
how you make tacos.”50 Nevertheless, as this essay has
shown, ethnic cooks created virtually all aspects of the
Mexican American taco except the central commissary.
Corporate hagiography notwithstanding, Glen Bell did not
make a better or faster taco; he just packaged it for a non-
ethnic clientele.

The history of the Mexican American taco also helps
to explain the seemingly paradoxical reception of Taco
Bell within the ethnic community. Although the corpora-
tion has largely ignored, if not outright offended Mexicans,
there are nevertheless Taco Bells operating profitably on
Whittier Boulevard in the heart of East L.A. This does not
mean condemning the Chicanas who eat there as traitorous
Malinches, after the indigenous woman who facilitated the
Spanish conquest and became Cortés’s mistress. Instead,
we should recognize—and seek to transform—the structures
of modern life that make fast food appealing to harried
working families.

This reality prompts a final conclusion about the need
to place food history within a broad social context. Too
often we culinary historians become so infatuated with elite
texts that we lose sight of the labor performed by anonymous
cooks—slave women, Mexican migrants, or just overworked
housewives—who provide the meals that bind families and
societies together. Food history offers a tremendous oppor-
tunity for uniting the academy and the educated public, but
this entails a responsibility to write meaningful and demo-
cratic narratives that foreground the centrality of kitchen
labor in producing gastronomic delights, including even
the humble Mexican American taco.


Journalist Dave Roos inspired this essay with a probing question that took years
to answer. I am also deeply grateful to Donna Gabaccia, James Garza, Darra
Goldstein, José Luis Juárez, Jodi Larson, Victor Macías, Enrique Ochoa, Carla
Rahn Phillips, Tim Pilcher, Fritz Schwaller, and David Van Riper and his col-
leagues at the Minnesota Population Center.

1. George Ritzer, The McDonaldization of Society (Thousand Oaks, CA: Pine
Forge Press, 1993).

2. Quoted in “History,” Downloaded March 17, 2004.

3. Compare tamales, from the Nahuatl tamalli, pozole (hominy stew) from
pozolli, or mole sauce, from molli.

4. Jeffrey M. Pilcher, “¡Tacos, joven! Cosmopolitismo proletario y la cocina
nacional mexicana,” Dimensión Antropológica (forthcoming).

5. Joan Corominas, Diccionario crítico etimológico castellano e hispánico, 6 vols.
(Madrid: Editorial Gredos, 1991), 5:368.

6. Real Academia Española, Diccionario de Autoridades, edición facsímil, 3
vols. (Madrid: Editorial Gredos, 1964 [1737]), 3: 209–210; Esteban de Terreros y
Pando. Diccionario castellano con las voces de ciencias y artes y sus correspondien-
tes en las tres lenguas Francesa, Latina ó Italiana, 3 vols. (Madrid: Imprenta Vda.
de Ibarra, Hijos y Compañía, 1786–1788), 3: 569–570. The first mention of food
was in the Nuevo diccionario de la lengua castellana (Paris: Libreria de Rosa y
Bouret, 1853), 1119.

7. Melchor Ocampo, “Idiotismos Hispano-Mexicanos,” in Obras completas, ed.
Angel Pola and Aurelio J. Venegas, 3 vols. (Mexico City: F. Vázquez, 1900–1901),
3: 89–231.

8. For a discussion of the literature, see Jeffrey M. Pilcher, ¡Que vivan los tama-
les! Food and the Making of Mexican Identity (Albuquerque: University of New
Mexico Press, 1998), 45–70.

9. El Cocinero Mexicano o colección de los mejores recetas para guisar al estilo
americano y de las más selectas según el metodo de las cocinas Española, Italiana,
Francesa e Inglesa, 3 vols. (Mexico City: Imprenta de Galvan a cargo de Mariano
Arevalo, 1831), 1:178–88, quote from 183. Tacos were likewise absent from late-
nineteenth-century volumes, even La cocinera poblana y el libro de las familias,
2 vols. (Puebla: N. Bassols, 1881), assembled by the intrepid Catalan gourmet,
Narcisso Bassols, who was no stranger to street-corner kitchens.

10. David Frye, Lizardi’s English translator, kindly provided this citation along
with many other insightful suggestions. José Joaquín Fernández e Lizardi, The
Mangy Parrot: The Life and Times of Periquillo Sarniento, Written by Himself for
His Children, trans. David Frye (Indianapolis: Hackett Publishing, 2004), 408–409.

11. Guillermo Prieto, Memorias de mis tiempos, vol. 1 of Obras completas, ed.
Boris Rosen Jélomer (Mexico City: Conaculta, 1992 [1906]), 112, 118; Antonio
García Cubas, El libro de mis recuerdos (Mexico City: Editorial Porrúa, 1986
[1904]), 202, passim.

12. Manuel Payno, Los bandidos de Río Frío, 24th ed. (Mexico City: Editorial
Porrúa, 2004), 31–32. In addition to Payno, Francisco Javier Santamaria’s
authoritative dictionary cites a vague reference in Luis Inclán’s 1865 bandit
novel Astucia. See, Diccionario de mejicanismos, 5th ed. (Mexico City:
Editorial Porrúa, 1992), 993.

13. Feliz Ramos I. Duarte, Diccionario de mejicanismos: Colección de locuciones i
frases viciosas (Mexico City: Imprenta de Eduardo Dublan, 1895), 469.

14. Doris M. Ladd, The Making of a Strike: Mexican Silver Workers’ Struggles in
Real Del Monte, 1766–1775 (Lincoln: University of Nebraska Press, 1988), 10.

15. Jesús Flores y Escalante, Brevísima historia de la comida mexicana (Mexico
City: Asociación Mexicana de Estudios Fonográficos, 1994), 232; Archivo
Histórico del Distrito Federal (hereafter ahdf), Mexico City, vol. 1981, “Vías
públicas,” exp. 1002.

16. For example, Hector Manuel Romero has suggested a Náhuatl derivation
from the word itacate—sort of a doggie bag. See his Vocabulario gastronómico
mexicano (Mexico City: Coordinación General de Abasto y Distribución del
Distrito Federal, 1991), 58.

17. John Lear, Workers, Neighbors, and Citizens: The Revolution in Mexico City
(Lincoln: University of Nebraska Press, 2001), 51–54.

18. ahdf, vol. 2405, “Infracciones taquerías,” exps. 2, 3, 7, 12, 17, 19.

19. El Diario del Hogar, 2 June 1908, p.4.

20. Josefina Velázquez de León, Cocina de San Luis Potosí (Mexico City:
Ediciones Josefina Velázquez de León, 1957), 76–77, idem, Los treinta menus,
4th ed. (Mexico City: Academia Veláquez de León, 1940), 34–35. For a discus-
sion of her career, see Jeffrey M. Pilcher, “Josefina Velázquez de León: Apostle
of the Enchilada,” in The Human Tradition in Mexico, ed. Jeffrey M. Pilcher
(Wilmington, Del.: Scholarly Resources, Inc., 2003), 199–209.

21. Ana María Hernández, Libro social y familiar para la mujer obrera y
campesina mexicana, 4th ed. (Mexico City: Tipografía Moderna, 1938), 66–67.

22. Martha Díaz de Kuri and Lourdes Macluf, De Libano a México: La vida
alrededor de la mesa (Mexico City, 2002), 200.

23. Albert Camarillo, Not White, Not Black: Mexicans and Racial/Ethnic
Borderlands in American Cities (forthcoming).

24. Donna R. Gabaccia, We Are What We Eat: Ethnic Food and the Making of
Americans (Cambridge, MA: Harvard University Press, 1998); Meredith E. Abarca,
“Authentic or Not, It’s Original,” Food and Foodways 12 (2004): 1–25.

25. Victor M. Valle and Rodolfo D. Torres, Latino Metropolis (Minneapolis:
University of Minnesota Press, 2000), 74–76.

26. See Los Angeles Times, 1 December 1899; 4 November 1904; 26 February
1907; 27 April 1910.

27. Carey McWilliams, North From Mexico: The Spanish-Speaking People
of the United States, new ed. (New York: Praeger, 1990); William Deverell,
Whitewashed Adobe: The Rise of Los Angeles and the Remaking of Its Mexican
Past (Berkeley: University of California Press, 2004). For nostalgic tamales, see
Los Angeles Times, 11 April 1902; 13 July 1904; 3 May 1906.

28. Manuel Gamio collection (hereafter mg), Bancroft Library, University of
California, Berkeley, banc film 2322, reel 3, “Preliminary Report on Mexican
Immigration in the United States,” appendix 8, appendix I, 1926.

29. Bertha Haffner-Ginger, California Mexican-Spanish Cook Book: Selected
Mexican and Spanish Recipes (1914), 42-45. Her rather fussy suggestion of sealing
the edges with beaten egg before frying sounds like a Germanic adaptation of a
Mexican original.

30. Her familiarity with these different traditions may owe as much to two decades
editing a women’s column in the Mexico City daily, El Universal, as to the foods
served in Los Angeles. See Pauline Wiley-Kleeman, Ramona’s Spanish-Mexican
Cookery: The First Complete and Authentic Spanish-Mexican Cookbook in
English (Los Angeles: West Coast Pub. Co., 1929), 85–86.

31. Vicki L. Ruiz, From Out of the Shadows: Mexican Women in Twentieth-
Century America (New York: Oxford University Press, 1998), 51–67, 72–75.

32. Hasia Diner, Hungering for America: Italian, Irish, and Jewish Foodways in
the Age of Migration (Cambridge, MA: Harvard University Press, 2001).

33. mg, 2322, reel 1, page 413, “Conrado Martínez,” May 24, 1927; reel 1, page 364,
“Sr. Manuel Lomelí,” May 21, 1927; reel 2, page 395, “Carlos B. Aguilar,” April 8,
1927; reel 2, page 449, “Relato de Luis Aguñaga,” April 6, 1927.

34. mg, 2322, reel 1, page 410, “Vida de Ramón Fernández,” April 28, 1927; reel 2,
page 483, “Vida de Pedro Macías,” April 19, 1927; reel 2, page 437, “Vida del Sr.
José Rocha,” April 8, 1927.

35. Fabiola Cabeza de Vaca Gilbert, The Good Life: New Mexico Traditions and
Food (Santa Fe: Museum of New Mexico Press, 1982 [1949]), 71. Margarita C. de
Vaca, Spanish Foods of the Southwest (Albuquerque: A.B.C. Co., 1937).

36. United States Patent Office, No. 2, 506, 305, Juvencio Maldonado, “Form for
frying tortillas to make fried tacos,” filed July 21, 1947, patented May 2, 1950; No.
2, 570, 374, Joseph P. Pompa, “Machine for frying tortillas,” filed 5 January 1949,
patented 9 October 1951. Quote from “News of Food,” The New York Times, 3
May 1952, 24.

37. Elizabeth Webb Herrick, Curious California Customs (Los Angeles: Pacific
Carbon & Printing Company, 1935), 109.

38. John A. Jakle and Keith A. Sculle, Fast Food: Roadside Restaurants in the
Automobile Age (Baltimore: Johns Hopkins University Press, 1999).

39. Debra Lee Baldwin, Taco Titan: The Glen Bell Story (Arlington, TX: Summit
Publishing Group, 1999), 51–55, 62–65, 76–78.

40. Daniel D. Arreola, Tejano South Texas: A Mexican American Cultural
Province (Austin: University of Texas Press, 2002); David Bell, Consuming
Geographies: We Are Where We Eat (New York: Routledge, 1997).

41. George J. Sánchez, Becoming Mexican American: Ethnicity, Culture and
Identity in Chicano Los Angeles, 1900-1945 (Berkeley: University of California
Press, 1993), 72–76.

42. These riots have been interpreted as an attempt by Mexican youth to defend
their community against incursions by white servicemen. See Eduardo Obregón
Pagán, Murder at the Sleepy Lagoon: Zoot Suits, Race, and Riot in Wartime L.A.
(Chapel Hill: University of North Carolina Press, 2003).

43. Philip J. Ethington, “Segregated Diversity: Race-Ethnicity, Space, and
Political Fragmentation in Los Angeles County, 1940-1994,” available online;
consulted
October 13, 2006. See also Mike Davis, City of Quartz: Excavating the Future in
Los Angeles (London: Verso, 1990), 165–169.

44. I am deeply grateful to David Van Riper of the Minnesota Population
Center for compiling this data from NHGIS files of the 1960 census. Spanish
surnames were used as a proxy for Mexican population, and Anglo population
was calculated by subtracting Spanish surnames from total white population. As
a result, the predominance of taco shops in white neighborhoods may be even
greater, but on the other hand, segregation may be somewhat overstated by using
1960 data. Unfortunately, tract-level Hispanic population is not available for Los
Angeles in 1950.

45. The hostesses featured prominently in advertisements in the African-
American newspaper, the Los Angeles Sentinel, July 7, 14, August 4, 18, 1960.

46. Interview with the manager of Kosher Burrito, Los Angeles, January 31, 2001.

47. Los Angeles Yellow Pages Classified Telephone Directory, (1946), 968, (1952),
1429–41, (1953), 1388-1401, (1954), 1395–1408.

48. Baldwin, Taco Titan, 1–2, 71, 74, 100–105, 132–133, 141–142; “Taco Bell’s
Hispanic Strategy,” Advertising Age 68, no. 42 (October 20, 1997), 12.

49. Gabaccia, We Are What We Eat, 149–174.

50. “Day Job: Taco Bell Employee,” The New Yorker, 24 April 2000, 185.


Cuisine and Identity in
Contemporary Japan


Citation Bestor, Theodore C. and Victoria Lyon Bestor. 2011. Cuisine and
identity in contemporary Japan. Education about Asia 16(3): 13-18.


Food is all around us, yet remarkably elusive for something seemingly so concrete and mundane. People grow it, buy it, prepare it, eat it, savor it (or not) every day, everywhere, often without much thought about food’s significance in larger social, cultural, or historical schemes. Food is profoundly
embedded in these frameworks, and shoku bunka (food culture) is a key concept for understanding the
day-to-day foodways of Japanese society. Today in Japan, foodstuffs and cuisine attract constant attention.
Culinary choices and their connections to lifestyle and identity are trumpeted in advertising, in the mass
media, and in restaurants and supermarkets across the country.

Culinary choices, lifestyles, and “distinction”—the linkages between aesthetic taste and economic class
standing, between social power and cultural prestige—are tightly packaged in contemporary Japan. Japan’s
modern relationship with itself and the world—the juxtaposition of Japan’s self-constructed sense of cul-
tural uniqueness and its simultaneous, almost constant incorporation and innovation of things foreign—
is clearly visible through food and foodways.

Imagining Japanese Cuisine
Cuisine is a product of cultural imagination and is thought to include the range of practices and preferences
that are shared broadly across the members of a society as they prepare and partake of food. This culinary
imagination reflects, therefore, a loose agreement on a common and sustained template of cuisine as some-
thing definable and distinctive, something with more-or-less known qualities
and boundaries.

In the case of Japan, this self-defined (or self-appreciated) template in-
cludes a key element: fresh or raw ingredients. Most cultures frame their ideas
about food culture around concepts such as the bounty of the land and the
changing seasons, the natural world. Food is nature transformed by culture, and
culture is a powerful force with which to fasten symbolism and meaning to the
mundane facts of life, such as cooking and eating. In the following sections, we
sketch some of the most significant aspects of cultural symbolism, ideas about
tradition, and other aspects of Japanese food culture, belief, and food lore seen—
as they so often are—as stable and relatively unchanging.

One of the most central of culinary things in Japan, in both practical
and symbolic terms, is rice. Rice cultivation is a hallmark of East and South-
east Asian agriculture, where seasonal monsoons provide the water neces-
sary for elaborate irrigation systems. Japanese civilization developed around
rice cultivation, made indigenous through the myths and rituals of Shinto re-
ligion that are closely tied to rice (as well as to the gods who gave mytholog-
ical rise to the Japanese imperial line). Many Shinto rituals are linked to the
calendar of rice production, and even the present-day emperor annually trans-
plants rice seedlings in a paddy inside the Imperial Palace at the center of Tokyo.

The emperor celebrates not only the ritual event of rice planting but also the flow of the agricultural
year: cuisine is constructed across calendars that reflect many dimensions, including concepts of season-
ality. Even in a globalized food system that delivers products from around the world without much regard
for month of the year, Japanese food culture places great emphasis on seasons. Seasonality defines varieties
of seafood, not just by availability and quality, but also by their essential characteristics. That is, fish of the
same species may be known by different names depending on the time of year they are caught, their size,
their maturity, or the location where they are taken (all of which may be closely interrelated).

Food, Culture, and Asia

Cuisine and Identity
in Contemporary Japan

By Theodore C. Bestor and Victoria Lyon Bestor

Editor’s Note: Portions of this article appeared in Victoria Bestor and Theodore C. Bestor,
with Akiko Yamagata, eds., Routledge Handbook of Japanese Culture and Society
(New York: Routledge Publishers, 2011).

May 2011, Heisei Emperor planting rice in the rice paddies of the Imperial Palace.


Foodways are the traditions, practices, beliefs, and rit-
uals surrounding food in a particular social or cultural
group. These are often implicit understandings of food
preferences; modes of preparation and consumption;
tastes and flavors; seasonal or celebratory dishes; food
etiquette and rituals; and other customary ideas about
foodstuffs, meals, menus, and so forth.

Education About Asia, Volume 16, Number 3, Winter 2011

This degree of concern over hyper-seasonality is most pro-
nounced in top-end restaurants and among professional chefs,
food critics, and travel writers. Culinary seasonality is comple-
mented by many other traditional contexts of Japanese culture
that mark divisions of the year through such things as well-
known poetic allusions, customary greetings, or color combina-
tions and patterns (of kimono, for example) that are appropriate
to and emblematic of the rapidly passing seasons.

Closely related to notions of seasons are so-called hat-
sumono (first things), the first products of a season: the first
bonito; the first apples from Aomori; the first tuna of the year to
be auctioned at Tsukiji.1 Stores, restaurant menus, and the mass
media trumpet the arrival of the “first” as a harbinger of the sea-
son. For true connoisseurs of Japanese cuisine, the first products
(of whatever kind) may be awaited with as much excitement as
wine-lovers (in Tokyo as much as in Paris) muster for the arrival
of a new vintage from an exalted vintner.

Culinary calendars also mark events, holidays, and festiv-
ities that occasion particular kinds of foods. The celebration of
the New Year has many food associations, ranging from the sim-
ple act of eating especially long noodles on New Year’s Eve to en-
sure long life and prosperity to the extremely elaborate banquets for the holiday itself. Many osechi (New
Year’s foods) have auspicious meanings based on color combinations (lobsters and crabs, for example, com-
bine celebratory red and white) or double meanings (the word “tai” for sea bream also means “congratula-
tions”). Osechi is served in elaborate sets of stacking and nesting lacquered boxes and trays, and the food
is prepared in advance, the folklore being that housewives should be spared from cooking during the hol-
iday. In the past, cooking fires were supposed to be extinguished during the first days of the New Year.

Other times of the year also have food associations. In mid to late summer, for example, food lore in-
structs one to eat unagi (broiled eel) to fortify the body against the heat on very specific dates determined
by traditional almanacs. Other celebratory dishes are not tied to specific holidays or seasons but are con-
sumed throughout the year, such as the auspiciously red-and-white combination of sekihan (red beans and
sticky rice) that is common at festivals, family celebrations, weddings, and other occasions. Twice each year
there are seasons for extensive gift giving—ochūgen in July and oseibo in December—which prominently
include many fancy and ordinary foodstuffs, heavily promoted by manufacturers, department stores, su-
permarkets, and specialty food purveyors.

Domesticating Foreign Cuisines
The culinary imagination of a unified and stable Japanese cuisine does not exist in a vacuum but is formed
in contrast to the many things Japanese eat that are not considered “Japanese.” Of course, much of the tra-
ditional diet of the country fundamentally resembles that of environmentally similar regions of Asia that
were part of the extended zone of Chinese civilization. Many of the central foodstuffs of Japanese cuisine
(e.g., rice, soybeans, tea, sesame oil); methods of cultivation or preparation (irrigating rice paddies, fer-
menting soy beans into soy sauce, making tofu or noodles, etc.); and styles of utensils, cooking techniques,
and flavorings come from the Asian mainland and mark significant parallels with the various national
cuisines of East and Southeast Asia.

The identification of dishes as part of a distinctive “traditional” Japanese cuisine does not imply his-
torical stasis. Like all other aspects of “tradition,” food culture constantly evolves. The exposure of Japan-
ese foodways to foreign, and in particular Western, influences that fundamentally changed the Japanese diet
took place in several distinct historical periods since the medieval period. In the sixteenth century, Japan
had its first contact with Western sea powers, primarily the Portuguese and the Dutch. Drawing distinctions
between Japanese cuisine and other foodways undoubtedly accelerated as Western contact brought not
only exposure to Europe but also to the many other regions of the world already enmeshed in European
trading empires, including South and Southeast Asia (with an abundance of spices unfamiliar to Japan).

The so-called “Columbian Exchange”—the transfers of peoples, plants, animals, and diseases in both
directions between the Old and New Worlds following the voyages of Columbus in 1492—rapidly affected
East Asia. Foodstuffs from the New World that made their way to Japan during the sixteenth century in-
cluded sweet potatoes, potatoes, and capsicum (red) peppers (and a non-food item: tobacco). Japanese
foodways were also affected by the cooking of the European explorers, missionaries, and traders following
the 1549 arrival of the Jesuit priest Francis Xavier in Nagasaki. Tempura is generally regarded as a culinary
innovation stimulated by Portuguese influence in Kyūshū, and many new foodstuffs arrived, either directly

Tuna auction at Tsukiji fish market in Tokyo.


from European contact or indirectly from other Southeast and East Asian countries. European words (or
adaptations of them) entered the Japanese language as well: kōhī (coffee), tempura (from a Portuguese
term), piripiri (hot, spicy, from a Swahili term for red peppers brought from the New World to Iberia, then
to East Africa and on to East Asia by Portuguese traders), kasutera (an Iberian pound cake), and pan (bread,
from Portugal).

From the seventeenth through the nineteenth centuries, Japan maintained self-imposed isolation. The
culinary influences occasioned by sixteenth-century contacts with the Portuguese and the Dutch were
largely confined to the new crops (including New World crops) that took root in Japan; the level of actual
trade between Japan and the rest of the world was modest, and foodstuffs played little part in it.

During the two and a half centuries of Tokugawa rule, Japan was at peace. Despite periodic massive
famines, agriculture was generally productive, and many innovations expanded the range of rice cultiva-
tion. Despite the political unification of the country, Tokugawa policy restricted travel in many ways, and
because contacts among different regions were limited, local foodways and specialties were strongly main-
tained. Official travel to and from Edo (as Tokyo was known until the 1870s), however, was mandatory for
local lords and higher-ranking samurai from each of the fiefs, so Edo became a melting pot in which
metropolitan tastes and flavors—in literature, fashion, art, politics, and cuisine—were created and dissem-
inated to the provinces with the comings and goings of the elite. Guidebooks provided detailed descriptions
(and rankings) of the culinary delights of the capital, and famous restaurants were often depicted in wood-
block prints (the souvenir postcard of the day).

The peace and prosperity of the period also enabled the development of regional food processing in-
dustries that had extended geographic reach. Sake brewers, the producers of soy sauce, and the manufac-
turers of rice vinegar, for example, in some cases became regional rather than merely local. A number of
prominent food companies active today can trace their origins to the proto-industrial production of the
Tokugawa period; Kikkoman, the soy sauce company, dates to several families active in the trade near Edo
in the mid-seventeenth century; Mizkan, the producer of rice vinegar, began in 1804 in a port city near
Nagoya, astride the trade routes linking Edo and Osaka. The dietary needs of large cities like Edo and
Osaka were met by local agricultural production, as well as large-scale interregional trade in basic foodstuffs
such as the Osaka-to-Edo rice trade and the Hokkaidō-to-Osaka fish trade.

Following the “opening” of Japan by the American naval officer Commodore Matthew Perry in
1853–54, Japan experienced an accelerating flood of foreign influences across every aspect of life, includ-
ing the culinary. The Meiji period (1868–1912) saw a flood of imported products, and the upper and upper-
middle classes especially experimented with new tastes and menus, both at home and in restaurants. In
the 1870s, there was a boom in consumption of beef, emulating European tastes for red meat (officially
long-forbidden by Buddhist proscriptions) in the form of a traditional kind of dish simmered with soy
sauce—a dish now internationally known as sukiyaki. Particularly in Tokyo and the treaty ports where
Westerners were allowed to settle (such as Yokohama and Kobe), restaurants provided introductions to
European cuisines for urban sophisticates. Wax models of food were first displayed to visually explain
foreign dishes to diners unfamiliar with them; such models (in plastic) remain very common in restaurant
displays today.

[Woodblock print by Utagawa Hiroshige (1797–1858), "Yaozen Restaurant at Sanya," from Grand Series of
Famous Tea Houses of Edo (ca. 1839–1842). Japan, Edo period. Source: Honolulu Academy of Arts website.]

EDUCATION ABOUT ASIA, Volume 16, Number 3, Winter 2011

The food purveyor Meidiya, established in 1885 (named for the Meiji era, retaining an archaic English
transliteration), is an example of the companies that developed to meet the demand for imported Western
foodstuffs. Businesses and restaurants helped promote the boom in Western-style consumption for
Meiji-era elites and led the way for other foreign and domestic specialty stores. The early twentieth century
saw the development of department stores as centerpieces of urban modernity. By the 1920s, department
stores in large urban centers had assumed premier roles in defining middle and upper-middle class con-
sumption, including foodstuffs. Department stores developed food floors (generally the basement) that
featured the finest products, both domestic and foreign. Such department store food halls continue to be
arbiters of high-level cuisine.

In the twentieth century, Japan’s extensive colonial empire throughout East and Southeast Asia also in-
fluenced the development of Japanese domestic food life. Dishes and tastes from elsewhere in Asia became
standard components of Japanese consumption (e.g., Chinese restaurants, the introduction of spicy kim-
chee from Korea, or the wide popularity of ramen noodle soup from north China).

An extremely important aspect of Japan’s culinary transformation was the impact of the
Japanese military on dietary norms, as food anthropologist Katarzyna Cwiertka has argued.2
From the creation of a mass conscript army in 1873 through Japan’s defeat in 1945, the Japan-
ese military was one of the major institutions shaping national life. With a huge conscript base,
the military faced the challenge of creating a nutritionally solid military diet that had to be rel-
atively easy to prepare in standardized ways for large numbers of people. Since the promotion
of national unity was also of great importance, aspects of Japanese cuisine that traditionally re-
flected sharp regional or class differences needed to be avoided. Perhaps surprisingly, mili-
tary nutritionists adopted many dishes from European countries to become standards in the
military diet, including curry rice, pasta dishes, soups, and stews. The British naval diet, for
example, with its relatively large portions of beef, was seen as a model for building the stam-
ina of Japanese soldiers and sailors.

After Japan’s defeat, the Allied (primarily American) Occupation of Japan launched an-
other wave of culinary innovation and adaptation. Some Japanese foods were adapted to the
tastes of the occupiers. (Large amounts of meat cooked on a steel griddle became the now-stan-
dard dish teppanyaki.) American forces brought with them a diet rich in dairy products, meat,
and animal fats of all kinds; this had a major impact on Japanese food consumption and tastes
during the postwar period.

The war and its immediate aftermath brought near starvation to millions and permanently
severed Japan from its previous colonial sources of food supply. The postwar economic recov-
ery of the 1950s and 1960s—the so-called economic miracle—focused primarily on the devel-
opment of heavy industry and export industries but also created entirely new lifestyles for many
Japanese. From the 1950s onward, the urban population exploded, and rural areas (and their
foodways) declined; smaller nuclear families became the norm, and shopping, cooking, and eat-
ing habits changed. Large-scale food manufacturers took over production in many segments of

the food industry, and local or regional producers suffered. Increasing proportions of the food consumed in
Japan were imports, and Western foodstuffs became commonplace in many urban diets—toasted bread with
mixed green salad and coffee for breakfast, curry rice for lunch, perhaps spaghetti for dinner.

In the 1970s, Japan emerged from its high-speed growth years as a full-fledged economic powerhouse,
with a prosperous urban middle class that looked to Europe and the US for models of consumption. The
1970s and 1980s were a period of hyper-consumption, and Western-inflected food fads flourished at both the
high end of fine European imports and on the mass level. (The first McDonald’s in Japan opened in the Ginza
district of Tokyo in 1971; it was an instant success.) The last quarter of the twentieth century and the first
decade of the twenty-first have seen a commercial transformation of the world of food in many ways. Japan-
ese imports of food from overseas have continued to soar. Vast empires of fast food chains saturate most urban
areas. Home dining and food preparation account for an ever-smaller proportion of food-related expenditures.
Supermarkets and konbini (convenience stores) have driven out of business many of the small specialized
local food stores that previously dotted the urban landscape, and the stock-in-trade of konbini are highly
processed prepared foods that are themselves transforming the nutritional standards of the Japanese diet.

[Photo: One of over 3,500 McDonald’s restaurants in Japan.]

At the same time, the level of interest in food at the high end continues to sustain a gourmet boom fo-
cused on the finest ingredients and styles of preparation, whether domestic or foreign. On the domestic
front, what we have called the “gentrification of taste” has resulted in a revival of regional dishes, local pro-
ducers, or styles of preparation that had been fading away as old fashioned. These are now touted for their
authenticity and often lauded for sustainability, local roots, and other “slow food” characteristics.

One can look at changes in the Japanese diet since the nineteenth century as incremental innovations
and stylistic shifts along a chronological sequence. Another way to think about Japanese consumers’ access to
a vast array of both domestic and cosmopolitan foods is as a consequence of the transformations in the
Japanese diet brought about by “the industrialization of food.” As part of a new global food system, this
entails a macroscopic and multifaceted set of transformations in which the entire character of a society’s
sustenance—selections of food resources, methods of production and processing, techniques of distribution,
daily rhythms of eating, and the creation of entirely new foodstuffs—is adapted to and shaped by industrial,
capital-intensive production.

Clearly, from the late nineteenth century onward (and in some cases from much earlier), Japanese
foodways have been increasingly industrialized. The nineteenth- and twentieth-century transformation of
Japanese foodstuffs reflected the introduction of new foods from the West and its colonial empires, new
techniques and technologies for processing food, and new modes of cooking and dining. Industrialized food
production promoted both the standardization of foodstuffs and the mass marketing of products such as
canned fish and meat products, vegetables, and fruits that became common commodities in the early
decades of the twentieth century.

Typically, industrialization of food changes the repertory of goods available to consumers, increasingly
substituting highly standardized, processed, and manufactured foodstuffs for widely varied, locally
produced, raw, and semi-processed ones. This affects consumers, of course, but the transformations are
fundamentally propelled by changes in the economic, political, and social institutions that produce, process,
and distribute foodstuffs.

Industrialization of food can also define or redefine what is traditional.

Many dishes and delicacies now widely regarded as hallmarks of Japanese cuisine are of relatively recent
introduction or invention. For example, even the basic form of nigiri-zushi, a thin slice of fish atop a
compact oblong block of vinegared rice—
the style characteristic of Tokyo’s cuisine and now the world’s de facto sushi standard—was an innovation of
the mid-nineteenth century. Many of its contemporary features, including exquisitely fresh fish rather than
various kinds of pickled or salted seafood, only became possible in the twentieth century with the advent of
mechanical refrigeration and ice manufacturing.3

The Branding of “Cool Japan”
The prestige associated with being relatively omnivorous and attuned to connoisseurship exists in Japan as
it does in many other prosperous middle class societies. To discern and savor many styles of Japanese cui-
sine, as well as to appreciate the finer points of high-status foreign foods, is to secure a claim as a sophisti-
cated Japanese and a cosmopolitan “citizen-of-the-globe.” This juxtaposition of the local and the global,
the domestic and the transnational, has been an important aspect of larger Japanese identity politics since
the high-speed economic growth era of the 1960s.

The culinary dimensions of social distinction are also, importantly, products of the vast media attention
paid to food in all its forms, which has exploded over the past generation. Commentators on cultural
production often use the term “culture industry” as shorthand to refer to the complex influences and
connections that, in industrial capitalist societies, link creators, the content or meanings of the goods or
services they create, and the tastes and preferences of those who receive or consume them (creation and
production; content and distribution; reception and consumption). Many argue that industry is the prime
mover; others argue that the linkages are more fluid and multidirectional and that consumer tastes and
preferences (and many social and cultural trends external to industry) shape industry as much as vice versa.
Typically, those who write about culture industries focus on mass popular culture and its many media—
music, film, television, magazines, comics, digital games, and fashion—but it is not far-fetched to think of
food culture in similar terms: as an extremely complex system of culinary production, a vast marketing
and distribution system, selling items that take shape in many symbolic and social ways, promoted by
celebrity chefs, supported by extensive advertising and the doting coverage of mass communication outlets
devoted to culinary matters, and presented to consumers whose choices are shaped both by media coverage
and by individual impulses for self-fulfillment and social standing—distinction—expressed in culinary
fashion. Of course, some of this “food culture industry” is quite specifically media-based—the entertainment
value of food as expressed in movies like Itami Jūzō’s noodle farce, Tampopo; in the televised fantasy food
competitions of Iron Chef; or in the manga series Oishinbo by Kariya Tetsu and Hanasaki Akira, about an
investigative reporter engaged in a never-ending quest for culinary authenticity and connoisseurship.

Bentō: Portable Meals

One of the most common ways that Japanese consume food outside the home is bentō (sometimes more
formally called o-bentō, the “o” being an honorific). Bentō are box lunches, carefully packed into
compartmentalized containers, with small portions of cooked meat, poultry, or fish; one or two vegetable
dishes; and always a good serving of boiled rice.

In the past, bentō were served in elaborate lacquered boxes; today the lacquer is reserved for special
occasions. Workers and students carry homemade bentō in plastic or metal containers on their daily
commutes or buy bentō in disposable packages from retailers in stations, at specialized take-out shops, or
at the ubiquitous konbini.

Bentō are available everywhere and are the midday meal for millions of students and workers. Travelers
also pick them up as a treat for a journey and may bring bentō home as gifts containing famous local
culinary specialties.

Homemakers often lavish enormous amounts of attention on the ingredients and cooking techniques, as
well as on the appearance of the bentō itself, which reflects well on their skills. Some go to extraordinary
lengths to craft bentō in the image of cartoon characters and other fanciful scenes (examples of which are
easily found on the Internet).

Although some schools allow bentō, many public schools instead serve hot lunches in the classroom. This
assures that all students eat equally nutritious meals, social status is not highlighted in the richness or
meagerness of food brought from home, and students shoulder responsibility by serving food themselves
and cleaning up afterward. —Theodore C. Bestor and Victoria Lyon Bestor

[Photos: Bentō deluxe; bentō for children.]

In broader terms, Japanese foodways have become a focus of contemporary discussions among jour-
nalists, business leaders, diplomats, and other government officials about “Cool Japan” or “Japan’s Gross Na-
tional Cool,” a term coined by the American journalist Douglas McGray.4 “Cool Japan” is the product of
Japan’s so-called “content industries”—such as anime, manga, video games, fashion, music, Hello Kitty, and,
yes, cuisine—that have generated highly popular (and highly profitable) markets for things Japanese out-
side of Japan, all the more noteworthy during the past couple of decades in which the Japanese economy
as a whole has only stuttered along. The “content industries” (or culture industries) are the beacons of “Cool
Japan” and are officially promoted by Japan’s Ministry of Foreign Affairs. Domestically, Japanese cuisine is
very well branded as a cultural product—an icon of national cultural identity. Internationally, for both for-
eign observers and food tourists, Japanese cuisine is part of the enticement of Japan’s “soft power” (the abil-
ity to project attractive cultural influence without international coercion). Clearly “soft power” works! A
2008 Japanese government survey revealed that the leading reason foreign tourists gave for visiting Japan
(64.5 percent) was “to eat Japanese cuisine.” 

1. Theodore C. Bestor, Tsukiji: The Fish Market at the Center of the World (Berkeley: University of California Press, 2004).
2. Katarzyna J. Cwiertka, Modern Japanese Cuisine: Food, Power, and National Identity (London: Reaktion Books; Chicago: University of Chicago Press, 2007).
3. Theodore C. Bestor, “Kaiten-zushi and Konbini: Japanese Food in the Age of Mechanical Reproduction,” in Fast Food/Slow Food: The Cultural Economy of the Global Food System, ed. Richard Wilk (Lanham, MD: AltaMira Press, 2006), 131–144.
4. Douglas McGray, “Japan’s Gross National Cool,” Foreign Policy 130 (2002): 44–54.


VICTORIA LYON BESTOR directs the North American Coordinating Council on Japanese Library Resources (NCC) which
creates access services to Japanese print and digital resources and advocates for the needs of library users, especially those
at smaller institutions. She is an Associate of Harvard’s Reischauer Institute of Japanese Studies, co-editor of the
Routledge Handbook of Japanese Culture and Society, Doing Fieldwork in Japan, and a forthcoming series of modules for the
Visualizing Cultures online project looking at twentieth-century transformations of the city of Tokyo. She has written
extensively on Rockefeller philanthropy and worked in K-12 outreach at Columbia University, where she was also Asso-
ciate Director of the Donald Keene Center of Japanese Culture.

THEODORE C. BESTOR is Reischauer Institute Professor of Social Anthropology at Harvard University, chairs Harvard’s
anthropology department, and is Vice President (President-elect) of the Association for Asian Studies. His principal research
focuses on Tokyo and the interpersonal networks that underpin neighborhoods and markets. He is the author of Tsukiji:
The Fish Market at the Center of the World and Neighborhood Tokyo. He is currently researching the impact of Japan’s March 2011
disaster on food production, consumption, and distribution, and he is one of the founders of Harvard’s Digital Archives of
Japan’s 2011 Disasters.


Proceedings of the Nutrition Society (1994), 53, 271–280

Social aspects of meat eating

N. FIDDES

Department of Social Anthropology, The University of Edinburgh, Adam Ferguson Building,

George Square, Edinburgh EH8 9LL

The present paper explores the way that culture conditions our dietary behaviour, and
also, conversely, how what we choose to eat can inform us about our own identities as
members of Western society and as human beings. Understanding the value system that
underpins our society (however little we may be conscious of it in our daily lives) is
necessary to illuminate a true social science of food, to explain why we endow animal
flesh with its unique status and, thence, to suggest whether our rapidly changing view of
this singular substance is a fashion or a trend. But first, a brief history of the social
science of food may be useful. The influence of diet on human health and behaviour has
been discussed since time immemorial. However, only relatively recently has a
systematic body of theory emerged that considers the culture of food selection,
acquisition, preparation, and consumption as a significant social phenomenon.

Engels (1844) did write at length one and a half centuries ago on how poor diet
contributed towards the condition of the working class in England, but really only as an
indicator of what was to him a more significant economic analysis. By and large, all the
classic sociologists such as Marx, Durkheim, and Weber tended to mention food habits
only in the context of their deeper concerns, although Herbert Spencer (1898–1900)
considered such issues as religious and military aspects of foods, and corpulence as a sign
of affluence. Veblen (1899) took this further in The Theory of the Leisure Class, detailing
how food and drink are routinely employed for conspicuous consumption.

Simmel’s (1910) essay on the sociology of the meal stressed the use of food in ritual
and religious ceremony, and the meanings of communal eating. Perhaps it is here that we
begin to see the first real appreciation of how both the form and content of a meal can
denote much more than its actual food value, although W. Robertson Smith (1889) had
commented several years earlier that ‘those who eat and drink together are by this very
act tied to one another by a bond of friendship and mutual obligation’.

As the 20th century progressed, appreciation of food’s centrality to human behaviour
advanced rapidly. Radcliffe-Brown (1922) proposed that for the Andaman Islanders ‘by
far the most important social activity is the getting of food’, and within a decade Audrey
Richards (1939) was producing pioneering work in which she placed the food and
nutrition of her African peoples in their social and psychological contexts. One of the
most interesting publications, however, is Norbert Elias’s (1939) treatise on Western
table manners, The Civilising Process, not least for his discussion of how the serving of
meat in European society has developed in a consistent direction since the Middle Ages.

For many years, however, most analyses remained largely descriptive and utilitarian,
scrutinizing the functions and curiosities of ‘other’ people’s feeding, but without
recognizing that our own ‘normal’ habits can be equally revealing. This tendency is
strikingly exemplified in a genre which typically treats vegetarians and other non-
orthodox eaters with barely disguised suspicion, as if their subversive beliefs and
behaviour threaten more than just conventional nutritional wisdom (which, I argue, is

272 N. FIDDES

true; they challenge their society’s basic cosmology). The discourse often misrepresents
the most extreme practices or short-term dietary treatments as if they were typical, and is
characteristically laced with terminology of ‘crazes’ and ‘faddism’ or even ‘psychopathology’
that ideologically marginalizes the subjects. But, most regrettably, it fails to
apply the same scepticism to the mainstream diet that is, presumably, enjoyed by the majority.

The flourishing of theory came from the likes of Mary Douglas (Douglas, 1966, 1970,
1973, 1975, 1978; Douglas & Nicod, 1974; Douglas & Isherwood, 1980), Claude
Lévi-Strauss (1963, 1966, 1967, 1969, 1970, 1973, 1978, 1987), and the semiologist
Roland Barthes (1975), and their recognition of the extent to which taste is culturally
conditioned, and governed by patterned rules. Lévi-Strauss’s real concern, in fact, lay
with the universal structures of the human mind, but his intricate unpicking of the
‘language’ of food in different cultures bequeathed a rich heritage of analysis, summed
up in his celebrated ‘culinary triangle’ in which he related the transformation of food
between the poles of the raw and the cooked (and the rotten) to categories of nature and
culture. Douglas (1975) is particularly famed for Deciphering a Meal, in which she
revealed the unwritten codes of our society’s cuisine, but her work on purity and taboo,
and a raft of other writings on food, has contributed equally to our understanding.

Recent theory has been concerned less with description than with interpreting society
through the symbolic meanings attached to foods and food habits, the practical functions
of the eating rituals in which we all participate, the economic and ecological conditioning
of dietary habits, and food’s use for expressing social or spiritual identity. Since the
1980s, particularly, a lively debate has evolved in the social sciences, supported by an
explosion of ethnographic detail.

These interests are by no means mere abstract theory. Rather, they reflect a
realization that to study food habits divorced from their entire social context is futile;
once we have met basic survival needs, eating becomes a matter of love and belong-
ingness, of self-esteem, and of self-actualization. Indeed, social conditioning can easily
override even basic hunger, as when poverty-stricken families give of their last reserves
to entertain a guest or, more dramatically, when some victims of the Andes air crash
starved themselves rather than resort to cannibalism.

Unfortunately, a tendency to presume a rationality in traditional Western food
selection continues to characterize collaboration between social scientists and nu-
tritionists to this day. For example, when a consumer panel reports that they dislike red
meat because it is too fatty, or too tough, or too cruel, that is commonly accepted as
sufficient explanation, when it should be only a starting point. All too easily we presume
that it is sensible to avoid fat, or not to want to chew too hard, or even to be squeamish
about animals’ sensibilities, when we could be asking why these things are of greater
concern than 2, or 20, or 200 years ago. Similarly, individual consumers might believe
that the high cost of meat explains their avoidance. To the market researcher, requesting
the motive for their abstention, the explanation may likewise seem reasonable. But there
is always room for further enquiry, as an interview of mine once illustrated:
‘So why don’t you eat meat?’
‘Oh, I can’t afford it. My boyfriend and I are both living on grants and we just can’t
manage it.’
‘Is that really the only reason?’
‘Yes. Absolutely. We just don’t have the money. It’s far too expensive.’

THE ROLE OF MEAT IN THE HUMAN DIET 273

‘So you still enjoy eating it if you’ve been invited to dinner by friends or something then?’
‘Well, no, I still prefer not to really.’
‘What, even if you’re not paying?’
‘Well, yes. I don’t really know why. I just prefer not to. I know it’s silly, but we’ll usually
ask if we can have something else.’
‘You must have some idea why, surely? Give me a clue?’
‘I don’t know. I just don’t like the taste.’
‘Do you mean you’ve grown out of liking it after not eating it for so long?’
‘No, I don’t think it’s that. I suppose . . . it’s something to do with not liking the thought
of . . . I don’t know. Just not liking the idea of the animal being . . . killed . . . so that I
can eat it. It’s horrible.’

Price alone cannot explain the existence of vegetarianism, yet ‘rational’ reasons
are commonly accepted at face value. Whilst never reliable, the economist’s view can be
useful for purely market-oriented studies of people’s food habits within strictly delimited
spatial and temporal boundaries. But it is next to useless for looking more deeply or
more broadly at why people eat or drink as they do, and why long-term preferences
change. People may use the language of ‘not liking the taste’, or of ‘health’, or of ‘the
price of meat’, because this is the discourse with which most of us feel most comfortable:
but the reasons why meat is suddenly being seen as less healthy than it used to be, by
scientists as well as by housewives, are much more fundamental. As Mary Douglas
(1978) has argued: ‘Nutritionists know that the palate is trained, that taste and smell are
subject to cultural control. Yet for lack of other hypotheses, the notion persists that what
makes an item of food acceptable is some quality inherent in the thing itself. Present
research into palatability tends to concentrate on individual reactions to individual items.
It seeks to screen out cultural effects as so much interference. Whereas . . . the cultural
controls on perception are precisely what needs to be analysed.’ What we choose to eat
depends as much on our cosmology as on our physiology. We really are what we eat. Just
as Buddhist vegetarianism, or Lapp reindeer-hunting, can be understood only with
reference to their communities’ cosmological complexes, explanation for the high rank
meat has enjoyed in the Western classification of potential foodstuffs, and why the
market is currently turbulent, must be sought in social history as well as contemporary

It is relatively easy to notice oddities in the behaviour of others. For example, in a
classic work on the classification of animals in Thailand, Tambiah (1973) demonstrated
that ‘values and concepts relating to social relations are underpinned to rules about
eating animals [and] we have to inquire for the society in question why the animals
chosen are so appropriate in that context to objectify human sentiments and ideas’,
showing that understanding indigenous geography, mythology, taxonomy, and religion
were all prerequisites to deciphering why the forest rat was edible but the civet cat
forbidden to pregnant women. To the villagers he studied, of course, their rules would
have been self-evident, a matter of taste and tradition: but the anthropologist as
‘stranger’ can develop an analytical perspective which is hard to achieve from ‘within’.

It is, however, more challenging for us to perceive that our own tastes can be equally
non-rational. As does every society, we develop rationalizations to explain our habits as
‘normal’, and we each grow up with little need to question them; it is difficult to perform
the role of strangers in the context of our own culture. So let us ask: what might a team of
little green observers from the planet Mars make of Western eating habits, if they had
been studying us all these years? I shall consider briefly three areas of the Western ‘meat
system’ from a Martian perspective: history and prehistory; our definitions of inedible
species; and notions of health and nutrition, to reveal culture’s part in each. Thereafter, I
shall suggest some additional meanings of our culinary tastes.

The aliens’ tentacled eyes would first have watched our emergence somewhere in the
tropics or sub-tropics, gathering easily available fruits and vegetable matter,
supplemented by only the occasional taste of meat from small animals. This is still the way of
our closest primate relations. However, over millennia, the Martians would have seen us
evolve into societies which control 1.28 billion cattle alone, cattle which use nearly 24%
of the planet’s landmass and whose combined weight exceeds that of the human
population on earth (Rifkin, 1992) but where meat still is regularly consumed by only a
minority of the world’s people.

We have colonized the globe as far as the polar reaches, where the habitat affords little
else but animal produce. But in most regions, even amongst self-proclaimed ‘hunters’,
vegetable foods provide at least 80% of nutrition, as has probably always been the case.
So-called ‘hunter-gatherer’ peoples should properly be called ‘gatherer-hunters’, since
flesh is typically the minor component. We can subsist on widely varied diets. Thus,
surely the aliens would be intrigued to note that, in common currency, one routine
explanation of our liking for the taste of animal flesh is that we are hunters by origin and,
therefore, carnivores by nature. They might reasonably conclude that this says more
about our preferred self-image than about historic actuality. Similarly, the Martians
might wonder why popular history depicts medieval Europeans as exceptionally
carnivorous when, in fact, animals were reared mainly for dairy, wool, and traction and,
for most people, meat remained a rare luxury. Flesh was eaten in quantity only amongst
the rich and powerful, although their consumption could indeed be staggering.

Today, the diversity of ways in which humans eat, and say we eat, is remarkable. Some
groups or individuals subsist on little but meat, but claim it a rare aberration. Others
seldom consume flesh in practice, yet call themselves hunters. Some profess vegetarianism, yet eat flesh on occasion; not a few Western ‘vegetarians’ will eat chicken, or bacon,
or even hamburgers. We do not by any means always do as we say, for meat’s value is
symbolic as well as nutritional.

Vegetarianism has long characterized large parts of the world, commonly for religious
or economic reasons. But, in the West, whereas once only such eccentrics as Pythagoras
or George Bernard Shaw (or particular sectors of society such as monks) advocated a
meat-free diet, today it has become the voluntary preference of millions, correlated
particularly with women, with urban dwellers, with the young, and with the highly
educated. The influence of culture is unmistakable. And yet, our Martians would hear
countless millions, including academics, continue to maintain that we are somehow
‘meant’ to eat meat because of our dentition or the form of our guts; because men
especially ‘need’ it to grow up strong and healthy; or because it is somehow instinctive.
Surely the aliens would decide that there was something more going on here than

Another aspect of our foodways which extraterrestrial anthropologists might find
curious is how we categorize which animals we can eat, and which we should not. They
would not be particularly surprised to find that we civilized Westerners do not eat
members of our own species; after all, avoiding cannibalism is common amongst animals.
But what they might find curious is why we believe that so many other peoples indulge in

THE ROLE OF MEAT IN THE HUMAN DIET 275

the barbaric act, or have done so in the past. From their vantage point, the aliens might
have seen no ‘adequate documentation of cannibalism as a custom in any form for any
society’, but that the ‘idea of “others” as cannibals, rather than the act, is the universal
phenomenon. The significant question is not why people eat human flesh, but why one
group invariably assumes that others do’ (Arens, 1979).

More curious to the Martians than that, perhaps, would be the way in which the
cannibalism taboo seems to permeate our society’s categories of edible and inedible. For
example, species such as dogs and cats which we have found amenable as companions
and houseguests, we tend not only not to eat, but to find abhorrent when we hear of
other societies elsewhere who rather enjoy their flavour. Similarly, the idea of eating
monkey meat would fill most of us with a deep-seated unease, presumably because other
primates remind us too much of ourselves. But this is not a ‘rational’ taboo: it is an
arbitrary cultural association. Other peoples eat monkeys without hesitation.

Perhaps the Martians would be amused that in the British Isles the idea of eating horse
meat has become distinctly unacceptable, over a period when horses have become
perceived more as social companions than as farm animals, whilst just across a narrow
stretch of water their flesh is still enjoyed within another culture. This is hardly rational,
but it is understandable.

The disinterested extraterrestrials, therefore, would be unsurprised to find that aspects
of our nutritional beliefs are also touched by inconsistency. Indeed, for evidence they
would hardly need to look further than the enormous disjunction between physiological
needs and ‘expert’ dietary advice on the one hand, and the contents of the average
supermarket trolley on the other, overflowing with nutrient-free diet-colas and desserts
concocted from indigestible fats, and a range of colourful packages filled with permutations of fats, sugars, starches, and chemicals. Foods provide pleasure, almost
regardless of nutrition. But to the Martians, this would come as little surprise. They
would already have seen how meat, which had always generally been seen as a positive
boon to good health, suddenly became much more so as the scientific era flourished.
Baron Justus von Liebig, in particular, popularized the ‘protein myth’, by glorifying meat
as the essential source of material to replenish muscular strength (Liebig, 1846, 1847).
He gave new scientific status to notions that animal food was somehow more nutritious
than mere vegetables, and his prestige soon endowed meat with near-magical properties.
Liebig (1846, 1847) endorsed the erroneous 19th century view that muscle was destroyed
by exercise, and could be replaced only by more protein, or in other words meat, despite
his pioneering of artificial fertilizers for plants, and his knowledge of vegetarian

The Martians might reflect how this mythical nutritional quality invested in meat
seems to operate through a magical process as much as by ‘protein’. It is as if we eat
animals’ muscle and lifeblood in the hope of making ourselves strong, just as certain
American Indians believed that a person who eats venison is swifter and wiser than one
who eats ‘the slow-footed tame cattle, or the heavy wallowing swine’ (Adair, 1775). This,
for example, might be why the boxer Lloyd Honeyghan reports he is ‘so hungry for
success’ he’s been ‘living on raw steak’ (Massarik, 1987).

Our aliens would have seen how in countless communities throughout history, the
consumption of meat was generally regarded as an unrivalled route to ruddy good health,
not least because it tended to make one fat: a certain sign of prosperity. Then they would
have seen how, in the late 20th century, fat suddenly became unpopular, just at a time

276 N. FIDDES

when affluence was becoming commonplace. Slimness is a cultural, as much as a medical,
injunction. Then they would be struck that, however much it was particular fats in excess
that nutritionists condemned, in popular discourse this more often than not seemed to be
discussed in terms of meat. So inconsistent is the public that the ‘healthier’ Vegetarian
Society launched its Cordon Vert Cookery courses with a Brazilian Bake that derived
77% of its energy from fat, and had a content of saturated fat per kg three times that of
lean beef.

Then the aliens would have seen how a society which had always placed implicit trust
in the beneficence of technology, suddenly started to fear its fruits, expressed in a
seemingly endless spate of ‘health scares’ concerning chemicals, or diseases, or 6-year-
old meat. ‘Expert’ reassurances about the benign processes or compounds fell on deaf
ears; consumers seemed to see contagion or contaminants at every turn. The Martians
would surely wonder whether all of these ‘health’ issues might not be in some way
expressions of other cultural trends, perhaps, for example, related to the ever more
widely circulating concern that technological industrialism might already have gone too
far in its impacts upon the natural environment. All these things are affected by culture:
and if the Martians knew one thing that was entirely predictable about any human
culture they had ever seen, it is that nothing stayed the same for long.

I began by stating that understanding the value system that underpins our own culture
is necessary to illuminate a true social science of meat. Explanation for meat’s peculiar
status does exist: however, it rests not in people’s conscious motivations and elucidations, but at a far deeper level: our culture’s cosmology, tacit assumptions, philosophical premises, spirituality. As with the food systems of every society on earth, we can
understand our own diets only with reference to the deep-rooted beliefs by which we are
all taught to see the world. In particular, it is to our relationship with the natural world
that we must look for clues to meat’s social identity. Our belief system in this context can
be traced back at least to Aristotle (1984) who said that ‘other animals exist for the sake
of man’. But throughout much of Western history, our basic cosmology has been
represented by interpretations of the Christian message. This diffused an image of the
world as a God-given resource for our chosen species to utilize at our will, with a duty to
worship the Higher Being, but with few if any responsibilities towards nature. More
fundamentally, perhaps, it established a view of humanity as qualitatively separate from
the rest of the natural world, just one rung below the angels on the hierarchy of the Great
Chain of Being, as one of the ‘most potent and persistent presuppositions in Western
thought’ (Lovejoy, 1936). This could be, and was, widely taken to legitimize our right to
use the earth with scant consideration for consequences.

Controlling the multiple threats of ‘wild’ nature must always have been a necessary
human goal to some extent, but in recent centuries this need has been elevated to the
status of holy grail. What, after all, is the demand for predictability in science, if not the
desire for control, for power? Certainly, the notion of human control of nature as an aim
as well as a right can be traced through the philosophy of Aquinas, Leibniz and Spinoza,
the science of Copernicus, Kepler and Bacon, and the art of Milton, Pope, and Victor
Hugo (Lovejoy, 1936). As the historian Keith Thomas (1983) puts it: ‘Man’s dominion
over nature was the self-consciously proclaimed ideal of early modern scientists’.

During the Enlightenment, the Industrial Revolution, the Scientific Revolution, this
approach was not abandoned, but was refined and retranslated into the new rationalist
idiom. Descartes, perhaps above all, was responsible for defining the mechanistic ethos
which still prevails throughout much of Western science, by which animals other than
humans were claimed to be mere automata analogous to clockwork mechanisms, whilst
only humans had eternal souls. (In today’s digital age, of course, his mechanical
metaphor has been updated to the computer jargon of genetic ‘programming’.) This is
the ideology which has prevailed in politics, economics, and science, not to mention
religion and philosophy, throughout the modern period. To most of us, its premises are
so self-evident that we would never think to recognize, let alone challenge, them.
Perhaps only by being made aware of how unique modern Western society is, amongst
human communities ever studied by anthropologists or understood by historians, in the
severity of the spiritual divorce we ordain between ourselves and the world that supports
us, can we begin to appreciate this cosmology for what it is: just one world view amongst
many possible. The point, here, is that this is what gives meat its power.

Meat’s essential value, not just as any old food, but as the food above all others,
derives directly from its capacity to represent to us most tangibly our power over the rest
of the natural world (Fiddes, 1991). What, after all, is meat? It is the bodies, ideally the
muscle, of those beasts we can safely designate our prey. At a cultural level, we like to
eat animals in order metaphorically to authenticate our power over them, and by
extension over the world at large. Only by understanding this can we make sense of the
rest of the meat system. It explains why meat (as well as related activities such as
hunting) has long been most highly valued by the wealthy and powerful élites in society,
for whom it has served as a means of demonstrating authority. It explains, too, why those
who have enjoyed less control over their own lives, such as the poorer social strata, and
also women as a group, have been denied access to the same quantities of meat as their
more powerful peers. It also explains why those who have chosen to shun earthly power
in favour of spiritual control, such as monks and ascetics, have so often chosen abstaining
from meat as a central symbol of their voluntary simplicity. Above all, it explains the
patterns of meat’s supply and demand. It explains why, as the ethos of technological
control rose to prevalence, meat consumption rose in parallel: and it explains why, now,
that pattern is changing.

The pattern of demand is changing, because the underlying ethos is changing. The
assumption of technological control of nature as an unquestioned ‘Good’ is now being
challenged on many fronts. On the one hand, Chaos and the New Physics is saying that
predictability is ultimately an impossibility, and control an illusion. From another angle,
philosophers are arguing that reductionism is all well and good, but that so much that
matters in life cannot be fitted into its strictures, so ‘science’ as currently constituted
should be confined to areas where it is appropriate. But perhaps most influential is the
challenge of environmentalism which suggests, quite simply, that control of nature may
have gone too far; that its price has been too high in terms of damage to our own habitat,
and that we must find a way of redefining our place in nature, if we are to stand any
chance of healing our social structures, and ultimately in order to survive.

It is this sea change which is redefining the entire terms of our debate. This is why so
many individuals are choosing either not to eat meat, or not to eat the red meats which
traditionally have been most strongly associated with power and control. Meat remains
the symbol of control as much as ever, but what has changed is that the brute power over
nature it embodies is now no longer universally seen as an unqualified good.

This is perhaps most obvious amongst the more radical ‘New Age’ element of the
population, many of whom can phrase at least some of these arguments semi-coherently,
expounding an explicit rejection of much of the scientific-industrial ideology (though,
conveniently, not necessarily of all the technology), and a spiritual system which harks
back to animism. A diet of demi-vegetarianism, vegetarianism or veganism is, of course,
almost ubiquitous in such circles. But such groups cannot easily be marginalized and
discounted, because the ideas which they represent most forcefully are still achieving
ever wider currency and influence. And, as environmental issues precipitated by an
excess of industrial insensitivity inevitably continue to hit the headlines, there is little
likelihood of this trend discontinuing.

We live in a period of rapid change: social, ecological, spiritual, and dietary; change at
a rate that is probably unprecedented on a historic time-scale, yet not always obvious in
our day-to-day existence. Yet change is not new. Our views of meat, and of all foods, are
different from those of our great grandparents; theirs differed radically from those
current in the Middle Ages; and doubtless in 50 years’ time, dietary paradigms will be
different again. By its nature, change cannot be predicted from current expectations.
That is why so many ‘market’ studies of food habits are unsatisfactory; they assume that
the future will continue present patterns. It never has, so why should it now?

The future cannot be predicted with confidence; but trends are evident, at a social and
ecological as well as a dietary-behavioural level, and they are ‘rational’ in their own
terms, if not necessarily always in the terms of policy-makers and scientists. If
nutritionists as well as social scientists wish to keep up as progress unfolds, it is vital to
understand people’s own agendas, rather than to impose an inappropriately rationalist
model. Otherwise, the very real danger is that an ever-expanding sector of the
population will devise its own new nutritional wisdom, and shop and eat on that basis,
with industry and science ill-equipped to respond with other than impotent cries of
‘you’re wrong!’.

All too often, the pattern is for ‘authorities’ to produce foods in new ‘more efficient’
ways, only to be shocked by the inevitable consumer backlash of ‘irrational’ scares and
resistance. But normative prescriptions as to nutritional ‘facts’ are useless if they fail to
relate to the population’s beliefs and feelings. Food matters to people at a very basic
level. A prime example may be the rapidly-advancing field of genetic manipulation of
food animals and processes, which is being hastened into operation with scant public
information or debate, largely for the private interests of technologists and industrialists.
As with any scientific development, the public at large has been slow to realize its full
implications, but the technology nonetheless encapsulates much that is feared at a deeply
emotional level. It is an archetypal example of the sort of issue which will fuel public fury
once the first major ecological or public health disaster occurs. If I can make only one
prediction, it is that here is an ideological time-bomb, ticking away.


Adair, J. (1775). A History of the American Indians, p. 113. London: Dilly.
Arens, W. (1979). The Man-Eating Myth, pp. 21 and 139. New York: Oxford University Press.
Aristotle (1984). Politics. In The Complete Works of Aristotle: The Revised Oxford Translation, pp. 1993-1994 [J. Barnes, editor]. Guildford: Princeton University Press.
Barthes, R. (1975). Towards a psychosociology of contemporary food consumption. In European Diet: From Pre-industrial to Modern Times, pp. 47-59 [E. Forster and R. Forster, editors]. New York: Harper.
Douglas, M. (1966). Purity and Danger. London: Routledge & Kegan Paul.
Douglas, M. (1970). Natural Symbols. London: Barrie & Rockliff.
Douglas, M. (ed.) (1973). Rules and Meanings. Harmondsworth: Penguin.
Douglas, M. (1975). Deciphering a meal. In Implicit Meanings, pp. 249-275. London: Routledge & Kegan Paul.
Douglas, M. (1978). Culture. In Annual Report of the Russell Sage Foundation, pp. 55-81. New York: Russell Sage Foundation.
Douglas, M. & Isherwood, Baron (1980). The World of Goods. Harmondsworth: Penguin.
Douglas, M. & Nicod, M. (1974). Taking the biscuit: the structure of British meals. New Society 19, 744-747.
Elias, N. (1939). The Civilizing Process, 1978 ed. New York: Urizen.
Engels, F. (1844). The condition of the working class in England. In Karl Marx, Frederick Engels: Collected Works, vol. 4, 1975 ed. London: Lawrence & Wishart.
Fiddes, N. (1991). Meat: A Natural Symbol. London: Routledge.
Lévi-Strauss, C. (1963). Structural Anthropology. London: Basic Books.
Lévi-Strauss, C. (1966). The culinary triangle. New Society 166, 937-940.
Lévi-Strauss, C. (1967). The Structural Study of Myth and Totemism. ASA Monographs 5. London: Tavistock.
Lévi-Strauss, C. (1969). The Elementary Structures of Kinship. London: Eyre & Spottiswoode.
Lévi-Strauss, C. (1970). The Raw and the Cooked. London: Cape.
Lévi-Strauss, C. (1973). From Honey to Ashes. London: Cape.
Lévi-Strauss, C. (1978). The Origin of Table Manners. London: Cape.
Lévi-Strauss, C. (1983). Anthropology and Myth. Oxford: Blackwell.
Liebig, J. (1846). Animal Chemistry, 3rd ed. London: Taylor & Walton.
Liebig, J. (1847). Researches on the Chemistry of Food. London: Taylor & Walton.
Lovejoy, A. (1936). The Great Chain of Being, p. vii. Cambridge, Massachusetts: Harvard University Press.
Massarik, J. (1987). Rusty Bruno cautious over Tyson fight. Guardian 26 March, 28.
Radcliffe-Brown, A. R. (1922). The Andaman Islanders, 1964 ed., p. 227. New York: Free Press.
Richards, A. (1939). Land, Labour and Diet in Northern Rhodesia. London: Oxford University Press.
Rifkin, J. (1992). Beyond Beef. New York: Penguin.
Robertson Smith, W. (1889). The Religion of the Semites, p. 247. Edinburgh: Black.
Simmel, G. (1910). Soziologie der Mahlzeit (Sociology of the meal). Der Zeitgeist, supplement to Berliner Tageblatt, 19 October. Reprinted in Simmel, G. (1957). Brücke und Tür, pp. 243-250. Stuttgart: K. F.
Spencer, H. (1898-1900). The Principles of Sociology, 3rd ed. New York: D. Appleton.
Tambiah, S. J. (1973). Classification of animals in Thailand. In Rules and Meanings, p. 165 [M. Douglas, editor]. Harmondsworth: Penguin.
Thomas, K. (1983). Man and the Natural World, p. 29. Harmondsworth: Penguin.
Veblen, T. (1899). The Theory of the Leisure Class, 1959 ed. London: Allen & Unwin.


Gandhi’s Body, Gandhi’s Truth: Nonviolence and the Biomoral Imperative of Public Health
Author(s): Joseph S. Alter
Source: The Journal of Asian Studies, Vol. 55, No. 2 (May, 1996), pp. 301-322
Published by: Association for Asian Studies


Gandhi’s Body, Gandhi’s Truth:
Nonviolence and the Biomoral Imperative of Public Health


It is easier to conquer the entire world than to subdue the enemies in our body. And,
therefore, for the man who succeeds in this conquest, the former will be easy enough.
The self-government which you, I and all others have to attain is in fact this. Need
I say more? The point of it all is that you can serve the country only with this body.

(Letter to Shankarlal Banker, 1918, CW 15:43)

It is impossible for unhealthy people to win swaraj (self rule). Therefore we should
no longer be guilty of the neglect of the health of our people.

(“Implications of Constructive Program,” 1940, CW 72:380)


There are literally hundreds if not thousands of scholarly works which have
analyzed and reanalyzed Mohandas Karamchand Gandhi’s epic life and work from
numerous angles. In spite of this focused attention, or perhaps on account of it, the
Mahatma remains something of an enigma: a genius, to be sure, and one inspired by
a kind of transcendental moral conviction, but an enigma nevertheless on account of
how he conceived of morality as a problem in which Truth and biology were equally
implicated. As he put it, “morals are closely linked with health. A perfectly moral

Joseph S. Alter is Visiting Assistant Professor of Anthropology at the University of Pittsburgh.

The research upon which this article is based was funded by the National Endowment for
the Humanities and the American Institute of Indian Studies. I am grateful to both these
institutions. I would like to thank my colleagues at the University of Pittsburgh, in particular
Robert Hayden and Fred Clothey for their comments and suggestions. Nicole Constable read
a number of drafts and has significantly improved the end product. Finally, I am indebted to
Anand Yang and two anonymous reviewers for their insightful criticisms and very helpful
suggestions.
Quotations from the collected works of Mohandas K. Gandhi are cited in the text by the
abbreviation CW. Complete bibliographical information is given in the List of References
under the entry Gandhi.

The Journal of Asian Studies 55, no. 2 (May 1996):301-322.
© 1996 by the Association for Asian Studies, Inc.



person alone can achieve perfect health” (CW 2:50). Following a statement such as
this, my purpose in this essay is to work toward an analysis of Gandhi’s genius by
focusing on that which appears most enigmatic about his program of sociopolitical
action: his somatic concerns and what I am calling his faith in the biomoral imperative
of public health.

A number of early scholars, most notably Joan Bondurant ([1958] 1965, 12),
took for granted that Gandhi’s concern with satyagraha (militant nonviolence) was
quite distinct from his personal preoccupation with diet, sex, and hygiene (see also
Ashe 1968, 94-95; Payne 1969, 465). Following on this, many studies have focused
on politics, ethics, and morality while only a few relatively marginal texts have been
concerned primarily with sex (Gangadhar 1984; Paul 1989; van Vliet 1962). Almost
none deal with questions of health. The problem, however, is that in reading Gandhi’s
autobiography, among any number of other primary texts, one is immediately struck
by the fact that a distinction cannot be made between his personal experiments with
dietetics, celibacy, hygiene, and nature cure and his search for Truth; between his
virtual obsession with health, his faith in nonviolence, and his program of
sociopolitical reform.

Recognizing this, a number of scholars have worked toward what might be called
a “resynthesis” of Gandhi’s life by means of psychoanalytic and symbolic
interpretations (Erikson 1969; Kakar 1990; Nandy 1980, 1983; see also Wolfenstein
1967; Lorimer 1976). In his book Intimate Relations: Exploring Indian Sexuality, for
example, Sudhir Kakar provides a psychoanalytic reading of Gandhi’s sexuality (1990,
85-128). Kakar’s analysis is noteworthy, for he explains Gandhi’s preoccupation with
celibacy in terms of a Hindu psychology of sublimation which is congruent with
Freudian theory (1990, 118). While there can be little doubt that Kakar is right about
the symbiotic relationship between Gandhi’s passionate self-discipline and his desire
to desexualize women by feminizing himself, his focus on symbolism-both Hindu
and Freudian-leads to a mistaken conclusion about the relationship between
nonviolence and sexuality (see also Nandy 1983). Kakar’s psychoanalytic reading
presents particular problems with regard to the critical issue of Gandhi’s
experimentation with food, which is interpreted by him as a symbolic displacement
of his preoccupation with genital sexuality.

Page after page, in dreary detail, we read about what Gandhi ate and what he did
not, why he partook of certain foods and why he did not eat others, what one eminent
vegetarian told him about eggs and what another, equally eminent, denied. The
connection between sexuality and food is made quite explicit in Gandhi’s later life
… [and] … we must remember that in the Indian consciousness, the symbolism of
food is more closely or manifestly connected to sexuality than it is in the West.

(1990, 91; my emphasis)

While Kakar is right about the symbolic significance of food, the structure of his
argument reinforces a false dichotomy between the “dreary detail” of nutrition on the
one hand and the expressive power of analogic meanings on the other-a structural
logic which shifts attention almost immediately away from the colonial context of
embodiment and power to a clinical search for the psychological truth about Truth.

Even less explicitly psychoanalytic studies seem to favor psychology or spirituality
as the best analytic medium through which to make sense of Gandhi’s more enigmatic
experiments (Parekh 1989). Baldly put, the logic is something like this: the only way
to reconcile an obsession with sex and food with religion and politics, even in a cultural
rather than purely biographical context, is by getting inside the man’s head.


Alternatively, the argument goes, Gandhi’s enigmatic genius only makes sense in
terms of a symbolic interpretation, or a deep cultural reading, of the specific-and
problematically authorized-social and historical contexts in which his ideas
developed. For example, it is clear that Bhikhu Parekh regards brahmacharya as a
spiritual project with only derivative political value, rather than as a physical exercise
in biomoral reform. This leads him to arrive at the following judgmental conclusion:
“Gandhi’s theory of sexuality rested on a primitive approach to semen. Much of what
he said about its production and accumulation is obviously untrue. By itself, semen
has no ‘life-giving power’ either, and Gandhi was wrong to mystify it” (1989, 182).

The problem with this statement is that it betrays an underlying analytic faith
in an epistemology wherein that which is physical becomes powerful and meaningful
only through the agency of metaphysical transformation; a transformation in which
Gandhi’s gross body, and all it denotes-particularly with regard to extreme
experiments (Parekh 1989, 190-91)-can only be read either as profound asceticism,
unique biography, or a modern political farce.

While exceptional, Susanne and Lloyd Rudolph’s classic analysis of Gandhi’s life
also places too much importance on the psychology of desire and power in Hinduism
and not enough on the biomorality of health in early-twentieth-century India ([1967]
1983; see also Morris 1985). More recently Caplan (1987) and Kishwar (1985) take
a similar perspective on the modernity of tradition with regard to gender in the
Gandhian project. While subtle, sympathetic, and clearly attuned to questions of
power, these readings of Gandhi’s sexuality do not adequately take account of the fact
that along with religious traditions, questions of morality in colonial India also
denoted a particular logic of modern public health.

However valuable these psychological and sociopsychological interpretations
are-and my intellectual debt to Kakar, Nandy, and Susanne and Lloyd Rudolph
should be clear-my argument is that they are predicated on a false assumption about
the relationship between physiology, biography, tradition, and social action. To
understand why Gandhi was preoccupied with the problem of celibacy, dietetics, and
health, one must first of all take seriously the notion that eating and sex do not require
meta-interpretation. In the context of colonialism there is a direct relationship
between self-control and politics rather than one mediated either by subconscious
symbols or some other set of cultural meanings encoded in myth, ritual, and
spirituality on the one hand or early childhood on the other. To be sure, the cultural
environment in which Gandhi lived is still all important. As Richard Fox has shown
(1989), however, this culture-as with all others-is a context wherein shifting
meanings were encoded in the practice of everyday life (see also Nandy 1980, 71, 83).
Gandhi’s Truth is, therefore, essentially transnational. His experiments were explicitly
syncretic, with specific reference to the work of Havelock Ellis (1910; [1938] 1946),
Bertrand Russell (1928), Henry David Thoreau (1895; CW 12:24-25) and Paul
Bureau (1920; CW 31:103-5, 135-40, 183-86, 218-62, 286-88, 309-12), among
many others.

What I am arguing is that Gandhi’s concern with his body (CW 1:82-86, 166;
11:494, 501-10; 12:79-80, 97) cannot simply be understood as an obsessive
compulsion to exercise self-control in the interest of public service by tapping into
the spiritual power of shakti. A reading of Gandhi’s writing on health in general, and
such specific topics as smoking on the one hand (CW 4:427-28; 5:105; 6:270; 11:480;
19:285)-his astonished outrage at hearing that someone was making and selling
“Mahatma Gandhi Cigarettes,” for example (CW 19:216)-and temperance on the
other (CW 1:166; 4:338; 11:480; 18:400-1; 19:260-61, 285, 450, 462, 468, 470,


480, 555-56) shows that nonviolence was, for him, as much an issue of public health
as an issue of politics, morality, and religion. To read ahimsa simply as practical
philosophy, political theory, ethical doctrine, or spiritual quest is to misunderstand
the extent to which Gandhi embodied moral reform and advocated that reform’s
embodiment in terms of public health-a kind of health which may be understood
as inherently political, spiritual, and moral in the context of late imperialism.

The Key to Health
I had involuntary discharge twice during the last two weeks. I cannot recall any
dream. I never practised masturbation. One cause of these discharges is of course my
physical weakness but I also know that there are impure desires deep down in me. I
am able to keep out such thoughts during waking hours. But what is present in the
body like some hidden poison, always makes its way, even forcibly sometimes. I feel
unhappy about this, but I am not nervously afraid.

(CW 40:312; in Parekh 1989, 186; cf. CW 33:414; 34:196-97, 372-74)

Reading a “confession” such as this, written in a letter to an anonymous
correspondent in 1928, one is made aware of the remarkable extent to which Gandhi’s
eminently public persona was worked out in terms of what appears to be private,
personal self-reflection (CW 30:142, 319).1 While working toward reform on a
national scale, Gandhi often delineated the problem of action in terms of a discrete
microphysics of self-discipline required of those involved. Even when writing about
national and international events he seems to have been preoccupied with himself,
with his subjectivity in the context of dramatic sociopolitical change. If not always
autobiographical, Gandhi’s writings are almost always self-centered, if I may use the
term nonpejoratively.

It is important, however, to take a step back from candor and intimacy and look
at the larger picture, and to this end it is relevant to consider one of Gandhi’s few
“book-length” publications, Key to Health ([1948] 1992; CW 76:411-12; 77:1-48).
Written in Yeravda jail between September and December 1942, this book was a
shortened version of his 1913 collection of essays entitled “General Knowledge About
Health” (reprinted as The Health Guide 1965). First published in South Africa,
translated into a number of Indian and European languages, it became, as the author
himself put it, somewhat incredulously, “the most popular of all my writings” (1992,

However genuine the Mahatma’s surprise may have been, it forces one to recognize
the inherently public, missionary nature of his advocacy for national and, indeed,
international health. Even though many of Gandhi’s experiments were conducted on
himself, many more were implemented as what amounts to small-scale public health
measures in his ashrams (CW 11:128, 131, 157-58, 191; 12:269-71; 31:156; 32:51-
52; 54:2, 213, 301-2, 321; 55:161).2 In other words, the picture which emerges from

1Along with his candid discussions of night discharge, Gandhi wrote publicly and frankly
about the failure of his intestines, for example (CW 26:144) and, when suffering from
appendicitis, malaria, and piles, his biomoral compromise with Western medicine (CW 15:73;
23:191, 262; 29:211; 30:126, 316-17). In fact, given his definition of Truth, there was
nothing about his life, or his body functions, that was private.

2Immediately after a lengthy summary of the development of his thinking on dietary
experiments, Gandhi wrote in his History of the Satyagraha Ashram, “the reader has perhaps


the “dreary details” is not so much one of pedantic obsession as one of complex reform
strategy, for the key, in the Key to Health, does not unlock the mysteries of a great
mind so much as the potential of a great nation. This is equally true of the very
popular, three-edition, often reprinted Self-restraint vs. Self-indulgence ([1927] 1958;
cf. CW 33:184-86). Despite its nominal suffixal priorities, the work inscribes
sexuality onto a public rather than private domain, where the problem is demographic
and cumulative rather than biographic and reflexive. Even the journal title Young
India-from which many of the articles in the volume are taken-denotes an imagined
celibate nation.

It is noteworthy that in his preface to the second edition of Self-restraint vs. Self-
indulgence, Gandhi expresses “joy” not only in the fact that the first edition was sold
out one week after publication, but that it spawned enough correspondence from
interested readers to warrant a second printing. Gandhi was interested in the success
of his own experiments primarily to the extent that others might learn from them
and subscribe to a regimen of self-discipline. He wanted to engage young Indians on
a level that would lead to self-control rather than mandate institutional reform
through policy. He wanted to persuade people to change their way of life in order to
rebuild India. The extent to which Gandhi took this project seriously-and that it
was taken seriously by many readers-cannot be doubted.

Let young men and women for whose sake Young India is written from week to week
know that it is their duty, if they would purify the atmosphere about them and shed
their weakness, to be and remain chaste and know too that it is not so difficult as
they may have been taught to imagine.

(1958, 31)

This remark comes, notably, in Gandhi’s extended discussion of Paul Bureau’s book
L’indiscipline des moeurs (1920), translated by Dr. Mary Scharlieb as Towards Moral
Bankruptcy (1925), which ends with a strong appeal for French moral nationalism.
Gandhi punctuates his comments with a transnational admonition: “Let the Indian
youth treasure in their hearts the quotation with which Bureau’s book ends: ‘The
future is for the nations who are chaste’ ” (1958, 40).

Gandhi’s obvious admiration for Bureau, and also for William Loftus Hare, whose
treatise on the enervating physiological effects of sex, entitled “Generation and
Regeneration” (1926; CW 31:311) is reprinted at the end of Self-restraint vs. Self-
indulgence, stems in part from the fact that what they said about the body was scientific.
Bureau’s and Hare’s biologically based moral theories provided Gandhi with the same
kind of authoritative argument for celibacy that Henry Salt’s A Plea for Vegetarianism
(1886; see also 1899), Howard Williams’s The Ethics of Diet (1883), and Anna
Kingsford’s The Perfect Way in Diet (1881; see also 1912) provided for not eating meat
(Gandhi 1949, 8-12).

Gandhi’s attitude toward the West-as quite distinct from what he thought
about “modern civilization” which had come to characterize the West (CW 9:479;
19:178; 40:125)-is of particular relevance to understanding how he came to imagine
the problematic relationship between sex, national identity, and the moral politics of

now seen that the Ashram set out to remedy what it thought were defects in our national life
from the religious, economic and political standpoint” (CW 50:192). Significantly, Gandhi
sought to implement a program of dietary reform and healthy living in villages through his
constructive program (CW 75:41-4).


nonviolence. Referring in one instance to “the strong wine of libertinism that the
intoxicated West sends us under the guise of new truth and so-called human freedom”
(1958, 39), Gandhi was often explicitly critical of certain aspects of “civilized” Anglo-
European culture, as in his mercilessly sarcastic account of an American boxing match
(CW 10:294-95).3 However, he wrote:

the West is not wholly what we see in India…. Throughout the European desert
there are oases from which those who will may drink the purest water of life. Chastity
and voluntary poverty are adopted without brag, without bluster and in all humility
by hundreds of men and women, often for no other than the all-sufficing cause of
service of some dear one or of the country.

(1958, 31)

Referring to a range of “eminent” “sober voices” from the West (1958, 39), and
choosing to locate an incipient biology of pragmatic social justice demographically in
the margins of Europe, Gandhi then proceeds to criticize the classical Hindu
spirituality of ascetics as “an airy nothing” (1958, 31; cf. 46; see also CW 27:152-
53, 288). In this regard I think it is of great importance to note the precise impact
that Henry Salt’s book had on Gandhi’s vegetarianism. Before reading Plea for
Vegetarianism, Gandhi’s vegetarianism was purely personal.

I had all along abstained from meat in the interests of truth and of the vow I had
taken, but had wished at the same time that every Indian should be a meat-eater, and had
looked forward to being one myself freely and openly some day, and to enlisting others
in the cause. The choice was now [after reading Salt’s book] in favour of vegetarianism,
the spread of which henceforward became my mission.

([1949] 1987, 5; my emphasis)

There is, in other words, more to vegetarianism than meets the eye, more than personal
choice involved, and also something quite different from the brahmanical rationale
for purity and spirituality (CW 29:418-20). Through his affiliation with the London
Vegetarian Society (CW 1:25-37, 64-67, 81-89; see also 50:191) and his association
with Dr. Josiah Oldfield among others (CW 6:23, 33), Gandhi came to a rather unique
realization that a science of diet provided the means by which to effect moral change
on a large, demographic scale (CW 48:326-29). In this regard it is interesting to
note that one of Gandhi’s earliest experiments with the biomorality of public health
pitted the biology of race against a dietetics of vegetarianism. Confronted with racial
prejudice in South Africa, Gandhi set about trying to convert meat-eating boorish
school children into “civilized” vegetarians whose subsequent reverence for life and
compassion for living things would break down racial prejudice (Devanesan 1969,
321-22). A similar logic may be seen at work in his role as South African agent for
the London Vegetarian Society, and his professed “missionary zeal” toward the
introduction of vegetarianism in Natal so as to bring British Whites “closer” to
Indians (CW 1:87-89, 164-67, 180-86, 288).4

3It is also worth quoting Gandhi’s response to an “ignorant,” “virulent,” and “offensive”
racist attack against Asiatic morals written by a Western commentator whose “very civilization
… makes for ignorance, inasmuch as its exacting demands upon the frail physical frame render
it well-nigh impossible for any dweller therein to have any but the most superficial knowledge
of things in general” (CW 11:192-93).

4On a smaller scale the connection between morals, ethics, and health comes across clearly,
albeit inverted, when Gandhi, studying a book written by an American on eyesight disorders,


One may move directly from this point to a consideration of Gandhi’s advocacy
for universal celibacy, for he did not believe, simply, as Parekh suggests, that “a few
score brahmacharis like him would be capable of transforming the face of India” (1989,
181). Gandhi’s imagination-like that of Bureau and Hare-was at once much more
utopian and also much more pragmatic. He wanted nothing less than a nation of
sober celibates who would embody a new moral order, and not just a cadre of “great
souls” who might inspire contingent enthusiasm (1958, 112, 143). In a pivotal article
published in Young India in 1920, Gandhi-inspired, as he often was, by numerous
letters on the subject-pointed out that he must raise the issue of celibacy in public
at “this the most critical period of national life” (cf. CW 30:235). He then proceeds
to make a case for why it is necessary to make celibacy an integral part of national

We have more than an ordinary share of disease, famines and pauperism-even
starvation among millions. We are being ground down under slavery in such a subtle
manner that many of us refuse even to recognize it as such, and mistake our state as
one of progressive freedom in spite of the triple curse of economic, mental and moral

(1958, 70)

In Gandhi’s view there was an intimate connection between colonial economic and
military policy and health, since the former “reduced [India’s] capacity to withstand
disease” (1958, 71). Writing in the context of the debate over birth control (CW
26:299, 544), two statements by the Mahatma mark out the biosocial parameters of
reform, and clearly indicate the scope of his vision. Writing in 1906, he rails against
traditional Hindu family values:

We sing hymns of praise and thanks to God when a child is born of a boy father and
a girl mother! Could anything be more dreadful? Do we think that the world is going
to be saved by the countless swarms of such impotent children endlessly multiplying
in India and elsewhere?

(1958, 54)

In 1913, he targets rapid postpartum intercourse:

thanks to the prevailing ignorance about this state of affairs, a race of cowardly,
emasculated and spiritless creatures is coming into existence day by day. This is a
terrible thing indeed, and each one of us needs to work tirelessly to prevent it.

(CW 12:136)

Then, in 1920, he articulates, if one may adapt Hare’s title, a regenerative alternative
to kinetic sexual degeneration.

I have not the shadow of a doubt that married people, if they wished well of the
country and wanted to see India become a nation of strong and handsome well-formed
men and women, would practice self-restraint and cease to procreate for the time
being …. it is our duty for the present moment to suspend bringing forth heirs to
our slavery.

(1958, 73)

finds a “potent sentence” that reads “a lie heats the body and injures eyesight.” Gandhi
comments on this by saying, “[i]t is true if you would give an extended meaning to the term
‘lie’…. but the body is injured in every case” (CW 54:56). Hence, telling the truth is not
just right, it is essential to good health; un-truth is embodied.


In advocating this kind of radical abstention in order to build up “strength and
manliness” through a struggle against desire (cf. CW 33:433), Gandhi found an ally
in William R. Thurston, a major in the United States Army, who, “through personal
observation, data obtained from physicians, statistics of social hygiene, and medical
statistics,” showed that unrestrained sexual intercourse caused women to become
“highly nervous, prematurely aged, diseased, irritable, restless, discontented and
incapable of caring for [their] children … [and] … drain[ed] [men] of the vitality
necessary for earning a good living” (CW 37:305-7; cf. 315-17).5

Public Health

Without denying the contingent legitimacy of analyses which seek to rationalize
the Mahatma’s radical program in terms of a psychological reading of both biography
and culture, I think it is possible to better understand the implications of Gandhi’s
personal convictions by looking at his experiments in the context of colonialism’s
impact on subject bodies.

While more than receptive to Western “fads” such as vegetarianism and nature
cure, Gandhi was dogmatically critical of allopathic medicine and regarded
biomedicine as dangerous, in part because he saw it providing violent, symptomatic
cures for specific illnesses rather than holistic therapies to remedy poor health (CW
9:479; 11:435, 449; 12:51, 97; 19:357). Gandhi’s conviction was clearly apparent in
his criticism of the public health policy of smallpox vaccination (CW 12:110-12,
115-17; 30:356; 42:471).6 As he put it, rather caustically, in a letter to Maganlal

What service will an army of doctors render to the country? What great things are
they going to achieve by dissecting dead bodies, by killing animals, and by cramming

5Gandhi did not seem to have very much to say on the enervating effect of sex and
reproduction on women. In a letter to K. S. Karnath, however, he wrote “In the male the
sexual act is a giving up of vital energy every time. In the female that giving up commences
only with parturition” (CW 34:196). He did point out, however, that menstruation was a
period of time during which it was possible for women to regain strength. “A woman who
spends the period in the right manner gains fresh energy every month” (CW 54:388; cf.
55:210). Nevertheless Gandhi’s conception of the physiology of self-control was male by im-
plication, if not in fact. Although he clearly meant to include both men and women in his
program of moral reform-and made the point explicitly numerous times-only on relatively
few occasions did he make note of female celibacy per se (CW 50:423), and then mostly with
regard to widows (cf. CW 23:523; 33:47; 79:133). In a telling comment, when asked directly
about what the physiological differences between men and women might be with regard to
the kind of work people were asked to do, Gandhi pointed out, in effect, that the differences
were only skin deep. “Whatever differences you see can be seen, as it were, with the naked
eye…. Are these differences not plain enough to be clear to you?” (CW 50:256).

6It should be noted that although Gandhi was fairly strict in his resistance to vaccination,
he was, in other respects, a pragmatist. For example, when explaining to Akbarbhai Chavda
how to deal with an epidemic and treat people for diarrhoea and fever, he emphasized the
importance of natural therapy and hygiene, but wrote: “To meat-eaters you may unhesitatingly
give meat soup…. This is not the time for doing our religious duty of propagating vegetar-
ianism. Soup is bound to be useful where milk is not available” (CW 78:374).


worthless dicta for five or seven years? What will the country gain by the ability to
cure physical diseases? That will simply increase our attachment to the body.

(CW 10:206)

Gandhi was also critical, therefore, of a lifestyle which depended on medical
intervention-an undisciplined lifestyle of gastronomic excess in particular, but also
erratic habits in general which in his view caused illnesses (CW 4:373). In “General
Knowledge About Health,” published serially in 1913, he writes: “[O]ur subject is
not how to exist anyhow, but how to live, if possible, in perfect health” (CW 11:465).
As a system in tune with the natural order of things, nature cure came to be regarded
as a preemptive form of public health, in addition to being a science of healing.7
Though Gandhi claimed that he himself had never had “the time to make a systematic study
of the science [of nature cure]” (CW 55:98), his expansive, and expanding, ideal for
public health was reflected in his “recruitment” of Hiralal Sharma, a nature-cure
physician, to work at the ashram in the early 1930s. Writing to Dr. Sharma, he said:

I would like to find in you a kindred spirit given up wholly to truthful research
without any mental reservations. And if I can get such a man with also a belief in
the Ashram ideals, I would regard it as a great event….

I would ask you, therefore, to approach the Ashram with the set purpose of
discovering the means of preserving or regaining health in the ordinary Indian

(CW 54:292)

In his recent book Colonizing the Body: State Medicine and Epidemic Disease in
Nineteenth-Century India (1993), David Arnold makes passing reference to Gandhi’s
radical position on health care, correctly situating his criticism of Western medicine
in the context of colonial hegemony (1993, 285-88). Arnold makes the important
point that Gandhi understood the connection between medical systems and political
freedom in terms of a scientific discourse about subject bodies, and not just as a
struggle for control of health care policy as such. However, Arnold does not take full
account of the fact that Gandhi’s explicit “ideological” critique of hospitals and
doctors emerged from a pragmatic, scientific sociobiology, the scope and procedure
of which was itself implicated in the colonial project at large. Even though doctors
were often “the main target of Gandhi’s attack” (Arnold 1993, 287), as we have seen
he was not particularly sympathetic toward a population who collectively ate and
drank its way into hospitals and into a kind of slavery which extended beyond
medicine. The question, then, is what was Gandhi’s alternative to both traditional
and modern medicine in the context of colonial public health? I think it is fair to
assume that Gandhi’s response had to be on par with the degree of medicine’s somatic
penetration, its increasing pervasiveness in the empire, and the degree of biomoral
degeneracy in India; the response had to be worked out in terms of a discrete, modern,
scientific sociobiology. This is, in effect, what the Key to Health is about.

Since the key to health was nature cure, Gandhi was profoundly skeptical of
traditional ayurveda (CW 19:358) because it placed the agency of healing outside the
reach of every man; because it had become an elite, upper-caste urban system of

7Although Gandhi’s own experiments were conducted mostly on himself while living in
ashrams, he was at various times and to various degrees under the care of Dr. Dinshaw Mehta.
In 1944 Gandhi encouraged Dr. Mehta to establish a nature-cure clinic with inpatient facilities
on the same terms for rich and poor alike (CW 77:335-36; 78:34-36).


medicinal healing (CW 11:434; 26:388-89; 27:222-23; 35:458); and because, as he
put it rather cryptically to the physician Vallabhram Vaidya, “Ayurveda has not yet
become a science. In a science there is always room for progress. Where is any progress
here?” (CW 76:257).8 What Gandhi wanted, above all, was a system of health care
which was eminently “public” in the somewhat new way in which that term had
come to signify the homespun nation as a rural whole.9

In many ways Key to Health anticipates McKim Marriott’s inspired analysis of
India through Hindu categories (1990). As with Marriott’s cubic scheme of elemental
integrated property flow, Gandhi was primarily concerned with the balanced
integration of the five elements: earth, water, ether, fire (or sun), and air (1948, 30-
45). It is ironic, however, that Gandhi’s scientific theory of healing was not derived
from Hindu therapeutics at all, but-with yet another refreshingly Occidentalist
reading of the West-from Juste’s naturopathy and Kuhne’s hydrotherapy (CW
11:493; 12:73-75, 79-81). It is worth looking in some detail at what he said about
the healing properties of various elements in the context of his ideas about public
health and nonviolence.

Gandhi advocated the use of an earth poultice to cure snake bites, headaches,
constipation, boils, skin rashes, and typhoid fever (CW 35:447, 450, 460). He went
into considerable detail on how to prepare a poultice. The cloth had to be sterilized,
of certain dimensions, and of a fine, soft weave. The earth itself could be neither sticky
nor gritty; it could not come from a manured field. The best earth was fine-grained
alluvial clay and had to be sterilized by heating. It could be used again and again.
Earth could also be eaten in order to relieve constipation. The dosage was small,
however, and Gandhi cautioned that his advocacy was based on Juste’s claims and not
on personal experience (1948, 33).

Quite apart from the symbolic meaning of earth in Hindu cosmology-and earth
in ayurvedic pharmacology-is the fact that in his serial essay “General Knowledge
About Health” (CW 12:79-81) Gandhi developed a rational justification for earth’s
applied use as a grassroots therapy for self-healing. Earth therapy was a home remedy
as intrinsically natural-and, I would argue, as inherently important to him and his

8After about 1925 or so, Gandhi’s sharply critical perspective on ayurveda seems to have
been tempered somewhat (CW 30:12; 33:290; 34:199; 72:275; 76:201). In 1944 he also
decided to try allopathic medicine on himself for the treatment of hook-worm and amoeba
(CW 77:295-96, 400), but due to a very adverse reaction, he went immediately back to nature
cure (CW 77:400; 79:1, 2, 6, 9, 17, 42). In the same year he reflected on his own health
saying “I wish I could have faith in homeopathy and biochemic medicine, but I do not” (CW
77:295; cf. 54:305, 431). Earlier he pointed out: “Personally I would prefer homeopathy any
day to allopathy. Only I have no personal experience of its efficacy” (CW 55:56). Even so, in
the early 1930s, Gandhi seems to have changed his perspective somewhat and spoke of sub-
suming allopathy within the purview of nature cure (CW 54:306). But then in 1945 he wrote,
somewhat ambivalently, to G. D. Birla, “I have not got myself involved in ayurveda in an
unscientific way. Such as it is it is all we have. It would be well if we could take ayurveda to
the villages” (CW 79:16-17).

9For example, following on his criticism of Vallabhram Vaidya, Gandhi wrote:

Identifying of plants for its own sake is not part of dharma. Therefore, render what
service you can through such knowledge…. You should show, if you can, that
indigenous medicine is simple, inexpensive and capable of giving relief to 99 patients
out of a hundred. If you feel that this cannot be done, then you should give up the

(CW 76:161-62)


national ideals-as the principle of self-government, self-reliance, and home rule.
Swadeshi, as he pointed out, in the larger context of serious debates about the diet of
incarcerated satyagrahis in South African prisons (CW 8:121, 155; cf. 12:239), “means
a reliance on our own strength … the strength of our body, our mind and our soul”
(CW 9:118; 30:15).

Although it may seem surprising, Gandhi regarded air as the most important
element in the natural pharmacopoeia (CW 11:454; 12:62-63), as more important
to health and strength than either food or water. Writing in 1913, he made an explicit
correlation between the need for fresh air and the development of good Indian
character in South Africa (CW 11:430, 465). On the question of breathing, he placed
greatest emphasis on the fact that air had to be fresh and taken in through the nose
(CW 12:62-63).10 Inasmuch as possible, he argued, one should live and work in a
well-ventilated environment where poisonous atmospheric gases get dissipated (CW
11:131). Because air comes in close contact with blood in the lungs, one must learn
the “art of breathing” through the nose since this both filters and warms the air before
it comes into contact with blood. “It follows,” Gandhi pointed out, “that those who
do not know how to breathe should take breathing exercises” (1948, 4). By this he
meant basic, simplified procedures of yogic pranayama, “which are as easy to learn as
they are useful” (1948, 4; CW 11:449; cf. 30:551; 31:188, 353). These exercises,
rather than gross muscle building, he pointed out, were responsible for creating the
kind of expansive physique of men like Sandow, the famous British physical culturist
(CW 11:464), the “natural” physique of robust Zulus (CW 29:12), and, closer to
home, the “Herculean” physique of Professor Ram Murti Naidu (CW 28:181). It is
significant, in this regard, that the critical point of exercise was not to build up
strength per se, but to stimulate normal breathing and establish control over the
senses. Whereas organized sports-and wrestling in particular-were regarded as
somewhat contrived and frankly excessive (CW 12:22, 23; 34:99), agricultural work,
manual labor, and walking were considered to be highly efficacious (CW 11:131;
12:23, 24-25; cf. 33:378) as “work for the sake of the body” (CW 32:211), which
helped in the development of brahmacharya (CW 32:159).11 In other words, health
was quite different from strength (CW 24:117), and it was to the end of better health
and greater self-control that he admonished children in the ashram to breathe plenty
of fresh air, practice pranayama, and get regular, moderate exercise (CW 54:435).
Responding to questions on the place of celibacy in education, he said “though a body
that has been developed without brahmacharya may well become strong, it can never
become completely healthy from the medical point of view” (CW 36:457). Speaking
to the members of the Rashtriya Yuvak Sangh (National Association of Youth) in
1942, and half-jokingly accusing the young boys present of having bodies like his,
“completely devoid of muscle,” Gandhi put it this way:

10In advising Balbhadra on how to exercise, Gandhi wrote, for example, “[w]hen you go
out for a walk, run for some time. While you do so, keep your mouth shut and breathe through
nostrils” (CW 44:245).

11Gandhi’s notion of bread labor in fact brings the issue of diet, exercise, and celibacy into
a single, unified frame of reference. “Bread labour [which in its pure form is agricultural work
alone] is a veritable blessing to one who would observe non-violence, worship Truth, and make
the observance of brahmacharya a natural act” (CW 44:150). “The law of bread labour [is] that
that man [is] entitled to bread who worked for it…. and if this was literally followed there
would be very little illness on earth and little of hideous surroundings on earth” (CW 48:415).
Speaking on the ideal of agricultural labor, he pointed out that “[i]f a farmer so desires, he
can with the slightest effort become a yogi” (CW 42:131).


Try to follow my ideals as far as you can. For that we should have a good physique.
We have to build up our muscles by regular exercise. But that should not be done
to indulge in violence. To become a Sandow is not our ideal…. Our ideal is to
become tough labourers, and our exercises should be toward that end.

(CW 76:158)

It is interesting to note that although Gandhi was not in the least concerned with
“brute strength” and did, in fact, juxtapose the concept of soul force with physical
might (CW 18:58; 19:285; 40:271; 71:72; 74:82; 75:258), he became, particularly
after 1918, increasingly critical of the effete, passive, impotent nonviolence of religious
“sentimentalism,” and aware of the need to define “militant” nonviolence in terms of
“manliness,” “virility,” and “a strong physique” (CW 18:505; 24:118).12
Inaugurating a modern school of Indian physical education in Amravati, Gandhi
wrote: “I have travelled all over the country and one of the most deplorable things I
have noticed is the rickety bodies of young men” (CW 32:444). The moral work of
nonviolent reform, he said more than once, required “bodies of steel” (CW 15:55;
26:143; 76:76) and not “feeble physiques” (CW 12:24; 32:444). Thus, breathing was
of critical importance in effecting proactive, nonviolent self-control (CW 22:392;
31:67), and was far more important than, and indeed a subtle alternative to, the kind
of gross, “might-is-right” physical strength which he felt was being developed in
some regional gymnasiums (CW 24:529; 25:135; 26:144). He made this point a
number of times when inaugurating gymnasiums (CW 34:411; 71:135), and
installing the stalwart image of Lord Hanuman therein, by drawing cautionary
attention to the fact that the patron deity’s physical strength was primarily a
manifestation of his devotion to Ram and a derivative consequence of celibacy, not
an end in itself. As to the ends, he wrote in 1927: “May you therefore be like
[Hanuman] of matchless valour born out of your brahmacharya and may that valour
be dedicated to the service of the Mother Land” (CW 33:142). In 1928 he reiterated
this theme, emphasizing Hanuman’s association with the wind, saying, “[w]e
therefore worship Hanuman and instal him in gymnasiums because though we do
physical exercise, we are going to become servants-servants of India, servants of the world,
and through these means, servants of God” (CW 36:182, my emphasis).

Since the specific mechanics of breathing fresh air-of pranayama-were integral
to building “matchless valor,” it followed that in his discussion of less vigorous public
health regimens Gandhi advocated sleeping naked under a sheet-and a blanket in
cold weather-outdoors. In no case was one to cover one’s head while sleeping. If
one’s head got cold, a separate covering could be worn, but in no case was the nose
to be covered so as to avoid breathing stale, contaminated air (1948, 5).

Along with his advocacy for simplified pranayama, Gandhi placed great faith in
Kuhne’s method of hydrotherapy (CW 12:67-75; cf. 50:381; 54:32, 228) and claimed
that Kuhne’s book on naturopathy had been translated into a number of Indian

12Lest there be any doubt, Gandhi never regarded the body as an object of beauty or
something upon which to place any positive moral value. In the Key to Health (1948) and
“General Knowledge About Health” ([1913] 1958-80) the body is described with
unambiguous loathing as a bag of filth and the incarnation of hell, among other things (CW
12:166). As Gandhi pointed out in a speech to the All-India Teachers Training Camp in 1944,
“[p]hysically [man] is a contemptible worm” (CW 78:321). For Gandhi the body was simply
a tool: a very useful and valuable tool which could be “used for its own destruction” (CW
34:543), in order to achieve greater things. To do this, however, it had to be kept strong,
healthy, and under control.


languages and enjoyed great popularity in Andhra (1948, 33).13 He felt, based on a
“fairly large scale” experimental population of at least 100 patients, that hip baths
proved very effective in treating constipation, hyperpyrexia, and general fever (CW
12:97-99). As he did with earth and air therapies, Gandhi went into considerable
detail on the elemental mechanics and constituent properties of water therapy:
temperature of water, how to get the water to the right temperature, depth of water,
position of tub, length of time to sit in the tub, how to position one’s feet outside
the tub, and the necessity of keeping the extremities warm while the hips were
submerged. All of these rules are based on scientific experimentation and rational
proof (CW 12:73-74). Most significant, however, in the context of the present
discussion, is what Gandhi had to say about Kuhne’s advocacy for the sitz or friction
bath:

The organ of reproduction is one of the most sensitive parts of the body. There is
something illusive about the sensitiveness of the glans penis and the foreskin.
Anyway, I know not how to describe it. Kuhne has made use of this knowledge for
therapeutic purposes. He advises application of gentle friction to the outer end of
external sexual organ by means of a soft wet piece of cloth, while cold water is being
poured. In the case of the male, the glans penis should be covered with the foreskin
before applying friction…. This friction should never cause pain. On the contrary
the patient should find it pleasant and feel rested and peaceful at the end of the
bath…. Insistence on keeping the sexual organ clean and patiently following the
treatment outlined above will make the observance of brahmacharya comparatively
easy.

(1948, 36)

As in the case of fresh air therapy, the relationship between maintaining good
health and celibacy is clearly articulated here. It is also important to note, however,
that in Gandhi’s view not only health, as such, but common ailments like constipation
in particular (CW 12:103) were directly linked to the physiology of sensual arousal.
Moreover, this logic worked both ways, making it possible for Gandhi to attribute his
bouts with pleurisy, dysentery, and appendicitis to “imperfect celibacy” (CW 24:117).
Even though Gandhi expressed some ambivalence about expounding the purely health
benefits of brahmacharya (CW 12:45; 22:391-92; 36:456-57), and most certainly did
not regard it as just physical (CW 22:43; cf. 10:205; 50:211-12), it is clear that good
health was a necessary condition for self-control (CW 80:62) and that there was
scientific justification for this argument (CW 26:449; 31:353) as well as empirical
evidence by way of references to bodily labor, pranayama and brahmacharya in the Gita
(CW 32:150, 159, 242). Writing to Krishnachandra in 1940 he pointed out that
“brahmacharya and ahimsa would have no meaning in absence of the body” (CW

13Most likely Gandhi had in mind the work of Hanumanthrao, a reformer who was “trying
to popularize [nature cure] in villages” (CW 30:171).

14In a letter to Darbari Sadhu, Gandhi explained in some detail why it was important to
embody celibacy in terms of action, even while recognizing that brahmacharya itself was a
transcendental ideal:

Just as man dissipates his physical strength through ordinary incontinence, so he
dissipates his mental strength through mental incontinence, and, as physical weakness
affects the mind, so mental weakness affects the body…. You seem to believe in
the heart of your hearts that physical activity prevents or hinders us from watching
the progress of our inward purification. My experience is the opposite of this.

(CW 50:410)


Recognizing the integral relationship between celibacy, nonviolence, and health,
Gandhi turned, in conjunction with pranayama, to yoga asans as “a possible cure [for]
the evil habit of self-abuse among students” (CW 33:243; cf. CW 32:242-43, 244,
245, 268-69). Primarily through correspondence with S. D. Satavalekar (CW 33:215,
223, 236-37; 34:42) and Swami Kuvalyananda (CW 31:427; 33:454, 484; 34:16-
17, 69, 71-72, 100, 250; 36:472), two experts in the field, he experimented on the
health value of various exercises. He came to the conclusion that although yoga was
not a panacea (and could, in fact, be harmful if all it succeeded in doing was check
the flow of semen and not develop strength to resist the violence of desire [CW
33:215]), it proved effective as a regimen of general fitness (CW 52:82; cf. 73:359),
was helpful in treating some diseases (CW 10:317; 52:208), and was a practical means
by which young men in particular could exercise self-control (CW 30:193; 54:435).
Gandhi also came to regard it as a nonviolent means of physical training which would
enable satyagrahis to tolerate extreme cold and heat, stand guard for hours, withstand
beatings, and nurse others (CW 73:67-69).

Significantly, as Gandhi’s experience with yoga evolved and his understanding of
ahimsa developed, he became more convinced that nonyogic forms of physical fitness
were at least incipiently violent. Writing to Prithvi Singh, a reformed revolutionary
who founded the Ahimsak Vyayam Sangh (Nonviolent Exercise Association) (CW
71:98; 72:276, 371-72; 73:235; 74:43) in order to “train strong and vigorous bodies
for nonviolence,” Gandhi pointed out that for a satyagrahi who was willing to die for
what he believed “there [would] be no need for exercise or any other kind of training.
The training in exercise is for those who have not freed themselves from fear” (CW
74:82; cf. 72:328). Yet Gandhi, at other times, seems to have had an underlying
admiration for certain aspects of regimented training, as in his commendation of
Manikrao’s program of mass drill exercises, and his translation of P.T. (physical
training) drill terminology into Gujarati (CW 71:54).15 In light of Gandhi’s rather
ambivalent attitude toward physical fitness, it is noteworthy that, quite apart from
the utilitarian exigencies of “bread labor” and manual work mentioned above, he
seemed to regard spinning as a kind of pure, practical, productive form of exercise
which was described, at least in one instance, as involving drill-like regimentation
and self-control (CW 25:189). He also prescribed spinning as a form of therapy for
young men who found it difficult to abide by brahmacharya (CW 34:372-74; 35:414).
In a letter to Harjivan Kotak in 1927 he wrote with telling, but uncharacteristic,
hyperbole: “Fix your thoughts exclusively on khadi; countless men may be wedded to
her and yet she always remains a virgin. And a man who takes her alone as a wife will
still be an inviolate brahmachari” (CW 35:325).

In keeping with his approach to celibacy, vegetarianism, yoga, and spinning,
Gandhi’s discussion of mud packs, hip baths, friction baths, and bed clothes is worked
out as a rational science of moral health. It is precisely the scientific, experimental
nature of naturopathy which makes it possible for Gandhi to develop a rational plan
for health which is, at once, national-and, indeed, transnational-and also strictly
self-oriented. It is a plan which works on the logic of what might be called a sociology
of individual increments, and elemental configurations, in which the geopolitical state
of the nation gets reimagined one patient at a time.

As with the individual, so with society. A village is but a group of individuals and

15During imprisonment in South Africa Gandhi also reflected on the health value of
regimented exercise for prisoners (CW 8:159-60; cf. 9:203).


the world, as I see it, is one vast village and mankind one family. The various functions
in the human body have their parallel in the corporate life of society. What I have
said about the inner and outer cleanliness of the individual, therefore, applies to the
whole society.

(CW 78:320-21)

This perspective is reflected nowhere more clearly than in Gandhi’s discussion of food
and of food’s intimate relationship to self-control. As he pointed out numerous times,
controlling one’s palate was intimately associated with controlling desire (CW 40:67;
44:79-81; cf. 50:209; 54:213) and (standard vegetarianism aside) a moderate,
unspiced, minimally cooked, and quickly prepared meal of simple, unprocessed,
natural food was the dietary basis for brahmacharya (CW 15:46; 34:92; 35:394), and
probably the single most important variable in redefining the scope of public health.16
As he put it, reflecting on the nation as a whole from the perspective of his diet in
jail in the early 1930s, “I am convinced that if we plan our diet on a scientific basis
and eat moderately, nobody would fall ill” (CW 52:36). Writing only a few days later
he pointed out that “[t]hose who understand the value of self-control will find nothing
but interest in the experiments about diet” (CW 52:&3).17 Characteristically, Gandhi
paid close attention to detail in his discussion of food, pointing out, for example, that
because people tend to use bread to sop up lentil gravy they get lazy about proper
mastication. Since the digestion of starch begins in the mouth, he argued, starches
should be eaten dry so as to ensure vigorous mastication and the proper flow of saliva
(1948, 11).18

Gandhi’s elaborate treatment of dietetics has been collected in a volume entitled

16For example, in reply to a question posed by one of the ashram inmates regarding the
time it should take to eat a meal, Gandhi wrote: “Ordinarily, twenty or thirty minutes should
be regarded enough for those who have good teeth and who eat rotis, dal, rice and vegetables”
(CW 50:17-23).

17In a striking example of the extent to which food production, consumption and
elimination were all equal parts of Gandhi’s project, an extract from a letter to Raojibhai M.
Patel is noteworthy.

I have often explained that care of the latrine and of the kitchen are aspects of the
same task. If either of them is imperfectly done, bodily health would suffer. I have
also shown that scavenging and cooking involve important moral and scientific
principles. A cook doing his or her duty religiously will not only cook the food well
but will also observe the principles of good health, that is, of brahmacharya. And a
scavenger doing his or her duty religiously will not merely bury the night-soil but
also observe the stools passed by each and inform each person about the state of his
or her health. We have with us neither such an ideal scavenger nor such an ideal
cook, but I have no doubt that the Ashram should produce a crop of them.

(CW 42:103; see also 12:6; 29:415; 78:320; 79:158)

Clearly these ideas were also directly linked to concerns with sanitation and public hygiene
(cf. CW 11:428, 469; 73:378; 75:156).

18The extent to which Gandhi was concerned with the precise details of health was not
limited simply to diet, nature cure, yoga, and other more or less clearly defined domains of
health. He responded to a question from Chhotubhai Patel on dental hygiene, for example, by
saying: “One should not brush the teeth with a babul stick after a meal, but one must clean
them with a finger and gargle well” (CW 54:3).

Similarly, he gave elaborate advice (almost 600 words’ worth in one case) on how to
wash clothes in order to get them perfectly clean (CW 54:382).


Diet and Diet Reform ([1949] 1987), published posthumously by the Navajivan Trust.
Reading through this volume, which puts together Gandhi’s own writings on the
subject with those of various correspondents to Young India and Harijan between 1929
and 1946, one gets a clear sense that Gandhi was looking for the key to national
nutritional health by reprinting, for example, the League of Nations Health
Committee report on minimal daily requirements of energy, fat, protein, minerals
and vitamins ([1936] 1949, 94-96); evaluating the health value of vegetable lard and
olive oil in relation to pure ghee ([1946] 1949, 76-78; CW 30:332, 466); criticizing
the adulteration of the latter with the former ([1946] 1949, 79); commenting on the
pros and cons of skimmed milk in the context of selling whole milk diluted with
water ([1940] 1949, 68-69); republishing favorable reports on the nutritional
composition of peanuts ([1935] 1949, 56-57), neem leaves, tamarind, lemon seeds,
guavas, and mangos ([1935] [1946] 1949, 61-64), rice, wheat and gur ([1934] 1949,
43-45); and criticizing the practice of polishing rice ([1935] 1949, 45), the
displacement of gur with refined sugar ([1935] 1949, 47), the use of commercialized
buffalo milk ([1935] 1949, 67-68), among numerous other things (see CW 11:493-

I think it is clear that Gandhi was in search of a reformed national diet that would
be “regulated scientifically” (CW 75:411), such that “[e]veryone would get pure milk,
pure ghee, sufficient fruit and vegetables” (CW 75:6). In an article entitled “National
Food,” he lamented the fact that Tamils, Gujaratis, Bengalis, and Andhrans did not
take to each others’ “mode of cooking,” but concluded that

it is necessary, therefore, for national workers to study the foods and methods of
preparing them in the various provinces and discover common, simple and cheap
dishes which all can take without upsetting the digestive tract…. What can be and
should be aimed at are common dishes for common people.

([1934] 1949, 28-29)

Although Gandhi was well aware of the problems inherent in putting a radical program
of diet reform into practice on a national scale, I think the underlying logic of his
utopian vision of public health points toward a daily minimal requirement which is
also the optimum of collective national strength. Thus in 1935 Gandhi published the
findings of Dr. Aykroyd, the Director of Nutritional Research at Coonoor, who
claimed that a well-balanced diet need not cost any more than 2 annas per day, or
Rs. 4/- per month: sixteen ounces of soya bean, six ounces of buttermilk, two ounces
of arhar dal, an ounce of jaggery, and so forth, in smaller and smaller increments, of
spinach, amaranth, potatoes, colacasia, and coconut oil, thus tabulating a perfect, cost-
effective, simple diet.

Colonialism, Science and National Health

With only a slight shift in perspective one might rightly conclude that Gandhi
was not so much obsessed by sex and food as by a discourse of science which allowed
sex and food to become social, moral, and political facts of life, as well as biological
ones. As I have shown, time and time again one reads in this regard, not about
hermeneutics or philosophy, but about experiments and the attendant authority of
objective science as a way of knowing the Truth about society through self-
examination. Science in general, but experimentation in particular, was a peculiar


discourse in the transnational context of late imperialism, but Gandhi was clearly
convinced that detached, compassionate objectivity provided the means by which to
get at Truth (CW 1:82-86).19 Reflecting on the utopian scope of possible research on
health from the vantage point of his own limitations, he wrote, “if those who have
independent experience and have some scientific training would conduct experiments
in order to find physical and spiritual values of different fruits, they would no doubt
render service in a field which is capable of limitless exploration” (CW 34:185).

Any number of clear examples of Gandhi’s meticulous, scientific logic may be
found with regard to the relative merits of pulses, eggs, salt, brown bread, fruits and
nuts, for example (CW 11:493, 502; 14:170; cf. 12:424-25; 33:379; 34:120-21;
35:479-80), but writing on the subject of fasts in 1924 (CW 29:315-19; cf. 34:185;
36:158), and reflecting on the nature of his search for Truth, Gandhi remarked that
“[l]ife is but an endless series of experiments … in my experimentation I must involve
the whole of my kind….” (CW 25:199).20

For Gandhi science was convincing, at least in part, because of the degree to
which it made possible a means for rethinking the problem of social scale (impoverished
villages, racist schoolboys, endemic promiscuity, and overpopulation) outside a
framework of tradition or history. Science was a means by which to translate
the traditional roots of charisma, as well as experiments with Truth, into modern
public health for “the whole of [his] kind.” I say modern public health, rather than
enigmatic faddism, for it is hard to imagine Gandhi’s agenda for national moral reform
as simply anachronistic, given his virtual disregard for the authority of anything, God
notwithstanding, other than direct, contemporary personal experience-systematic,
trial-and-error, ashram-as-laboratory, publish-your-results, empirical experience
which could locate Truth precisely outside the murky interstices of modernity and
tradition (CW 35:305).

In this regard it is important to consider Gandhi’s rather unique attitude toward
famine relief in India, clearly an issue which involved food, but less in terms of gross
nutrition for the mass of starving aid-recipients than in terms of the particular
biomoral agency of food and food transactions in the nationalist struggle. Writing for
the Indian Opinion in March 1908, Gandhi placed the blame for the Indian famine

19Although linked, indubitably, with Europe-and to an intellectual legacy going back
to the Enlightenment and beyond-science derives modern authority from its putative
objectivity. Its position, as a way of knowing, lies outside of ideology, history and culture; it
is putatively immune, that is, from hegemony, while still clearly implicated in a complex
genealogy of power/knowledge. While working to “escape” both tradition and modernity,
Gandhi could not avoid the entanglement of this genealogy.

20As he put it in a speech at the opening of Tibbi medical college (somewhat self-
consciously, given his view on institutionalized medicine), “I would like to pay my humble
tribute to the spirit of research that fires the modern scientists” (CW 19:357-58; cf. 26:299).
And again, speaking to students at the inauguration of the Khadi Vidyalaya in 1941, he said,
“[m]ake your mind and intellect scientific, so that you … will search for new things for the
betterment of the country” (CW 74:203). An analogous point is made in a letter to Jamnadas
Gandhi, while reflecting on why it might be necessary to go against conventional wisdom,
tradition and religious teaching and give up milk. “The best test is this: Does the thing appeal
to reason, leaving aside the question whether or not it was considered in the past” (CW 12:147;
cf 15:32, 43, 46, 71, 74; 28:240).

Appealing to reason, and reflecting on the relative merits of various dairy products, Gandhi
responded to a simple query by D. B. Kalelkar by saying, “as regards [the question of] cow’s
milk, I want to write not a letter, but a book for you” (CW 30:333; 74:247, 360-61; cf.
55:210, 214).


“with us [the Indians], our chief fault being that we have very little truth in us” (CW
8:157). He then chastised the Natal Indians for their “habit of deceitfulness” and
general lack of honesty. Following on this he built up to his main point.

Some readers may wonder what the connection is between fraudulent practice … on
the one hand and famine on the other. That we do not perceive this connection is in
itself an error.

Our examples [of deceit and corruption] are only symptoms of a chronic disease
within us…. It would be a great and true help indeed if, instead of sending money
from here or being useful in some other way, we reformed ourselves and learnt to be
truthful…. [G]ood or bad actions of individuals have a corresponding effect on a
whole people.

(CW 8:157)

And so, based on his faith in the efficacy of naturopathy in particular, and what
might be called the political demography of public health and self-discipline, Gandhi
pointed out that it was only possible to treat a disease of the body politic by first
healing oneself. “How can we help?” he asked rhetorically in response to reports of
the 1911 Indian famine where “hundreds of thousands” were starving in Gujarat.
“The first way is to restrain our luxurious ways, our pretensions, our pride and our
sharp practices and crave God’s forgiveness for the sins we have committed” (CW
11:182). Saving, collecting, and then sending money to the famine stricken was
obviously a pragmatic issue here, as was faith in God, but underlying both of these,
I would argue, was Gandhi’s growing awareness of how, to put it graphically, middle-
class constipation in the Transvaal, and the systematic cure and prevention of
constipation among other things (CW 12:102-4), was part and parcel of famine relief
in Gujarat. This kind of thinking on the biomoral imperative of public life and
personal health is clearly apparent when, in early 1921, Gandhi reflected on the
relationship between the consumption of liquor and mass starvation as both were
implicated in “self-purification” and noncooperation.

To become one people means that the thirty crores must become one family. To be
one nation means believing that, when a single Indian dies of starvation, all of us are
dying of it and acting accordingly. The best way of doing this is for every person to
take under his charge the people in his immediate neighbourhood.

(CW 19:285)

Seeing here the precise intersection of science, public health, and moral political
action, brings us directly to the question of satyagraha, militant nonviolence.

Gandhi himself was not completely satisfied that either the indigenous term or
its somewhat oxymoronic English gloss really captured the essence of his experimental
program. For this reason he disavowed the gloss “passive resistance” as denoting
narrow-minded weakness and violence by default. The term satyagraha was Gandhi’s
invention based on the conjunction of two words: satya, meaning truth, and agraha
meaning firmness. The connection between satyagraha and brahmacharya is critical,
and in his autobiography Gandhi pointed out that the latter provided the means-
the only means-for realizing the former.

Writing in his autobiography, Gandhi locates a discovery of celibacy’s power in
the midst of the Zulu rebellion’s violent face-to-face carnage, and his role as sergeant
major in a voluntary ambulance corps assigned to provide medical services to the
wounded rebels. These “rebels” were not so much soldiers, or even men who had been
wounded in battle, as peasant sympathizers who had been either taken prisoner and


whipped on grounds of suspicion or else those who were “friendly” but had been shot
by accident. These were men upon whom the violence of empire did not impinge
even with the dubious legitimacy of war. The violence took shape as a result of what
Gandhi characterizes as the more vivid horror of a hamlet-to-hamlet “man-hunt”
(1957, 315), “where there was no resistance that one could see” (1957, 314). For most
of the Zulus who needed treatment the main problem was not in the magnitude of
their injuries, but in the convergence of violence and racism. Through passive neglect
Zulu wounds were left to fester, and Gandhi’s task, as an Indian, was to nurse the
“rebels” back to health-to heal wounds inflicted by mistake in a much larger context
where violence was unmistakably embodied. Walking across the “solemn solitude”
of the sparsely populated veld, following on the tracks of mounted infantry from one
danger spot to another, Gandhi came to the following conclusion about celibacy:

I clearly saw that one aspiring to serve humanity with his whole soul could not do
without it. It was borne in upon me that I should have more and more occasions for
service of the kind I was rendering, and that I should find myself unequal to the task
if I were engaged in the pleasures of family life and in the propagation and rearing
of children….

Without the observance of brahmacharya service of the family would be inconsistent
with service of the community. With brahmacharya they would be perfectly
consistent.

(1957, 316)

While it may at first glance seem incongruous, I think it is perfectly logical for
Gandhi to discover the anatomy of militant nonviolence in a context where the
brutality of Empire took such unspeakable form. One is apt to lose touch with the
sheer physicality of violence, as it were, and become numb to the working of terror
and hate when it is not experienced first hand. Celibacy emerged, therefore, as the
only possible response to the horror of violence which Gandhi saw because it cross-
cut that problematic space between ideology and biology which terror so clearly
brought to light.

Aside from the fact that one can see, in countless letters written throughout his
life, Gandhi’s virtual obsession with hands-on healing, with the alleviation of
suffering, that is (CW 11:351-56; 12:146, 269-71; 28:78-79; 29:199; 30:99; 31:341,
394; 52:106; 80:160, 330-31),21 the relationship between violence and health becomes
all the more apparent when looking at Gandhi’s later life.22 Amidst a rising tide of
communal violence in Noakhali and elsewhere, and the impending partition of India
and Pakistan, Gandhi often put himself directly into situations of violent
confrontation. In part he sought to bring his charisma to bear in order to restore peace,
and, as Parekh has pointed out, the events in Noakhali in particular resulted in a
great deal of soul-searching on Gandhi’s part. His response was characteristic, for in
searching to find out why “the spell of ahimsa” was not working, why Hindus and
Muslims continued to kill one another, Gandhi experimented by sleeping naked with

21Gandhi was quite aware of his enigmatic reputation, and in a letter to the ailing
Prabhashankar Pattani referred to himself (tongue in cheek, while making reference to the
yogic physician Swami Kuvalyananda and to an anonymous water therapist) “as … also …
one of [those] quacks” (CW 37:364).

22Speaking on the subject of nonviolence and healing others, Gandhi pointed out that it
would be one’s moral obligation, and an exercise in selfless service, to nurse anyone, even the
likes of General Dyer, back to health (CW 19:179).


his grandniece in order to test the full extent of his self-control, to both discover
and deploy the ultimate power of nonviolence (CW 79:212-13, 215, 222, 238).
Writing after a silence of three months in the first of a series of five articles on
brahmacharya, published in the Harijan between June 8 and July 27, 1947, Gandhi
explains himself: “To resume writing for the Harijan under these adverse conditions
would be ordinarily considered madness. But what appears unpractical from the
ordinary standpoint is feasible under divine guidance. I believe I dance to the divine
tune. If this is delusion, I treasure it” ([1947] 1958, 165). But then, rather than talk
of God and divinity in the face of impending violence, Gandhi confines himself to a
topic “of eternal value”: celibacy. Paraphrasing the Gita, he describes at length the
character traits of a perfect celibate: health, longevity, tirelessness, brightness,
steadfastness, and neatness ([1947] 1958, 166); a description which is articulated, in
the earlier epic, on the battlefield just before another terrible war of brother against
brother.

Like the Zulu rebellion, the violence leading up to the partition of India and
Pakistan demanded, in Gandhi’s view, an embodied response which would
substantiate, and not just theorize or even operationalize, Truth. But what comes out
most strongly in the five essays on brahmacharya is not the insight they provide into
the soul of a great man, but their public character as discourse on national health. In
writing about sex amidst violence on a horrible, subcontinental scale, Gandhi was not
just trying to find the power within himself to make a difference; he was articulating
the means by which national health could be achieved in the same enigmatically
personal terms in which violence itself was manifest. Just as society was degenerating
into the mindless brutality of communal rage, of neighbor killing neighbor, so the
nation had to be regenerated one person at a time. One drop of vital fluid conserved,
one might even say (in terms of hydraulic ratios and metaphors of opposing flow),
for every sixty drops of blood spilled. Writing two months before Independence, and
one week after spelling out eleven modes of integral discipline to “conserve and
sublimate the vital fluid, … one drop of which has the potential to bring into being
a human life,” Gandhi put it this way: “The first thing is to know what true
brahmacharya is, then to realize its value and lastly to try and cultivate this priceless
virtue. I hold that true service of the country demands this observance” ([1947] 1958,
166; see also CW 35:305). And this, keeping in mind the complex metaphors of
production, reproduction, suffering, and public service, was very hard work: “A man
striving for success in brahmacharya suffers pain as a woman does in labor” (CW

I do not think, in expressing these views (or the seemingly fantastic, parallel
sentiment that a nation becomes immortal when the death of any one is felt by every
one as the loss of an only son; CW 16:230), that Gandhi was so much a prisoner of
some kind of transcendent, humanistic hope, as a unique product of imperial times,
times in which the nature of hope itself-and humanity as such-needed to be recast.
Looking toward the future of India, Gandhi escaped from the iron cage of rationality
and blind faith into a science of his own creation which held out the possibility, at
least, for public health to have a cumulative effect; for an anatomy of charisma to also
be the simple arithmetic of demographic reform.23

23As Gandhi put it, in a directly related context, there was social, economic, and moral
power inherent in the geometric growth from a base of two million spinning wheels, to a final
goal of one for each of India’s fifty million families (CW 19:557; cf. 26:213). National self-
purification on a scale “so high that we would regain that birthright of ours which we have


What makes this project seem enigmatic is the fact that Gandhi defined the
problem of violence, and the goal of nonviolence, in terms which are at once global
and intimate, imperial and personal, as well as biological and moral. In seeking to
address this problem he refused to locate the Truth of nonviolence on any level of
analysis which only rationalized and did not also embody that relationship. In other
words, a complex myth of science (which was itself incarceratory on the level of self-
knowledge while delimiting the terms of empowered freedom) made it possible for
Gandhi to escape from the confining limits of abstract hope into the patient praxis of
decolonizing bodies.

lost” was also made possible when and if “crores of people” were to wear khadi (CW 21:370-
71). Economics aside, it is important to note that in Gandhi’s view the wearing of foreign
cloth was “a kind of disease” (CW 22:45), and wearing khaddar was not just homespun
politics, it was a natural cure or tonic (CW 30:17; cf. 12:38-39), if not, by any means, a
panacea (CW 23:459). Significantly, spinning also gave “one the peace of mind [needed] for
observing brahmacharya” (CW 27:141; 30:450-52).

List of References

ARNOLD, DAVID. 1993. Colonizing the Body: State Medicine and Epidemic Disease in Nineteenth-Century India. Berkeley and Los Angeles: University of California Press.

ASHE, GEOFFREY. 1968. Gandhi. New York: Stein and Day.

BONDURANT, JOAN. [1958] 1965. Conquest of Violence: The Gandhian Philosophy of Conflict. Princeton: Princeton University Press.

BROWN, JUDITH. 1989. Gandhi: Prisoner of Hope. New Haven: Yale University Press.

BUREAU, PAUL. 1920. L'indiscipline des moeurs; étude de science sociale. Paris: Bloud and

BUREAU, PAUL. 1925. Towards Moral Bankruptcy. Translated by Mary Scharlieb. London: Constable and Company.

CAPLAN, PAT. 1987. "Celibacy as a Solution? Mahatma Gandhi and Brahmacharya." In The Cultural Construction of Sexuality. London: Tavistock Publishers.

DEVANESEN, CHANDRAN D. S. 1969. The Making of the Mahatma. New Delhi: Orient Longmans.

ELLIS, HAVELOCK. 1910. Sex in Relation to Society. Philadelphia: F. A. Davis.

ELLIS, HAVELOCK. [1938] 1946. Psychology of Sex: A Manual for Students. New York: Emerson

ERIKSON, ERIK H. 1969. Gandhi's Truth: On the Origins of Militant Nonviolence. New York: W. W. Norton.

FOX, RICHARD G. 1989. Gandhian Utopia: Experiments with Culture. Boston: Beacon

GANDHI, MOHANDAS K. 1958-80. The Collected Works of Mahatma Gandhi. 80 vols. Delhi: The Publications Division, Ministry of Information and Broadcasting, Government of India.

GANDHI, MOHANDAS K. 1955. Ashram Observances in Action. Ahmedabad: Navajivan Publishing

GANDHI, MOHANDAS K. [1927-29] 1957. An Autobiography: The Story of My Experiments With Truth. Boston: Beacon Press.

GANDHI, MOHANDAS K. [1927] 1958. Self-restraint vs Self-indulgence. Ahmedabad: Navajivan Publishing House.

GANDHI, MOHANDAS K. 1964a. The Law of Continence. Edited by Anand T. Hingorani. Bombay: Bharatiya Vidya Bhavan.

GANDHI, MOHANDAS K. 1964b. Through Self-Control. Edited by Anand T. Hingorani. Bombay: Bharatiya Vidya Bhavan.

GANDHI, MOHANDAS K. 1965. The Health Guide. Edited by Anand T. Hingorani. Bombay: Pearl

GANDHI, MOHANDAS K. [1949] 1987. Diet and Diet Reform. Edited by Bharatan Kumarappa. Ahmedabad: Navajivan Publishing House.

GANDHI, MOHANDAS K. [1913] [1948] 1992. Key to Health. Ahmedabad: Navajivan Publishing

GANGADHAR, D. A. 1984. Mahatma Gandhi's Philosophy of Brahmacharya. Delhi:

HARE, WILLIAM LOFTUS. 1926. "Generation and Regeneration." The Open Court,

KAKAR, SUDHIR. 1990. Intimate Relations: Exploring Indian Sexuality. Chicago: University of Chicago Press.

KINGSFORD, ANNA. 1881. The Perfect Way in Diet; a Treatise Advocating a Return to the Natural and Ancient Food of Our Race. London: Paul, Trench.

KINGSFORD, ANNA. 1912. Addresses and Essays on Vegetarianism. London: Watkins.

KISHWAR, M. 1985. "Gandhi on Women." Economic and Political Weekly 20:40, 41.

LORIMER, ROWLAND. 1976. "A Reconstruction of the Psychological Roots of Gandhi's Truth." Psychoanalytic Review 63:191-207.

MARRIOTT, McKIM. 1990. India Through Hindu Categories. New Delhi, Newbury Park, London: Sage Publications.

MORRIS, BRIAN. 1985. "Gandhi, Sex and Power." Freedom 46.

NANDY, ASHIS. 1980. At the Edge of Psychology. Delhi: Oxford University Press.

NANDY, ASHIS. 1983. The Intimate Enemy: Loss and Recovery of Self Under Colonialism. Delhi: Oxford University Press.

PAREKH, BHIKHU. 1989. Colonialism, Tradition and Reform: An Analysis of Gandhi's Political Discourse. New Delhi, Newbury Park, London: Sage Publications.

PAUL, S. 1989. Marriage, Free Sex and Gandhi. Delhi: Prism India Paperbacks.

PAYNE, ROBERT. 1969. The Life and Death of Mahatma Gandhi. New York: E. P.

RAMANA MURTI, V. V. 1970. Gandhi: Essential Writings. New Delhi: Gandhi Peace

The Traditional Roots of Charisma. Chicago: University of Chicago Press.

RUSSELL, BERTRAND. 1928. Marriage and Morals. New York: H. Liveright.

SALT, HENRY STEPHENS. 1886. A Plea for Vegetarianism, and Other Essays. Manchester: Vegetarian Society.

SALT, HENRY STEPHENS. 1899. The Logic of Vegetarianism; Essays and Dialogues. London: Ideal

THOREAU, HENRY DAVID. 1895. "Chastity and Sensuality." In Essays and Other Writings of Henry David Thoreau. Edited by W. H. Dircks. London: W. Scott.

VAN VLIET, C. J. 1962. Conquest of the Serpent: A Way to Solve the Sex Problem. Ahmedabad: Navajivan Publishing House.

WILLIAMS, HOWARD. 1883. The Ethics of Diet: A Catena of Authorities Deprecatory of the Practice of Flesh Eating. London: Pitman.

WOLFENSTEIN, E. V. 1967. The Revolutionary Personality. Princeton: Princeton University Press.

Source: The Journal of Asian Studies, Vol. 55, No. 2 (May, 1996), pp. 301-322: "Gandhi's Body, Gandhi's Truth: Nonviolence and the Biomoral Imperative of Public Health."

1

Cannibalism cross-culturally

In recent years … cultural anthropologists have … begun to give the topic [cannibalism] serious analytic attention. This development stems partly from the discovery of new facts and partly from the realization that cannibalism, like incest, aggression, the nuclear family, and other phenomena of universal human import, is a promising ground on which to exercise certain theoretical programs.1

Anthropological debate on the subject of cannibalism has revolved around three theoretical programs, each of which provides a distinctly different lens for viewing the details of cannibalism. Psychogenic hypotheses explain cannibalism in terms of the satisfaction of certain psychosexual needs. The materialist hypothesis presents a utilitarian, adaptive model: people adapt to hunger or protein deficiency by eating one another. The third approach follows a hermeneutical path rather than a hypothetico-deductive model in conceptualizing cannibal practice as part of the broader cultural logic of life, death, and reproduction.

In this chapter I show that cannibalism is not a unitary phenomenon but varies with respect to both cultural meaning and cultural content. Cannibalism is never just about eating but is primarily a medium for nongustatory messages: messages having to do with the maintenance, regeneration, and, in some cases, the foundation of the cultural order. In statistical terms, cannibalism can be tied to hunger, but hunger is not necessarily tied to cannibalism (see the discussion of Table 5 in this chapter). The job of analysis, I suggest, requires a synthetic approach, one that examines how material and psychogenic forces are encompassed by cultural systems. We must look, as Geertz says, at how generic potentialities (and, I would add, concerns stemming from material realities) are focused in specific performances.2

The complexity of cannibal practice cross-culturally

The discussion that follows is based on an examination of the sample of 156 societies I employed in an earlier study of female power and male dominance. This group offers scholars a representative sample of the world's known and best-described societies. The time period for the sample societies ranges from 1750 B.C. (Babylonians) to the late 1960s. These societies are distributed relatively evenly among the six major regions of the world as defined by cross-cultural anthropologists. Additionally, the societies represented vary in level of political complexity and type of subsistence technology.3

Of the 156 societies examined, 109 yielded information that I deemed sufficient to judge whether cannibalism could be classified as present or absent. One-third (34 percent) of this sample yielded information indicating the presence of cannibalism. Descriptions of cannibalism come from several types of sources: interviews with people who have observed cannibalistic practices in their own society; eyewitness accounts left by missionaries; tribal traditions; and accounts of travelers. Reports of cannibalism are unevenly distributed in various cultural areas of the world. Most come from North America and the Pacific Islands, with reports from Africa and South America being next in the order of frequency. Only two cases have been reported in the Circum-Mediterranean area and no cases have been reported for the whole of East Eurasia (see Table 1).

The descriptions of cannibalism can be classified according to three general categories: (1) ritual cannibalism is practiced, that is, human flesh is regularly consumed in ritual settings; (2) ritual cannibalism is not reported but institutionalized cannibalism is mentioned in other contexts (i.e., reports of famine, reports of past practice, legend, or hearsay); (3) ritual cannibalism is not reported, but fantasized incidents of cannibalism are feared and take the form of belief in cannibal sorcerers or witches.

A variety of themes appear in reports of cannibalism. The role of hunger is frequently mentioned, and most people believe that cannibalism may occur during times of extreme hunger and famine. However, hunger cannibalism is generally treated as revolting and reprehensible, the ultimate antisocial act, in some cases punishable by death. Tuzin provides an excellent description of this attitude in his discussion of the Arapesh response to Japanese hunger cannibalism as the ultimate unthinkable act, one that implied a deranged, anguished abandonment of humanity.4 Tuzin also mentions, however, that other groups in New Guinea treated hunger cannibalism as commonplace.5

Table 1. Geographical distribution of reports of cannibalism

[The interior cells of Table 1 are not legible in the scan. Legible row labels: Sub-Saharan Africa, East Eurasia, Insular Pacific, North America, South and Central America. Column totals: present, 37 societies (34%); absent, 72 societies (66%); all societies, 109 (100%).]

The food value of human flesh is referred to in many reports from the Pacific. It is not clear, however, whether such reports are the authors' fantasy or actual fact. Quoting from a nineteenth-century account, Sahlins notes that Fijian chiefs of the last century did not regard the human victim "in the shape of food," since cannibalism was "a custom intimately connected with the whole fabric of their society." Nevertheless these chiefs told the Europeans "that they indulged in eating (human flesh) because their country furnished nothing but pork, being destitute of beef and all other kinds of meat."6 Reports from the Pacific commonly equate human with animal flesh. The Orokaiva gave as their reason for consuming human flesh their "desire for good food." All victims acquired in an intertribal raid were consumed. Human corpses were handled as if they were animals slain in the hunt. Corpses of grown men were tied by their hands and feet to a pole and carried face downward. Slain children were slung over the warrior's shoulder in the manner of a hunter carrying a dead wallaby, with each hand of the body tied to each foot.7 Lindenbaum reports that the Fore equated pigs and humans and applied the Melanesian pidgin term for meat and small game to the human flesh consumed by women.8 Despite the reputed equation of human flesh with meat in some cases, the actual consumption in these cases has cultural connotations beyond gustatory considerations. For example, among the Orokaiva the primary reason for acquiring cannibal victims in intertribal raids was to compensate for the spirit of an Orokaiva man killed in such a raid. Fore concepts revolved around the notion that human meat, like pig flesh, helps some humans
In many reports, the events associated with cannibalism refer not to hunger but to the physical control of chaos. For example, the victim is cast as the living metaphor for animality, chaos, and the powers of darkness: all those things people feel must be tamed, destroyed, or assimilated in the interest of an orderly social life. Cannibalism is then associated with a destructive power that must be propitiated or destroyed, and the act of propitiation or destruction is directly tied to social survival. The power is variously located. It may be within animals or enemies, or may be harbored as a basic instinct in humans. When projected onto enemies, cannibalism and torture become the means by which powerful threats to social life are dissipated. To revenge the loss of one's own, the victim taken in warfare is tortured and reduced to food in the ultimate act of domination. At the same time, by consuming enemy flesh one assimilates the animus of another group's hostile power into one's own.

Other reports tie cannibalism to a basic human instinct that must be controlled for the sake of internal social survival. In these cases cannibalism provides an idiom for deranged and antisocial behavior. For example, in their most secret and supernaturally powerful ritual society, the Bella Coola performed a Cannibal Dance in which they enacted their view of human nature. The Bella Coola believed that during the performance of this ritual the cannibal dancer became possessed by an animal force that caused the dancer to want to bite people and filled him or her with an insatiable desire for human flesh.9 This force was controlled in the dancer with ropes, bathing, and a droning kind of singing.10 The close connection between the cannibal dancer and the Bella Coola gods adds a supernatural dimension to the Bella Coola perception of the cannibal instinct of humans. In staging the cannibal ritual, the Bella Coola found a way to channel powerful forces into society and to order those forces for social purposes.

Human sacrifice with its associated cannibalism was the means by which the Aztec gained access to the animating forces of the universe. For the Aztec "the flowing of blood [was] equivalent to the motion of the world." "Human sacrifice," Sahlins says, "was … a cosmological necessity in the Aztec scheme, a condition of the continuation of the world."11 The Aztec feared that when the gods became hungry their destructive powers would be unleashed against humanity. To keep the mystical forces of the universe in balance and to uphold social equilibrium, the Aztec fed their gods human flesh. By the act of consecration the sacrificial victims were incarnated as gods. Through eating the victim's flesh, men entered into communion with their gods, and divine power was imparted to men.

Exocannibalism (the cannibalism of enemies, slaves, or victims captured in warfare) characterizes the majority of cases. In the few instances of endocannibalism (the cannibalism of relatives), human flesh is a physical channel for communicating social value and procreative fertility from one generation to the next among a group of humans tied to one another by virtue of sharing certain substances with common ancestors. Endocannibalism recycles and regenerates social forces that are believed to be physically constituted in bodily substances or bones at the same time that it binds the living to the dead in perpetuity.

These sketchy descriptions illustrate the diversity in the cultural content of cannibal practice. More recent ethnographic descriptions of cannibalism reach the same conclusion. Even within the same society, cannibalism may be diversely constituted, as Poole's description of Bimin-Kuskusmin cannibalism illustrates. For Bimin-Kuskusmin,

the idea of cannibalism implicates a complex amalgam of practice and belief, history and myth, and matter-of-fact assertion or elaborate metaphor. The subject enters into crass sexual insults, ribald jokes, and revered sacred oratory. It is displayed in the plight of famine, the anguish of mourning, and the desperation of insanity. It marks aspects of the social life-cycle from the impulses of the unborn to the ravages of the ancestors. It is projected outward as a feature of the ethnic landscape and inward as an idiom of dreams, possession states, and other personal fantasy formations. In different contexts it may be seen as an inhuman, ghoulish nightmare or as a sacred, moral duty. But always it is encompassed by the order of ritual and the tenor of ambivalence. The Bimin-Kuskusmin have no single term for "cannibalism," for the ideas that are implicated are constructed for particular purposes of discourse that emphasize different dimensions of the phenomenon.


The complexity of cannibalism as a cultural practice means that to reduce it to a dichotomous variable robs it of all cultural content.13 Nevertheless I proceed with this exercise as a means for determining whether the kinds of exogenous forces posited by material and psychogenic hypotheses are statistically associated with the practice of consuming human flesh. In doing so I do not intend to suggest that culture must conform to material constraints, but rather, as Sahlins states, "that it does so according to a definite symbolic scheme which is never the only one possible."14 Thus, if hunger is a material force to be reckoned with in societies practicing cannibalism, as Table 5 suggests, I argue that we must look at the effects of hunger and ask how these effects are culturally constituted. The fact that hunger is just as likely to be present in societies that do not practice cannibalism demonstrates Sahlins's point that more than one symbolic order may constitute the effects of a given material force. Thus, hunger is encompassed by a cultural order that includes cannibal practice in some cases and by some other symbolic scheme, which may or may not include a physical referent to eating, in others.

The information presented in Tables 1-5 is based solely on reports of cannibalism falling in the category of institutionalized cannibalism. Reports of cannibalism as fantasy, as a past event, or as a periodic occurrence during times of famine are not included. The reason for limiting the cases to the purported regular consumption of human flesh derives from the stipulations on the data posed by the materialist hypothesis. Since the main causal variable posited by the materialist explanation is the ongoing satisfaction of hunger or protein deficiency, obviously the data must reflect actual as opposed to fantasized or infrequent consumption of human flesh. (In subsequent chapters, this restriction on the data will not apply and the discussion will include the fear of cannibalism, whether or not cannibalism is thought to be actually practiced. Additionally, in these chapters I will not be concerned with whether the consumption of human flesh actually takes place, because my focus will be on interpreting the rituals in which human flesh is purportedly consumed.)

The requirement that the data reflect actual instances of cannibalism brings to mind Arens's charge that since "no one has ever observed this purported cultural universal," we must be skeptical about its actual existence.15 A search of the literature convinces me that Arens overstates his case. Although he is correct in asserting that the attribution of cannibalism is sometimes a projection of moral superiority, he is incorrect in arguing that cannibalism has never existed. Contrary to his assertion that no one has ever observed cannibalism, reliable eyewitness reports do exist. In response to Arens, Sahlins excerpts some of the nineteenth-century eyewitness reports from the journals of Pacific travelers.16 Additionally, eyewitness reports presented in The Jesuit Relations contradict Arens's assertion that "[t]he collected documents of the Jesuit missionaries, often referred to as the source for Iroquois cruelty and cannibalism, do not contain an eyewitness description of the latter deed."17

One of the most compelling eyewitness reports I have encountered was penned in 1879 by a native of the Cook Islands who was among the first Polynesian missionaries. Upon learning to write from European missionaries, he kept a log of his travels and wrote many letters, some of which described the consumption of human flesh. One particularly lurid but descriptive example comes from a report of a war that broke out in New Caledonia soon after his arrival there as a missionary.

I followed and watched the battle and saw women taking part in it. They did so in order to carry off the dead. When people were killed, the men tossed the bodies back and the women fetched and carried them. They chopped the bodies up and divided them.… When the battle was over, they all returned home together, the women in front and the men behind. The womenfolk carried the flesh on their backs; the coconut-leaf baskets were full up and the blood oozed over their backs and trickled down their legs. It was a horrible sight to behold. When they reached their homes the earth ovens were lit at each house and they ate the slain. Great was their delight, for they were eating well that day. This was the nature of the food. The fat was yellow and the flesh was dark. It was difficult to separate the flesh from the fat. It was rather like the flesh of sheep.

I looked particularly at our household's share; the flesh was dark like sea-cucumber, the fat was yellow like beef fat, and it smelt like cooked birds, like pigeon or chicken. The share of the chief was the right hand and the right foot. Part of the chief's portion was brought for me, as for the priest, but I returned it. The people were unable to eat it all; the legs and the arms only were consumed, the body itself was left. That was the way of cannibalism in New Caledonia.


More recent eyewitness evidence is reported by Poole, who witnessed acts of Bimin-Kuskusmin mortuary cannibalism, and by Tuzin, who describes eyewitness evidence given him by Arapesh

The fact that Arens overstates his case should not be taken to mean that the thirty-seven cases of cannibalism reported in Table 1 represent undisputed examples of actual cannibalism. The ethnographies upon which I relied are the best available for use in cross-cultural research based on a standard sample. The data on cannibalism, however, are uneven, ranging from lengthy descriptions of ritual cannibalism reconstructed from informants' recollections of the past to a few sentences describing the consumption of the hearts of enemies. Keeping in mind the problematic nature of the data, the reader is cautioned to look for suggestive trends in the tables rather than irrefutable demonstrations of relationships.

Sagan's psychogenic hypotheses

I begin by considering the hypotheses in Sagan's study of cannibalism that can be examined within a cross-cultural framework. These are not the only dimensions to Sagan's argument. For example, he builds a good case for the role of emotional ambivalence in cannibal practice, an argument I shall return to in Chapter 2, where I suggest that, although Sagan's contribution is important and useful, it is limited by his particular reading of Freud.

Sagan contends that cannibalism "is the elementary form of institutionalized aggression."20 Employing the Freudian frustration-aggression hypothesis and the idea that oral incorporation is the elementary psychological response to anger and frustration, Sagan hypothesizes that cannibalism is characteristic of a primitive stage of social development. "The undeveloped imagination of the cannibal," he says, will deal with frustration through oral aggression, because the cannibal "is compelled to take the urge for oral incorporation literally. He eats the person who, by dying, has abandoned him."21 Or, he eats the enemy whose very existence may deny him strength in order to incorporate that strength into his own body. When it occurs in more advanced social systems, Sagan suggests that cannibalism is a regressive response to social disintegration, for in these cases, he says, "it is inevitable that the satisfaction of aggressive needs sinks to a more primitive level." This happened in Nazi Germany, "a society in a state of psychotic breakdown." The civilizing forces broke down under the strain Germany experienced before the Nazis took power. Although not true cannibalism, Sagan says, the destruction of millions of people, the lamp shades of human skin, and similar practices concentrated on the body exemplify the reversion to primitive aggression.


Citing the work of the Whitings, Sagan hypothesizes that extended nursing, a long period of sleeping with the mother, and father absence yield children who are overly dependent on their mothers and hence more prone to frustration and oral aggression. The adult male who carries this unconscious dependence upon infantile and childhood supports and who is also expected to be masculine and brave will need to display his masculinity and his independence of feminine support: "He will eat people, he will kill people, he will make war, he will enslave others, and he will dominate and degrade women."23

Sagan's discussion suggests that as the elementary form of institutionalized aggression, cannibalism will occur among the simpler societies, in advanced societies faced with a disintegrating social identity, and in societies in which infant dependence upon the mother is prolonged. We can frame these suggestions in terms of several variables and correlate them with reports of the presence or absence of cannibalism, admitting, however, that this exercise does not do justice to Sagan's more complex ideas.

The first variable measures the level of political complexity. Twenty-five of the thirty-seven societies with reported cannibalism are politically homogeneous, meaning that the highest level of jural authority is the local community. Thus, cannibalism is more likely to be present in politically homogeneous than heterogeneous societies (see Table 2). However, this information does not support Sagan's hypothesis that cannibalism is a primitive form of aggression because of the fact that more than half (56 percent) of the simpler societies do not practice cannibalism. The most that can be said from the information presented in Table 2 is that cannibalism is more likely to be found in the simpler societies.

Table 2. Relationship between level of political sovereignty and cannibalism

                                        Present      Absent       Row totals
Levels of political sovereignty         No.   %      No.   %      No.    %
Nothing above local community            25   44      32   56      57    100
One jural level above community           4   23      13   77      17    100
Two jural levels above community          4   44       5   56       9    100
Three or more jural levels
  above community                         4   15      22   85      26    100
Column totals                            37  (34%)    72  (66%)   109   (100%)

From Sagan's discussion of maternal dependency and oral aggression, it is reasonable to assume that cannibalism is associated with such factors as a lengthy postpartum taboo against sexual intercourse and male aggression, including aggression against women. However, these variables are not associated with the cross-cultural incidence of cannibalism in simple societies. There is no statistically significant relationship between the length of the postpartum sex taboo, the variable usually employed as an indicator of maternal dependency, and the occurrence of cannibalism in politically homogeneous societies. Neither is there any relationship between the number of indicators of male aggression and the incidence of cannibalism in these societies (see Tables 3 and 4).

However, in politically heterogeneous societies (with at least one jural level above the local community), a significant association between the length of the postpartum sex taboo and cannibalism emerges. In Sagan's terms, this means that maternal dependency is related to oral aggression (as measured by the presence of cannibalism) in more complex societies. It is also true that in more complex societies there is a significant relationship between male aggression against women and cannibalism (see Tables 3 and 4).

Cannibalism cross-culturally

Table 3. Relationship between length of postpartum sex taboo and cannibalism in politically homogeneous and heterogeneous societies

                                          Present      Absent       Row
Length of postpartum sex taboo            No.   %      No.   %      totals

Politically homogeneous societies
Up to 6 months                             12   63      16   67      28
From 6 months to more than 2 years         7   37       8   33      15
Column totals                              19  100      24  100      43

Politically heterogeneous societies
Up to 6 months                              3   30      22   73      25
From 6 months to more than 2 years          7   70       8   27      15
Column totals                              10  100      30  100      40

Note: For politically homogeneous societies phi = .04, not significant. For politically heterogeneous societies phi = .39, p = .007. No information for twenty-six societies.
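The phi values reported in the notes to Tables 3 and 4 are the standard measure of association for a two-by-two table, and they can be checked by hand. The sketch below (Python; the cell counts are partly inferred from the legible marginals of the scan, so treat the exact numbers as an assumption rather than the author's own worksheet) recovers the magnitude .39 reported for politically heterogeneous societies in Table 3.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table laid out as
    [[a, b],
     [c, d]]:
    phi = (a*d - b*c) / sqrt((a+b) * (c+d) * (a+c) * (b+d))."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Politically heterogeneous societies in Table 3 (counts partly reconstructed):
# rows are taboo length (up to 6 months / longer),
# columns are cannibalism (present / absent).
phi = phi_coefficient(3, 22, 7, 8)
print(round(abs(phi), 2))  # 0.39, the magnitude reported in the note
```

Applied to the homogeneous cells (12, 16, 7, 8), the same function yields a magnitude of roughly .04, matching the "not significant" result in the note.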

Elsewhere I have shown that male aggression against women is significantly associated with food stress. I argue that male aggression is a reaction to stress as males seek to dominate controlling material forces by dominating the bodies of women and female reproductive functions. However, I qualify this conclusion by showing that male aggression against women is more likely to be a solution to stress in societies displaying a symbolic orientation to the male creative principle. Thus, adaptation to stress does not always include the subjugation of women, and I argue for the necessity of examining cultural factors that may shape a people's reaction to stress.24 The same comments apply to the results displayed in Tables 3 and 4. Although male aggression and maternal dependency are related to the presence of cannibalism in politically heterogeneous societies, it is clear from these tables that both of these variables may occur in the absence of cannibalism, suggesting that we must look beyond the behaviors measured by these variables in order to comprehend the incidence of cannibalism.

A similar argument is called for when examining Table 5, which



Table 4. Relationship between male aggression and cannibalism in
politically homogeneous and heterogeneous societies

                                  Present       Absent        Row
Male aggression scale*            No.   %       No.   %       total

Politically homogeneous societies
0-3 indicators of
  male aggression                   7   32      11    42       18
4 or 5 indicators of
  male aggression                  15   68      15    58       30
Column totals                      22  100      26   100       48

Politically heterogeneous societies
0-3 indicators of
  male aggression                   2   25      15    68       17
4 or 5 indicators of
  male aggression                   6   75       7    32       13
Column totals                       8  100      22   100       30

Note: For politically homogeneous societies phi = .11, not significant. For
politically heterogeneous societies phi = .39, p = .02. No information for
thirty-one societies.
*A Guttman scale formed by five indicators: (1) men’s houses, (2) machismo,
(3) interpersonal violence, (4) rape, (5) raiding other groups for wives. See
Sanday (1981, Appendix F) for details.

indicates a significant relationship between cannibalism and food
stress. Most (29, or 91 percent) of the societies for which there are
reports of cannibalism experience occasional hunger or famine or
protein deficiency. Although hunger is intimately associated with
the practice of cannibalism, we cannot conclude that hunger con­
stitutes cannibal practice. As Table 5 demonstrates, many societies
(43, or 60 percent) that experience food stress show no evidence
of cannibalism; thus, here again, we must look to culture to under­
stand the constitution of cannibal practice.
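The Guttman scale behind the male aggression measure assumes cumulative items: a society exhibiting a "harder" indicator should also exhibit all "easier" ones. A minimal sketch of scoring and checking that cumulative pattern (the item ordering here and the sample society are hypothetical illustrations, not Sanday's actual codings):

```python
# Indicators from the Guttman scale described in the Table 4 note,
# listed here in an assumed easiest-to-hardest order.
ITEMS = ["men's houses", "machismo", "interpersonal violence",
         "rape", "raiding other groups for wives"]

def scale_score(present):
    """Scale score = number of indicators present (0 through 5)."""
    return sum(1 for item in ITEMS if item in present)

def is_scalable(present):
    """True if the response pattern is cumulative: once an item is
    absent, no later (harder) item may be present."""
    flags = [item in present for item in ITEMS]
    return all(not later
               for i, f in enumerate(flags) if not f
               for later in flags[i + 1:])

# Hypothetical society coded for the first three indicators only.
society = {"men's houses", "machismo", "interpersonal violence"}
print(scale_score(society), is_scalable(society))
```

A score of 0-3 versus 4 or 5 reproduces the dichotomy used in the cross-tabulation.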

The data are inconclusive with respect to Sagan’s psychogenic
hypotheses. Sagan’s claims are reductionist and, like the materialist
approach, ignore the symbols mediating the experience of oral
frustration and the act of oral aggression in cannibalism. Sagan’s
stress on cannibalism and male aggression as a reaction to oral
frustration (as measured by maternal dependency and food stress)


Table 5. Relationship between food stress and cannibalism in
politically homogeneous and heterogeneous societies

                                  Present       Absent        Row
Food stress                       No.   %       No.   %       total

Politically homogeneous societies
Food is constant                    2    9       9    31       11
Occasional hunger or
  famine or protein
  deficiency                       19   91      20    69       39
Column totals                      21  100      29   100       50

Politically heterogeneous societies
Food is constant                    1    9      16    41       17
Occasional hunger or
  famine or protein
  deficiency                       10   91      23    59       33
Column totals                      11  100      39   100       50

Note: For politically homogeneous societies phi = .26, p = .03. For politically
heterogeneous societies phi = .28, p = .02. No information for nine societies.

is relevant, as the results shown in Tables 3-5 illustrate. However,
I argue that we must examine the underlying ontological struc­
tures that render maternal dependency, food stress, and associated
acts of male aggression relevant to the practice of cannibalism in
some cases and not in others since, as Tables 3-5 also indicate,
these factors are just as likely to be present in the absence of can­
nibalism. In Chapter 2, I present the analytic framework that
incorporates these considerations. In the remaining part of this
chapter I examine the materialist hypothesis, Sahlins’s culturalist
response, and several other approaches that are useful for compre­
hending the social and cultural context of cannibal practice.

The materialist approach of Michael Harner and Marvin Harris

The materialist hypotheses proposed by Harris and Harner to ex­
plain the scale of Aztec human sacrifice focus on hunger and pro­
tein deficiency. Harner claims that ecological and demographic


facts explain the scale of Aztec human sacrifice. In the Aztec case Harner sees
an extreme development, under conditions of environmental circum­
scription, very high population pressure, and an emphasis on maize ag­
riculture, of a cultural pattern that grew out of a Circum-Caribbean and
Mesoamerican ecological area characterized by substantial wild game
degradation and the lack of a domesticated herbivore…. Intensification
of horticultural practices was possible and occurred widely; but for the
necessary satisfaction of essential protein requirements, cannibalism was
the only possible solution…. From the perspective of cultural ecology
and population pressure theory, it is possible to understand and respect
the Aztec emphasis on human sacrifice as the natural and rational re­
sponse to the material conditions of their existence.25

Citing an unpublished estimate by a leading authority on the
demography of Central Mexico around the time of the Conquest,
Harner says that 1 percent of the total population, or 250,000, were
sacrificed per year in Central Mexico in the fifteenth century. As
to what was done with the bodies, Harner relies on accounts writ­
ten by conquistadores such as Bernal Diaz and Cortes and on the
post-Conquest description penned by Sahagun.26

Some reports refer to eating human flesh in a nonsacrificial con­
text. Cortes writes that one of his men leading a punitive expedi­
tion came across “loads of maize and roasted children which they
[Aztec soldiers] had brought as provisions and which they left be­
hind them when they discovered the Spaniards coming.”27 Simi­
larly, Sahagun mentions that Aztec merchants discovered traveling
in enemy territory were killed and served “up with chili sauce.”
According to Duran, the flesh of the captive eaten after sacrifice
was not part of the rite itself but “was considered [to be] ‘leftovers’
and was returned to the captor as a reward for having fed the
deity.”28

Such rewards were important because captors were recruited
from the ranks of commoners who rarely ate meat or poultry.
They got their protein from a “floating substance” on the surface
of lakes, from amaranth, and from the regular diet of maize and
beans. Famines were common and every year people faced the
threat of shortage. A prolonged famine in 1450, for example,
forced the rulers of the Three-City League to distribute the surplus
grain that had been stored for ten years. 29

The scarcity of fats caused another dietary problem. Although
it is not known what amount of fatty acids is required by the
human body, fats were scarce in the Aztec diet.

In contrast to the commoners, the nobility and the merchant
class fed on a rich diet of protein in the form of wild game. Human
flesh, too, was reserved for “illustrious and noble people.” Thus,
during good times human flesh may not have been nutritionally
essential for the nobility. Harner suggests, however, that the con­
sumption of human flesh probably fluctuated and made its greatest
contribution to the diet when protein resources were at their low­
est ebb. The privilege of eating human flesh provided good insur­
ance against hunger during times of famine, when the nobility as
well as the commoners could suffer significantly. 31

Commoners could partake of human flesh and wild game by
taking captives single-handedly in battle. Upon capturing a total
of three war prisoners, commoners received the gustatory privi­
leges of the nobility and were raised to the position of “master of
the youths.” They also became eligible to host a cannibal feast for
their blood relatives and dine at Moctezuma’s palace on imported
wild game. These were the rewards in an economy of scarce meat.
By rewarding successful warriors in this manner, the Aztec rulers
motivated the poor to participate in offensive military operations.
They pumped up an aggressive war machine with the promise of
meat. “[U]nderlying the competitive success of that machine,”
Harner says, “were the ecological extremities of the Valley of Mexico.”32

Marvin Harris describes preconquest political necessities in the

Valley of Mexico along with several other examples to demon­
strate a more general relationship in human society “between ma­
terial and spiritual well-being and the cost/benefits … for increas­
ing production and controlling population growth.”33 In the case
of the Aztecs, their material well-being was threatened by occa­
sional periods of famine caused by depletion of the Mesoamer­
ican ecosystem after centuries of intensification and population
growth. Their spiritual well-being depended on sacrifice and can­
nibalism. The severe depletion of animal protein resources in the
Valley of Mexico, he claims,


made it uniquely difficult for the Aztec ruling class to prohibit the con­
sumption of human flesh and to refrain from using it as a reward for

loyalty and bravery on the battlefield. It was of greater advantage to the
ruling class to sacrifice, redistribute, and eat prisoners of war than to use
them as serfs or slaves. Cannibalism therefore remained for the Aztecs an
irresistible sacrament, and their state-sponsored ecclesiastical system
tipped over to favor an increase rather than a decrease in the ritual butch­
ering of captives and the redistribution of human flesh.34

Sahlins’s culturalist rejoinder to Harris and Harner

Sahlins sees the “Western business mentality” at the heart of Har­
ris’s view of Aztec cannibalism. In Harris’s utilitarian view, every­
thing in the social superstructure is governed by its economic func­
tion so that the meanings other people give to their lives are
nothing more than the material rationalizations we give to our
own. “Once we characterize meaningful human practices in these
ideological terms,” Sahlins says, “we shall have to give up all an­
thropology, because in the translation everything cultural has been
allowed to escape.”35

The cultural content Harris ignores is the stupendous system of
Aztec sacrifice. Sahlins approaches this content head on: He does
not attempt to dodge its complexities. Staying close to his subject
matter, he illuminates the logic of sacrifice and shows how canni­
balism fits within this logic. Aztec cannibalism can only be under­
stood within the broader system of Aztec sacrifice, for by itself
cannibalism did not exist for the Aztec. It is true that human flesh
was consumed, but neither was it ordinary human flesh nor was it
eaten in an ordinary meal. Cannibalism as a cultural category
among the Aztec was invented by anthropologists. For the Aztec,
the consumption of human flesh was part of a sacrament bringing
humans into communion with the gods. The Aztec focused not
on the consumption of flesh but on the sacred character of the
event.36

Sahlins points out that the logic of Aztec sacrifice is not unique.
It is found in many other societies and conforms to Hubert’s and
Mauss’s classic explanation of the nature and function of human
sacrifice. Aztec sacrifice brought the sacrificer, “sacrifier,” and the
victim into union with the divine. The consumption of the con­
secrated victim transmitted divine power to man. Underlying this
transmission was the notion of regeneration and reproduction.


The gods were renewed through the offering, and the sacrifier (the
one who has provided the victim but not necessarily the one who
sacrifices it) gained divine power by giving up his claim to the
victim. The entire process began with mutual adoption between
Aztec victim and sacrifier. When the warrior took a prisoner, he
declared: “He is as my beloved son.” The captive replied: “He is as
my beloved father.”37 Thus, the victim offered up by the Aztec
sacrifier was his own child.

The reproductive imagery is manifest in the parallelism drawn
between the mother and the warrior. The warrior’s job was to
nourish the Sun with the blood of adopted captives borne by the
warrior to the sacrificial altar. The mother in childbirth was lik­
ened to the warrior engaged in battle. If she died, she shared the
warriors’ fate and went to the House of the Sun. When the mother
bore a child, the midwife shouted war cries, “which meant that
the woman … had taken a captive.”38 Thus, male and female alike
contributed to the physical reproduction of the Aztec universe.

Giving their children to the gods was a cosmological necessity:
It was a condition for the continuation of the world. Without
proper nourishment the gods could not work on behalf of hu­
mans. The gods depended on sacrifice for energy. Without it the
Sun would not come up, the sky would fall down, and the universe
would return to its original state of chaos. The gods depended on
humans and humans depended on the gods. The steepness of the
Aztec pyramid steps paralleled the course of the sun from dark to
light and back to dark. As the victim climbed the steps, he or she
was the Sun climbing to its midday zenith. Rolled down the west­
ern steps of the temple, the victim, like the sun, was going to his
or her grave. The sustenance given to the gods in the offering and
to humans in their houses ensured the regeneration of everyone.39

Sacrifice was also a sociocultural necessity. It was so implicated
in the particulars of social relations, politics, and economics, that
without sacrifice, the web of human social interactions would
come apart. Fundamentally, “Aztec culture was reproduced by hu­
man sacrifice.” Just as the main relations of the Aztec universe were

renewed by the blood of captives, so were the relations on the
social plane, for in the sacrificial act the logic of both was repre­
sented. Men were like the gods whose original self-destruction set
the sun in motion. According to the principle of sacrifice, the flow­
ing of blood was equivalent to the motion of the world. Without
it all would come to an end.40

Enemies could not be subjugated or exterminated because they
supplied the lifeblood of the state. Sahlins agrees that the structure
of the empire was conditioned by the system of human sacrifice.
But his explanation goes beyond material considerations or cost­
benefit analysis. He notes that the high Aztec god Tezcatlipoca has
as another name, “Enemy.” The figure of this god embodied the
power of the enemy. Supernatural power was often conceived as
being external to society: “What is beyond society, escaping its
order, is precisely what is greater than it.” The ritual value of ene­
mies lay in the greater spiritual power they brought to society. To
have annexed and subjugated enemy territory would have meant
destroying the lifeblood of the state. The greater supernatural
power of the enemy helps to explain the initial ease of the conquest
and why the subsequent hostilities were so bloodthirsty. The
Spanish were conceived as different, more powerful enemies, and
hence more powerful gods. The Spanish were unaware of their
own worth as victims.41

The physical production and reproduction of cosmological and social

Sahlins’s analysis of Aztec cannibalism is at once a critique of the
idea that human cultures are formulated out of practical activity
and utilitarian interests and an example of another approach to the
study of culture. Harner and Harris believe that culture is precip­
itated from the rational activity of individuals pursuing their own
best interest. The assumption underlying such utilitarianism is that
humans seek to maximize benefits relative to costs. Sahlins’s rea­
soning instead focuses on the symbolic and the meaningful. The
distinctive quality of man is “not that he must live in a material
world … but that he does so according to a meaningful scheme
of his own devising.” The decisive quality of culture is not that it
“must conform to material constraints” but that it constitutes these
constraints in a meaningful symbolic order:

[N]ature is to culture as the constituted is to the constituting. Culture is
not merely nature expressed in another form. Rather the reverse: the
action of nature unfolds in the terms of culture; that is, in a form no
longer its own but embodied as meaning. Nor is this a mere translation.


The natural fact assumes a new mode of existence as a symbolized fact,

its cultural deployment and consequence now governed by the relation
between its meaningful dimension and other such meanings, rather than
the relation between its natural dimension and other such facts.


A striking feature of Sahlins’s analysis of Aztec cannibalism is
his illumination of the role of the sacrificial complex in the social
and cosmological reproduction of the Aztec universe. Men and
women contributed to the physical reproduction of the cosmos in
a variety of ways: They (along with children) contributed their
lifeblood to nourish hungry gods; men conveyed the victim to the
sacrificial stone; and women bore new victims in childbirth. The
relations of the social order were sustained and regenerated
through the idiom of sacrifice and cannibalism. For example,
noble titles were conferred on those who contributed sacrificial
victims, humans became gods through the sacrificial rites, and the
states supplying victims were politically separated from those
counted as allies.

More than an idiom for regenerating order and structure, the
sacrificial complex was also deeply implicated in the founding of
Aztec society (see Chapter 8). The dialectic between submission
in sacrifice and dominance in the gruesome rites that followed rit­
ually marked the development of the Aztec state from its begin­
ning, when the migrating hunters who were the ancestors of the

Aztec first settled in the Valley of Mexico. When the Aztec nobility
felt defeated, as they did during the famine of 1450, they admitted
their submission by increasing the scale of sacrifice and asserted
their dominance in arrogantly pretentious cannibal feasting. In
myth and history, the Aztec social and political order was consti­
tuted in terms of struggle. Sacrifice and cannibalism, I suggest,
were the primordial metaphors symbolizing dominance and sub­
mission.

The chartering of a social order and its reproduction are an im­
portant part of Sahlins’s analysis of Fijian cannibalism as well (see
Chapter 7). Sahlins presents a myth of the origin of cannibalism
that has to do with the origin of culture. Like Aztec cannibalism,
Fijian cannibalism is part of the mythical charter for society. In
practice, Fijian cannibalism could not be separated from the or­
dered circulation of the principal sources of social reproduction,
which established and perpetuated the developed Fijian chiefdom.
The chiefdom was organized “by an elaborate cycle of exchange



of raw women for cooked men between a basic trio of social cum
cosmic categories: foreign warriors, immigrant chiefs, and indig­
enous members of the land.” Wives and cooked men are both
reproductive. The wives are directly “life-giving”; the cannibal
victims are life-giving in that their bodies provide a tangible chan­
nel for the exchange of mana between men and gods.44 The system

of exchange culminating in sacrifice and cannibalism constituted
“an organization of all of nature as well as all society, and of pro­
duction as well as polity.”45 Sahlins concludes that “the historical
practice of cannibalism can alternately serve as the concrete refer­
ent of a mythical theory or its behavioral metaphor.”46 Thus, Fijian
cannibalism, like that of the Aztec, is part of the foundation of the
social order. Fijian cannibalism also served as a tangible symbol of
dominance. The Fijian chief who offered victims to his people le­
gitimated his chiefly dominance. In the gruesome rites that fol­
lowed the chiefly offering, his male and female subjects gave vent
to more lurid displays of dominance.
Although Annette Weiner does not address the issue of canni­
balism, her analysis of reproduction is relevant to this discussion
because of her emphasis on the specific resources that “objectify
the general societal process of reproduction, documenting and le­
gitimizing the fundamental condition whereby ego and ‘others’
are tied together.”47 By reproduction, Weiner means “the way so­
cieties come to terms with the processes whereby individuals give
social identities and things of value to others and the way in which
these identities and values come to be replaced by other individuals
and regenerated through generations.”48 The specific resources
that mark relations across the generations must be material objects
with some physical property of durability. Possibilities mentioned
by Weiner are substances or objects taken from the corpse itself49
or material objects used in formal exchange events. Weiner’s
comparison of the Bimin-Kuskusmin use of bones as the concrete
referent in acts of social reproduction with the Trobriand employ­
ment of bundles of banana leaves raises some interesting hypoth­
eses regarding the social concomitants of cannibalism.
The fundamental problematic posed by social reproduction,
Weiner says, is “[H]ow can one draw on the resources and sub­
stances of others while maintaining and regenerating one’s own
resources and substances” without becoming “other”?50 The
Bimin-Kuskusmin essentially cut off relations with the other after


the reproductive potential of the other has been employed to beget
children. For them the other always remains essentially suspect,
and the substances of the other (namely, affines) are rigidly sepa­
rated from the substance of the lineage. 51

Poole’s analysis of Bimin-Kuskusmin models of procreation,
death, and personhood supports Weiner’s discussion of reproduc­
tion and regeneration. Through acts of mortuary cannibalism, the
procreative powers of the dead are recycled within the Bimin­
Kuskusmin lineage and clan, whereas the spirit of the newly dead,
provided that it meets the test for proper ancestorhood, takes its
place among the clan ancestral spirits that are responsible for nur­
turing the manifestation of the clan spirit in the bodies of future
generations. 52 When a man or woman becomes an ancestor, Poole
says, “the mortal individual is substantially dissolved in most re­
spects, and the wider social bonds founded on eroding substance
are significantly sundered.” 53 The person who becomes an ances­
tor leaves a legacy in the form of children, departed ancestral
spirit (called finiik), bone, bone marrow, and procreative power.
This legacy “constitutes the substantial core of the cycle of birth,
death, and rebirth, and this cycle turns inward on the clan as the social
category that is forever reconstituted in the Bimin-Kuskusmin ideology of
societal regeneration” (emphasis mine).54 Thus, the clan stands alone
in the symbolism of death and rebirth – it is the clan that is per­
petuated. The symbol of the continuity and perpetuity of the clan
“is cast in the substantial symbols of bone in and on living persons,
in shrines, cult houses, and ossuaries, and in ritual performance,”
including ritual anthropophagy.55 This inward-turning character
of Bimin-Kuskusmin acts of social reproduction can be compared
– providing that the quite different level of political complexity is
taken into account – with the Aztec state, which, as noted earlier,
adopted a policy of nonexpansion. The inward-turning nature of
the Aztec state was the means by which its hegemony was main­
tained.

The Trobriand solution does not display the exclusive inward
orientation of the Bimin-Kuskusmin. Labor and production of
yams and women’s wealth are directed within the lineage, but re­
lationships are not cut off with others, such as affines, fathers, and
spouses. Bundles of banana leaves objectify the reproductive sig­
nificance of women at the same time that they give economic val­
idation to relations between individuals of different lineages. Thus,



“bundles provide for the linking of networks of relationships that
last for three or more generations.”56 Trobriand bones, like Bimin­
Kuskusmin bones, remain within the ritual contexts of ancestors.
As significant objects, however, bones “never enter the economic
or political domain, for bones do not validate relations external to
the ancestral domain.”57

This difference between turning inward as opposed to connect­
ing ties with affines in mortuary ceremonies is one of the social
concomitants Strathern relates to the presence of cannibalism in
New Guinea Highlands and Fringe Highlands societies. Focusing
on marriage exchange and prominence of pig herds, Strathern
notes that, where cannibalism is present, two factors are also pres­
ent: (1) “the idea of ‘turning back’ or of repeating marriage” is
accepted just as “the idea of ‘turning back’ to eat one’s own kind
is not regarded as wrong”; (2) “herds of domestic pigs, which
could be used as substitutes for the exchange and consumption of
persons, are less prominent.”58

Most Bimin-Kuskusmin marriages are intratribal. Marriages
between tribes are usually marriages with women from enemy
groups. The fear and antagonism between groups is accentuated
because no attempt is made to regenerate these relationships
through time, as Weiner notes in the Trobriand case. 59

Bimin-Kuskusmin cannibalism and endogamous structure can
be contrasted with the marriage system of the Melpa of the West­
ern Highlands. The Melpa abhor cannibalism, relegating it to the
secret practices of evil witches. The Melpa have elaborate rules
against marrying kin, and against repeating marriages between
small groups. These prohibitions occur in conjunction with an
obvious stress “on proliferating exchange ties, on facing outwards
to an expanding network, and on a continuous substitution of
wealth items, pork and shell valuables (or nowadays cash), for the
person. In this context, cannibalism stands for an unacceptable
‘turning back’, and is thus symbolically equated with incest.”60

Bimin-Kuskusmin pig herds are tiny by highland standards, ac­
cording to Poole, and there certainly is not the elaborate network
of exchange documented for the Melpa. 61 Nor do pigs figure
prominently in Bimin-Kuskusmin mortuary rituals, as they do
among the Melpa. Melpa mortuary rites transfer the spirit of the
corpse into the world of the ghosts by means of a pig sacrifice
designed to ensure the goodwill of the new ghost and the com­

Cannibalism cross-culturally

munity of ghosts. Eating the pig flesh coincides with the release
of the deceased’s soul. The pigs, Strathern concludes, are substi­
tutes for the person’s body: “[T]he pork is eaten instead of the de­
ceased.” The funerary pig sacrifices are presented to the ghosts “in
order to persuade them to accept a new ghost, and to the de­
ceased’s maternal kin, in substitution for the flesh which will rot
and return to the earth,” where it fertilizes and thereby regenerates
the soil of the clan territory.62 The bones of the corpse are kept by
the paternal kin and placed in special houses. Thus, through the
medium of pig flesh, the deceased’s spirit is replaced; what the
Bimin-Kuskusmin accomplish through mortuary cannibalism the
Melpa accomplish through pig sacrifices.

Strathern notes that the practice of cannibalism in the New
Guinea Highlands is associated with sparsely populated fringe re­
gions where large herds of domestic pigs are absent. However, he
cautions against jumping to the conclusion that protein-hunger is
causally related to the practice of cannibalism, because where pigs
are absent, alternative sources of protein are available in wild
game, including feral pigs. Furthermore, the Hua, the Gimi, and
the Fore are reported to have practiced cannibalism and all of these
groups keep herds of domestic pigs. However, in areas where ag­
ricultural intensification has proceeded to its greatest lengths, can­
nibalism is absent. 63


As the most recent ethnographic studies of cannibalism confirm,
cannibalism is not a unitary phenomenon but varies both in mean­
ing and cultural content. The cross-cultural data point to at least
six patterns in the practice of cannibalism:

1. Famine cannibalism is frequently mentioned.
2. Cannibalism may be motivated by competition between
groups and the desire to avenge the death of someone lost
in war.

3. Mortuary cannibalism is part of the physical regeneration
of fertile substances required to reproduce future genera­
tions and maintain ties with the ancestors.

4. Cannibalism is a behavioral referent of a mythical charter
for society and, with other social and cosmological cate­
gories, is a condition for the maintenance and reproduc­
tion of the social order.

5. Cannibalism is a symbol of evil in the socialization of per­
sons.

6. Cannibalism is part of the cultural construction of person­
hood.
As Poole’s ethnography of Bimin-Kuskusmin cannibalism shows,
several patterns may characterize the expression of cannibalism in
one society; or, only one of the patterns described above may be
represented.

The explanations of cannibalism are also diverse. The data pre­
sented in this chapter are inconclusive with respect to the claims
of psychogenic and materialist hypotheses. However, I do not dis­
count the role of psychogenic and materialist forces and in the
following chapters I examine the interrelationship between mate­
rial forces and the psychological states predicated by rituals of can­
nibalism. The relationship between food stress and cannibalism
leads me to suggest that, like male control of female bodies, can­
nibalism is part of a hegemonic strategy developed in reaction to
a perception of controlling natural or political forces in some cases.
This strategy, however, cannot be separated from the system of
symbols that predicates a people’s understanding of their being-
in-the-world and formulates their strategies vis-a-vis social regen­
eration, reproduction, and dominance. More than just a reaction
to external conditions, cannibalism is a tangible symbol that is part
of a system of symbols and ritual acts that predicate consciousness
in the formulation of the social other and reproduce consciousness
in the ritual domination and control of the social other. Where
domination and control are subordinate to accommodation and
integration, however, cannibalism is absent regardless of the na­
ture of the food supply.


2. Analytic framework

[I]nsist as we may upon the distinct character of cultural action,
we are invariably forced to the conclusion that the cultural, too,
is merely a part of nature. Whatever we do, we do as warm-
blooded, mammalian animals, exemplifying natural effect in all
of our actions. In our own terms, then, culture is nature har­
nessing nature, understanding nature, and coming to know it­
self.1

This chapter introduces the conceptual framework that guides the
analysis and presentation of data in the following chapters. Fifteen
case studies provide the material for the more detailed discussion.
These case studies are identified in Table 6 by the name and loca­
tion of the society and by whether cannibalism is social (that is,
under the control of group decisions) or is antisocial (that is, out­
side group control). Table 6 also identifies the type of data on
which my analysis is based, the type of cannibalism practiced, and
the chapters that treat the case studies in more detail.

The fifteen cases were chosen because of the reliability and de­
tailed nature of the data. Some are drawn from the fieldwork con­
ducted by ethnographers in the past decade. In one case, that of
the Bimin-Kuskusmin, instances of mortuary cannibalism were
observed by the ethnographer. In three cases (the Hua, Gimi, and
Goodenough Islanders), the ethnographer provides detailed infor­
mation on the practice of cannibalism, as reported by informants,
before it was prohibited by government and missionary officials.

Three additional examples of ritual cannibalism are drawn from
descriptions reconstructed by anthropologists from the aCCOunts
of missionaries and travelers. Although these descriptions are de­
railed and vivid, the data are not specific on such topics as the
theory of procreation and conception, which is essential for under­








In the lower montane forests of the Eastern Highlands of Papua New
Guinea, a population of some 14,000 slash-and-burn horticulturalists
known as the Fore (pronounced FOR-AY) tend gardens of sweet potato,
taro, yam, corn, and other vegetables. They also grow sugarcane and
bananas, keep pigs, and, in the sparsely populated regions near their southern
boundaries, still hunt for birds, mammals, reptiles, and cassowaries.
Unlike the open country to the north around Kainantu or Goroka, where
long-established grasslands prevail, this part of the Eastern Highlands consists
of mixed rainforest broken by small clearings and grasslands of no great
age. The forest includes oak, beech, Ficus, bamboo, nut-bearing Castanopsis,
feathery Albitzia, red-flowered hibiscus, and many other species used
for food, medicines, and stimulants, as well as salt, fibers, and building
materials.1 Pandanus grows at higher altitudes. The ground is covered with
a wealth of edible shrubs, delicate tree ferns, fungi, and creepers. Red, white,
and salmon-colored impatiens sparkle in the shafts of sunlight beside forest
paths, and ferns, orchids, and rhododendron grow as epiphytes in the canopy
overhead. The forest rings with the sound of birds feeding on tall fruit trees.

The Fore-speaking population lies in the wedge created by the Kratke
Mountains to the north, and the Lamari and Yani Rivers to the east and
west. Although the terrain ranges in altitude from the mountains at 9,000
feet to southern valleys at 2,000, gardens and hamlets are scattered across
the zone between 7,500 and 3,500 feet, where the population has access to


Traditional Fore hamlet near the edge of the forest. Photo and legend by
Dr. E. R. Sorenson from The Edge of the Forest, Smithsonian Institution
Press, 1976.




[Map: Kuru Region — The Fore and Their Neighbors. SOURCE: Adapted from
E. Richard Sorenson, The Edge of the Forest (Washington, D.C.: Smithsonian
Inst., 1976), p. 20. © Smithsonian.]

both montane and lowland environments. Hamlets typically consist of seventy
to 120 people, living in twelve to twenty houses, and their adjacent
gardens. Surrounded on three sides by populations speaking Gimi, Keiagana,
Kanite, Kamano, Auyana, Awa and Kukukuku (also known as Anga), the
Fore are reluctantly penetrating the uninhabited region to the south. Stories
of illness and hardship characterize their view of existence in these frontier
areas.

The Fore represent the most southerly extension of the East New
Guinea Highland linguistic stock,2 but they have much greater genetic
heterogeneity than most linguistic groups in the Eastern Highlands. The


Fore, with a remarkably flexible kinship system, do not constitute an isolated
breeding population. Genetic studies show their close association
with populations in two directions. To the northwest, they are most closely
associated with the Kamano, Gimi, and Keiagana, and to the southeast with
the Awa, Auyana, and Tairora.3 Kukukuku populations across the Lamari
River and the Yar-Pawaian groups beyond the uninhabited zone appear to
belong to different linguistic, genetic, and cultural communities.

The Fore are afflicted with a rare disease. Since record-keeping began
in 1957, three years after the Australian administration established a patrol
post at Okapa, some 2,500 people in this region have died from kuru, a
subacute degenerative disorder of the central nervous system. Approximately
80 percent of all kuru deaths have occurred among Fore-speaking
people, with the remaining 20 percent striking neighboring populations. In
the early years of investigation, over 200 patients died annually, which at
that time approached 1 percent per annum of the affected population.4 In
recent years, kuru rates have steadily declined, and in 1977 only 31 persons
died of the disease. Following several decades in which kuru was the major
cause of death among the Fore, the disease is rapidly disappearing.

My main focus is on the 8,000 South Fore. Identified by Australian
government officials in the 1950s as a single census division within the
Okapa Subdistrict, South Fore is separated from North by a low mountain
ridge (Wanevinti) that hinders but does not preclude contact between the
two populations. Marriage partners, trade goods, food, refugees, illnesses,
and ideas move between North and South, but South Fore social life is
based on the lands sloping southward from the mountain barrier. There,
two dialect groups (Atigina and Pamousa) with a high frequency of cognate
words are recognizable. The two southern dialects have more in common
than either has with Ibusa, the dialect of the North Fore.6 It is among the
South Fore that the incidence of kuru has been highest. Between 1957 and
1968, over 1,100 kuru deaths occurred in a South Fore population of 8,000,
and most cases reported for 1976 and 1977 come from this region. Since
kuru is predominantly a disease of adult women-the childbearers, pig
tenders, and gardeners-its effects on Fore society have been particularly
damaging. When the incidence of kuru reached a peak, in the 1960s, the
South Fore believed their society was coming to an end. And indeed, with
the high female mortality and low birth rates, in the early 1960s their
numbers were truly declining. South Fore were aware that the disease was
hitting them hardest.

This book discusses the Fore response to kuru in the 1960s, when the
epidemic was at its height. In Chapter 2, I trace the interest of Western
scientists in the disease, from the time they learned about it in the early
1950s to the present. Chapter 3 surveys Fore medical disorders, and shows
that, apart from kuru, their health status resembles that of kuru-free populations
in the Eastern Highlands of New Guinea. Chapter 4, an analysis

[Figure: Number of Persons Dying of Kuru (since 1963) for each year of birth
from 1945, all areas. SOURCE: Adapted from Hornabrook & Moir 1970.]

of South Fore kinship, indicates that Fore establish ties by demonstrating
commitment to a relationship. Kinship is as often based on common
interest and support as it is on heredity.

In Chapters 5 and 6, I present Fore views of the cause of disease, and
suggest that Fore beliefs are appropriate to a particular way of life and period
of time-that is, to communities of partially intensive swidden horticulturalists
in the 1960s. In the three decades since the Australian administration
at Port Moresby sent the first patrol through Fore territory, both their
mode of existence and Fore beliefs about it have undergone rapid change,
allowing us an opportunity to observe the ways that philosophical systems
depend on context. A new way of life gave rise to greater manipulation of
the environment, and to differences in rank requiring the coercion of others
in order to maintain an elevated position. New diseases also took their toll.
As these changes occurred, ghosts of the dead and spirits of the forest receded,
displaced by growing numbers of sorcerers.

The early 1960’S were crisis years for the Fore. They hunted for sorcer­
ers and consulted curers IChapter 7), and finally they called great public


j·..::etings (Chapter 8). There, they denounced the performance of sorcery
,at was decimating their women and creating a wasteland. Sorcerers were
:;id to be the agents of all that was wrong with the human condition. They
:ere seen as the negative of all that is fine, good, and moral, as instruments
i impoverishment and decline, and as a burden on the community (Chapter
\. Notions of sorcery, witchcraft, and pollution emerge as ideologies of
J!ltainment, by which wielders of power attempt to degrade their oppo­

;::l1ts, coerce social inferiors, and allocate resources. Sorcerers, witches, and
‘:lluters therefore have universal attributes. They appear in different man­
,cstations and with varying powers of retaliation in New Guinea and
lscwhere. The direction from which they project their feared energies is a

-lLle to an asymmetrical interchange, either between individuals or between
_:~ions (Chapter 10).

In the search for the cause of kuru, Western science unraveled a
biological mystery; the solution has applications for neurological afflictions
that range far beyond the borders of New Guinea. The Fore analysis of the
problem revealed more about victimization, and told us more about ourselves.
While their medical observations were frequently accurate, they
were embedded in social codes, and in messages about the nature of existence.
In their statements about the cause of disease, the Fore also considered
those conditions under which a threatened society might ultimately
survive.


As he passed through the hamlets at Amusi during an Australian government
patrol in the South Fore region, Patrol Officer McArthur made a significant
notation. "Nearing one of the dwellings," he wrote in August 1953,
"I observed a small girl sitting down beside a fire. She was shivering violently
and her head was jerking spasmodically from side to side. I was told
that she was a victim of sorcery and would continue thus, shivering and
unable to eat, until death claimed her within a few weeks."1 Although the
disease had been mentioned in earlier government reports,2 this was the
first official description of kuru, a fatal neurological disorder very common
among Fore, and present to a lesser degree among neighboring groups, but
unknown elsewhere in the world. The discussion that follows presents kuru
as a disease entity as perceived by Western observers, and follows the evolution
of medical thought over more than two decades. Western scientists
now consider kuru to be a slow virus infection spread by the ingestion of
human flesh. This view contrasts with that of the Fore, who remain convinced
that the illness results from the malevolent activities of Fore
sorcerers.

A Fore word meaning trembling or fear, kuru is marked primarily by
symptoms of cerebellar dysfunction-loss of balance, incoordination
(ataxia), and tremor. An initial shivering tremor usually progresses to complete
motor incapacity and death in about a year. Women, the prime victims of



[Map: North Fore and South Fore census divisions and settlements, including
Okapa, Amusi, Wanitabe, Kamila, Purosa, and other hamlets. Scale: 0-5 miles.]

the disease, may withdraw from the community at the shock of recognizing
the first symptoms-pain in the head and limbs, and a slight unsteadiness
of gait. They resume their usual gardening activities a few weeks later,
struggling to control their involuntary body movements until forced by
gross physical incoordination to remain at home, sedentarily awaiting
death.

The clinical progress of kuru is remarkably uniform. It has been divided
into three stages by Dr. Carleton Gajdusek, who has made extensive
clinical studies of the disease:

The first, or ambulant, stage is usually self-diagnosed before
others in the community are aware that the patient is ill. There
is subjective [self-perceived] unsteadiness of stance and gait
and often of the voice, hands and eyes as well. Postural instability
with tremor and titubation [body tremor while walking]
and ataxia of gait are the first signs. Dysarthria [slurring of
speech] starts early and speech progressively deteriorates as the
disease advances. Eye movements are ataxic.... A convergent

A woman in the primary stage of kuru, steadied by her husband. Note
abnormal position of her left arm and hand.


strabismus [crossed eyes] often appears early in the disease and
persists. Tremors are at first no different from those of slight
hypersensitivity to cold; the patient shivers inordinately. Incoordination
affects the lower extremities before progressing to
involve the upper extremities. Patients arising to a standing
posture often stamp their feet as though angry at them. In attempting
to maintain balance when standing, the toes grip and
claw the ground more than usual. Very early in the disease the
inability to stand on one foot for many seconds is a helpful
diagnostic clue.... In the latter part of this first stage, the
patient usually takes a stick to walk about the village unaided
by others.

The second, or sedentary, stage is reached when the
patient can no longer walk without complete support. Tremors
and ataxia become more severe and a changing rigidity of the

Pregnant young kuru victim goes to work in her garden supported by stick.
She was dead a few months later, less than a year after the first symptoms
appeared. Photo by Dr. D. Carleton Gajdusek.

A middle-aged kuru victim braces herself with both arms to maintain
balance while sitting. Photo by Dr. D. Carleton Gajdusek.

An elderly kuru victim who can no longer walk waits for the other women
to return from their gardens. Despite the heat of the sun, she feels chilled.


limbs often develops, associated with widespread [repetitive
muscular spasms], or sometimes shock-like muscle jerks and
occasionally coarser [irregular, involuntary] movements, especially
when the patient is thrown into an exaggerated startle
response by postural instability, or by sudden exposure to noise
or bright light. Deep tendon reflexes are usually normal. Although
muscle activity is poorly maintained there is no real
weakness or muscle atrophy. Emotional lability, leading to outbursts
of pathological laughter, [is] frequent, sometimes even
appearing in the first stage of the disease, and smiling and
laughter are terminated slowly.... Some patients, especially
adolescent and young adult males, become depressed, and a
rare patient develops a pathological belligerence [in response
to] disturbances by family members or others. Mental slowing
is apparent, but severe dementia is conspicuously absent. No
sensory changes have been noted....

The third, or terminal, stage is reached when the patient
is unable to sit up without support, and ataxia, tremor and
dysarthria become progressively more severe and incapacitating.
Tendon reflexes may become exaggerated.... Some cases
show characteristic ... defects of posture and movement. Terminally,
urinary and faecal incontinence develop and dysphagia
[difficulty swallowing] leads to thirst and starvation and
the patient becomes mute and unresponsive. Deep ulcerations
[of the skin over bony prominences and] pneumonia
appear in this stage and the patient finally succumbs, usually
emaciated, but occasionally quickly enough to be still well
nourished.


At first, kuru was thought by Western observers to be a psychosomatic
phenomenon, "directly associated with the threat and fear of what was
believed to be a particularly malignant form of sorcery."4 A provisional
medical diagnosis of the first case sent to the Australian government hospital
at Kainantu for close observation in 1955 elicited a diagnosis of "acute
hysteria in an otherwise healthy woman."5 In 1957, Drs. Vincent Zigas
(working for the Papua New Guinea Department of Health) and Gajdusek
(from the United States National Institutes of Health) began an intensive
study of the disease, and Gajdusek was soon to write in a note to the
anthropologist Ronald Berndt:

We cannot yet claim any clues to its pathogenesis, and
infectious and toxic factors which might be responsible for its
etiology have thus far eluded us. However-and most unfortunately
for us-all the guidance is pointing toward the vast
group of chronic-progressive-heredo-familial degenerations of
the central nervous system.... We have recently had the assistance
and advice of Dr. Sinclair, Director of Psychiatry from
the Royal Melbourne Hospital, and he agrees with our current
opinion that fatal kuru ... cannot by any stretch of the imagination
be identified with hysteria, psychoses or any known ...
psychologically-induced illnesses ... the evidence for direct
central nervous system damage is far too great in the strabismus,
and pictures ... of advanced neurological disease shown
by the advanced cases.6

Later the same year, 1957, Gajdusek and Zigas published their first
medical assessment. They emphasized the high incidence of kuru in certain
families and hamlets, its localization to the Fore and adjacent peoples
with whom they intermarried, and its predilection for children and adult
women.7

The boundaries of the kuru region as defined by these investigators in
1957 have changed little since that time, although the region of high incidence
has gradually been contracting.8 The kuru region comprises most of
the Okapa Subdistrict of the Eastern Highlands District, a population of
over 40,000, belonging to nine language groups and representing about one-fifth
of the population of the Eastern Highlands District. The Lamari River
to the southeast and a large expanse of uninhabited country to the southwest
sharply separate the southernmost Fore villages, which are regularly
afflicted with kuru, from the Kukukuku and Yar populations, who have
never experienced the disease. Elsewhere, the boundaries of kuru incidence
are not sharply defined. To the east, Awa and Auyana peoples rarely contract
kuru. North of Fore, kuru has occurred in Usurufa villages and in the adjoining
part of Kamano. Some Yate and Yagaria peoples to the northwest have
been affected, and to the west kuru is found in those parts of Kanite,
Keiagana, and Gimi which border Fore territory.

Because kuru seemed to run in families and was localized to a small
interrelated population, a genetic basis for the disease was suspected. In the
late 1950s, it was proposed that kuru was a hereditary disorder, determined
by a single autosomal gene that was dominant in females but recessive in
males.9 The implications of such a hypothesis were somber. By the mid-1950s,
Highland men had been encouraged to participate as migrant laborers
under the provisions of the Highlands Labour Scheme, administered
from Goroka. Each worker signed a two-year contract. Although the government
employed some of these men on public works, the majority were
under contract to private copra, cocoa, and rubber plantations in coastal and
island regions of Papua New Guinea, where labor was scarce. The laborers
received food, clothing, lodging, transport, and medical attention. Half their
low wages was deferred and paid in a lump sum at the termination of the
contract. Government officials and kuru investigators debated whether it
would now be possible to erect an invisible fence around the Fore, to prevent
their participation in the Highlands Labour Scheme, and to discourage
the exodus of the affected population from the region. Only if such a plan
were found feasible and morally acceptable, it was said, could other peoples
be protected against the lethal kuru gene. In the meantime, Fore
would continue to transmit the disease one to the other until their tragic
end.

The investigation of kuru in the 1950s was hampered by a lack of
information about Fore kinship. As we shall see in Chapter 4, many of the
supposedly related kuru victims were not closely related biologically, but
were kin in an improvised, non-biological sense. An analysis of the Fore
kinship system does not support a purely genetic interpretation of the disease.
Moreover, as John Mathews, a physician whose study of kuru began in
1963, noted:

This purely genetic model, if true, implied that kuru must
have been of remote evolutionary origin and that it ought to
have been in epidemiological equilibrium. It was soon apparent
that kuru was too common and too fatal to be a purely genetic
disorder unless the hypothetical kuru gene was maintained at
high frequency by a mechanism of balanced polymorphism.
There was no evidence to support the latter suggestion.10

In other words, an inevitably fatal genetic disorder could not reach the
incidence kuru then had among the South Fore without soon killing off the
host population, unless the gene for kuru in some other way conferred a
selective survival advantage.

Anthropological evidence gathered in 1962 by Robert Glasse and myself
from dozens of Fore informants indicated that kuru had spread slowly
through Fore villages within living memory, and that its progress through
Fore territory followed a specific, traceable route. Entering from a Keiagana
village to their northwest around 1920, the disease, according to Fore testimony,
proceeded down their eastern border, and then swung westward
into central South Fore. From this point, it turned again to the north and
also continued to move south. Its appearance in the extreme south was thus
relatively late, and many people gave persuasive accounts of their first encounter
with the disease.

Owata (not his real name) of Wanitabe was about 55 years of age when
he described his experience with kuru:

When I was a young boy, I didn't know anything about kuru. I
was initiated [at 9 or 10 years] and I still hadn't seen kuru. It
wasn't until I was married that I first saw it. That was true of
many places around here. I visited Purosa and Aga Yagusa, and
it wasn't there. I heard rumors of it at Kasokaso before it came
to Wanitabe. My mother died of kuru at Wanitabe. I was married
then, but it was before I had any children. She wasn't the
first to die of it here; a few other women died of it before her.

When I heard it arrived at Kamila, I went to Kamila to
look at it. Men asked at first, "Is it sickness or what?" Then
they said that men worked it [i.e., caused it by sorcery]. We
fought against Kamila after this. We were angry with them because
all the women were dying of kuru. We asked them where
we would get wives from if this continued.... Then some men
of Wanitabe purchased it [i.e., paid for knowledge of the sorcery
technique] from Kamila, and now we get it here too. Now it
has spread everywhere. In the past, we fought only with bow
and arrow. Then kuru came and killed the women one by one. I
can't see of course, since I am blind, but I hear others talking
about it, and they say it kills everyone now.

This places the arrival of kuru at Kamila in the late 1920s, and at
Wanitabe by about 1930. The first cases at Purosa, six miles south of
Wanitabe, are also associated with a sorcery purchase from Kamila at about
the same time, in the early 1930s. A week after the death from kuru of a
twenty-year-old Purosa youth in 1962, his mother (Inata), his mother's
mother (Asa'ina), and her husband (Tano) speak of the first appearance of the
disease at Purosa:

Asa'ina (grandmother): Kuru came to Purosa only recently. I
had carried all my children and my hair was white before
it came here.

Question: Was Inata married?

Inata (mother): Yes, I had given birth to all my five children....
my first child, a son, was about ten years old.

Inata (in response to question): We were afraid when we first
saw kuru. We asked the men what kind of sickness it
was, and they told us it was kuru.... When it first arrived,
only one woman would get it, then a little later,
another. Now, since the tetegina [red people, that is,
"whites"] have arrived, plenty of people get the disease.

Question: Can you remember the names of the first people
here to get kuru?

Tano (husband of Asa'ina, interjecting): These two can't remember.
I can. The first woman to get it here was Agiso.
She lived in a house on this hill. We were free of it in the
past. Then we heard rumors of this trembling thing, this
kuru, at Kasokana. From Kasokana it came to Wanikanto
where four women got it. Still we didn't have it, we had
only heard of it. Then the men of Kamila had four
women who died of it, and still we had only heard of it.
Then Agiso, who was a Kamila woman, came here to live
and she wasn't here long before she died of it. She left her


[Epidemiological Map of Kuru, showing the dates (from about 1920 to 1946)
at which kuru reached villages such as Uwami (Keiagana), Kasokana, Kume,
Waisa, Purosa, Awarosa, and Ivaki. The site of the first case of kuru in the
region is a matter of controversy between the peoples involved. SOURCE:
Adapted from J. D. Mathews 1971, p. 134 (unpublished thesis).]

husband at Kamila and came to marry a man here at
Purosa. She was a young woman when she died, about
eighteen or twenty years old [her age is conveyed by
comparing her to another girl now living]. The next to die
of it was a woman called Alakanto ... and we saw that it
had come to us and we wondered who had purchased it
and brought it here.... I was married with one child, a
daughter called Tabelo who was about five years old [indicating
a child of similar age in the group] when Agiso
died.... This was a child of my first wife. I married
Asa'ina later.

From scores of such accounts there emerges a broad chronology of the
spread of kuru, with its arrival in some northwestern and southeastern areas
convincingly dated as late as the 1940s.

Fore accounts had a ring of epidemiological accuracy. They noted the
initial incidence of kuru among women, and described its subsequent shift to
children of both sexes and to adult men. They also indicated uncertainty in
the diagnosis of early cases. At Umasa, the first case occurred in a woman
who had arrived recently from the North Fore. A young widow, she had been
inherited in marriage by her husband's age-mate, and people at Umasa were
puzzled by her illness. Noting that her tremor resembled the swaying of the
casuarina tree, they supposed that she had a shaking disorder called cassowary
disease, by the further analogue that cassowary quills resemble waving
casuarina fronds. They fed the victim pork and casuarina bark, a
homeopathic treatment that gave little relief. When the woman's brothers
came to visit her, they, having already seen the disease, told the people of
Umasa what it was.

Many people first called the disease negi nagi, a Fore term meaning
silly or foolish person, because women afflicted with the ailment laughed
immoderately. In those early days, our informants said, they joked and took
sexual levities with the sick women as they do with those who manifest
temporary mental derangement or bizarre behavior. When it became apparent
that the victims were uniformly dying, they were forced to conclude
that the matter was more serious than they had thought, and that sorcerers
were at work. Early medical reports also emphasized the sufferers' emotional
lability, leading to the unfortunate characterization of kuru in the
Australian press as "the laughing death." Only a minority of the patients
examined in the 1960s were said to smile or laugh inappropriately; it is
possible that the clinical features of the disease have changed.12


In 1962 and 1963, Robert Glasse and I presented evidence gathered in two
extended stays among the Fore that kuru had spread through the Fore population
in recent times, and that its high incidence in the early 1960s was
related to the cannibal consumption of deceased kuru victims. We also
provided evidence that for the South Fore, the depletion of women was a
recent phenomenon.13 Our cannibalism hypothesis seemed to fit the
epidemiological evidence. The first Australian government patrols in the
late 1940s reported cannibalism throughout the entire kuru region. By


1951, the Berndts, living on the North Fore borders, noted that government
intervention had put a stop to cannibalism in that area, although it was still
practiced surreptitiously farther afield. The South Fore confirmed the
Berndts' observation. One elderly man from Wanitabe said in 1962 that the
exhortations of the first patrol (1947) were disregarded. "We hid and ate
people still. Then the luluais [government-appointed local leaders] and tultuls
[their appointed assistants] tried to stop us, but we hid from them, too.
We only stopped when the big road came through from Okapa to Purosa
[1955]." Thus, in the South Fore, the area with the highest incidence of
kuru, cannibalism had continued later than in the North.

When a body was considered for human consumption, none of it was
discarded except the bitter gall bladder. In the deceased's old sugarcane
garden, maternal kin dismembered the corpse with a bamboo knife and
stone axe. They first removed hands and feet; then cut open the arms and
legs to strip out the muscles. Opening the chest and belly, they avoided
rupturing the gall bladder, whose bitter contents would ruin the meat. After
severing the head, they fractured the skull to remove the brain. Meat, viscera,
and brain were all eaten. Marrow was sucked from cracked bones, and
sometimes the pulverized bones themselves were cooked and eaten with
green vegetables. In North Fore, but not in the South, the corpse was buried
for several days, then exhumed and eaten when the flesh had "ripened" and
the maggots could be cooked as a separate delicacy.14
Thus, little was wasted, but not all bodies were eaten. Fore did not eat
people who died of dysentery or leprosy, or who had had yaws. Kuru victims,
however, were viewed favorably, the layer of fat on those who died rapidly
heightening the resemblance of human flesh to pork, the most favored protein.
Nor were all body parts eaten by everyone. For instance, the buttocks
of Fore men were reserved for their wives, while female maternal cousins
received the arms and legs. Most significantly, not all Fore were cannibals.
Although cannibalism by males occurred more frequently in the North,
South Fore men rarely ate human flesh, and those who did (usually old men)
said they avoided eating the bodies of women. Young children, residing apart
from the men in small houses with their mothers, ate what their mothers
gave them. Initiated boys moved at about the age of ten to the communal
house shared by the adult men of the hamlet, thus abandoning the lower-class
world of immaturity, femininity and cannibalism. As will be discussed
in greater detail in Chapter 10, men in this protein-scarce society claimed
the preferred form of protein (wild boar, domestic pigs), whereas women
supplemented their lesser allotment of pork with small game, insects, frogs,
and dead humans. Women who assisted a mother in childbirth ate the
placenta. Both cannibalism and kuru were thus largely limited to adult
women, to children of both sexes, and to a few old men, matching again the
epidemiology of kuru in the early 1960s.

As mentioned earlier, body parts were not randomly distributed. The

Young girl holds up rat she caught and
will cook with the bagged vegetables at
her feet.

corpse was due to those who received pigs and valuables by rights of kinship
and friendship with the deceased (primarily maternal kin), and the gift had
to be reciprocated. Pig and human were considered equivalent. The death of
a breeding sow in 1962 evoked the following speech of mourning: "This was
a human being, not a pig. One old woman among us has died." Pig and
human were dismembered and allocated in similar fashion. Among South
Fore, a man's brain could be eaten by his sister, and in the North, by his
sister as well as his son's wife and maternal aunts and uncles. A woman's
brain, perhaps the most significant body matter in transmission of the disease,
was said to be given to her son's wife or her brother's wife.

Ethnographic accounts of the consumption of the first kuru victim in a certain location also describe cases four to eight years later among those who had eaten the victim.15 Moreover, the average risk of kuru in wives of kuru victims' brothers was three to four times as great as that in a control group of women who were not related either genetically or by marriage to kuru victims. Furthermore, the risk of kuru in females related to kuru victims by marriage only (41 percent) was almost as high as the risk in females genetically related to kuru victims (51 percent).16 This conforms to the stated regulation that brothers' wives receive the victim's brain, and the opportunity of these women to participate in kuru cannibalism along with the victim's mother, sisters, and daughters. The distribution of human flesh for consumption thus crossed genetic lines much as the distribution of kuru did.
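The relative-risk comparison above can be made concrete with a short arithmetic sketch. The cohort counts below are hypothetical, chosen only so that the proportions echo the percentages reported in the text (41 percent for affinal kin, 51 percent for genetic kin); the 12 percent control risk is likewise an assumed figure consistent with the stated "three to four times" ratio, not a number from the study.

```python
# Illustrative relative-risk arithmetic for the kinship comparison.
# All cohort counts here are hypothetical; only the resulting
# proportions echo the percentages reported in the text.

def risk(cases, cohort_size):
    """Proportion of a cohort that developed kuru."""
    return cases / cohort_size

# Hypothetical cohorts of 100 women each.
risk_affinal = risk(41, 100)   # related to kuru victims by marriage only
risk_genetic = risk(51, 100)   # genetically related to kuru victims
risk_control = risk(12, 100)   # neither genetic nor marital relation (assumed)

# Relative risk of kuru for women married into victims' families,
# compared with unrelated controls.
rr_affinal = risk_affinal / risk_control

print(f"affinal risk:  {risk_affinal:.2f}")
print(f"genetic risk:  {risk_genetic:.2f}")
print(f"relative risk: {rr_affinal:.1f}x the control group")
```

With these assumed counts the relative risk falls between three and four, matching the range the text reports for wives of victims' brothers.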


Fore had not been cannibals for long. Within the zone of cannibal peoples to the east and south of Goroka, they may have been among the last to include human flesh in their diet. Cannibalism was adopted by the Kamano and Keiagana-Kanite before it became customary among Fore. North Fore say they were imitating these northern neighbors when they became cannibals around the turn of the century, while among South Fore, cannibalism began as recently as fifty or sixty years ago, or about a decade before the appearance of kuru there. Old people in the South Fore, whose memory of the matter appears unclouded, describe their attraction to human flesh. There was no thought of acquiring the power or personality of the deceased. Nor is it correct to speak of ritual cannibalism, although many medical and journalistic accounts do so.17 While the finger and jaw bones of some relatives were retained for supernatural communication, Fore attitudes toward the bodies they consumed revolved around their fertilizing, rather than their moral, effect. Dead bodies buried in gardens encouraged the growth of crops. In a similar manner human flesh, like pig meat, helps some humans regenerate. The flesh of the deceased was thought particularly suitable for

As a Wanitabe man born about 1890 said: "The Ibusas [North Fore] were visiting the Kamano and saw them stealing and eating good men. They heard it was sweet to taste, and tried it themselves. I was about ten years old when we heard these stories." North of Fore, then, aggressive exocannibalism, or the eating of dead enemies, appears to have been the prevailing practice. Among South Fore, however, it was usual to eat kin or people of one's own residential group after they had died (endocannibalism).

A variable enthusiasm for human flesh runs from Goroka through the Fore area, coming to an abrupt halt in the southeast, where the Awa, in contrast to their Fore neighbors, were not cannibals at all. This is a gradient that matches an environmental shift from grassland groups for whom hunting plays but a small part in the diet18 to the Fore, who have not yet denuded their land of forest and for whom until recently wild game was readily available. By 1962, traditional gifts of wild protein between matrilateral kin were rare in the South Fore; possum and cassowary were being replaced by chicken and canned fish purchased at the local trade store. The last wild boar injury in Wanitabe occurred around 1940, and by 1970 South Fore groups who had supplied feathers for the northward trade began to buy them from the forest-dwelling Kukukuku further south.

A hamlet and adjoining gardens recently carved out of the forest. Grass and scrub are overtaking the older, abandoned site above. Photo and legend by Dr. E. R. Sorenson from The Edge of the Forest, Smithsonian Institution Press, 1976.

A less agricultural recent past is portrayed in Fore stories of men and their humanoid dogs encountering possum, birds, snakes, and flying foxes. Fore also have an elaborate zoological classification system, which represents a relict of vanishing hunting habits.19

Population increases in the region and the conversion to the sweet potato as a dietary staple thus appear to have led to the progressive removal of forest and animal life, to cultivation methods involving more complete tillage of the soil, and to the keeping of domestic pig herds, which compensate for the loss of wild protein. As the forest's protein sources became depleted, Fore men met their needs by claiming prior right in pork, while women adopted human flesh as their supplemental habus, a Melanesian pidgin term meaning "meat" or "small game." Fore still refer to the human corpse and the stillborn infant as "the true habus of women."

LEFT One stage in slash-and-burn agriculture: felled trees are drying before the garden is set afire.

BELOW A mixed garden of sweet potato, sugarcane, and beans.

Men at Awarosa, who insisted that cannibalism was a female habit, argued that in this southeastern Fore region there was "plenty of habus in the forest for men." They noted, in addition, that "if we men ate people, we would fall ill with respiratory disorders and our flesh would waste away," a rationale they also gave for their initial rejection of chicken (more will be said about these attitudes in relation to pollution in Chapter 10). Traditional male curers guarded their powers and are said never to have practiced cannibalism.

The case of the non-cannibal, grass-dwelling Awa does not weaken the supposition that human flesh is ingested as a relevant source of dietary protein.20 Fore at Awarosa report the neighboring Awa as saying they have never been cannibals. "We have no forests of our own," they reportedly told the Fore, "so we give you our sisters in marriage, and in exchange we eat your habus." The Awa sources of protein were pigs given them as brideprice, and subsequent gifts of wild protein received as birth payments each time their sisters gave birth to children. Ronald Berndt also takes seriously Fore statements on the value of human flesh. Noting that in the 1950s pigs were not plentiful, he records a story in which Fore first taste human flesh. "This is sweet," they said. "What is the matter with us, are we mad? Here is good food and we have neglected to eat it."21

Epidemiological evidence reported in the mid-1960s indicated that the age and sex distribution of kuru was changing. Young children were less often affected, and the disorder was more often seen in adolescents and young adult men and women, as well as in older women. Moreover, the overall incidence fell in all areas except the Gimi.22 A purely genetic explanation of kuru no longer seemed plausible.


After the first clinical descriptions of kuru were published, this unusual disorder attracted considerable international attention. In England, W. J. Hadlow, working on scrapie, a degenerative disease of the central nervous system in sheep, pointed to the remarkable similarities in the clinical and pathological features of kuru and scrapie. Moreover, the disease of sheep was transmissible by inoculation.23 Unlike most infectious disorders, which have a relatively short incubation period, scrapie did not become manifest until many years after inoculation. Stimulated by the parallel with scrapie, Gajdusek and his coworkers at the National Institutes of Health in Bethesda, Maryland, injected the brains of chimpanzees with brain material from Fore patients who had died of kuru, and in 1966 they reported that after incubation periods of up to fifty months, the chimpanzees had developed a clinical syndrome astonishingly akin to human kuru.24 Kuru, like scrapie, thus appeared to be a viral disease of extraordinarily long incubation, a "slow virus infection."

This finding lent support to our idea that the disease had reached epidemic proportions among the Fore as a result of the eating of dead kuru victims. That hypothesis had also assumed that kuru would not strike those born after the abandonment of cannibalism, which in South Fore occurred as a result of government and missionary intervention in the middle to late 1950s.25 The prediction now appears substantiated by the virtual disappearance of kuru among children, and by the earlier decline in childhood cases among North Fore, where government influence suppressed cannibalism years earlier than in the South or in the Gimi, where childhood kuru occurred until 1970.26 The Gimi, even more remote from government influence than South Fore, continued as cannibals for longer.

Epidemiological data gathered between 1970 and 1977 strengthen the hypothesis that kuru is a disease transmitted by cannibalism and caused by a slow virus with an extremely long period of incubation. There has been a continued decline in the annual incidence of the disease, particularly in females. The greater decline of new cases in females can be explained by the fact that those who ate human flesh as adults were predominantly women, and they have already died of kuru. They leave behind an increasing majority of new cases resulting from childhood ingestion of the virus, a condition for which both sexes were equally at risk since cannibal flesh was consumed equally by male and female children. With the passage of time, the sex ratio of new kuru victims should thus approach parity. This has already occurred in North Fore, where government influence eliminated cannibalism earlier, and a similar trend is now appearing in the South.27 Moreover, while there were two twelve-year-old patients with kuru in 1970, the youngest current case in May 1978 was more than 20 years old. Thus both childhood and adolescent cases have disappeared completely. The only Fore and Gimi currently coming down with kuru are those who participated in cannibal meals prior to 1955.
Recent data also allow us to delineate the behavior of the virus more precisely. Since the youngest victim is now twenty-five, while the youngest

years, but the pattern already known depicts an extraordinary infectious illness in which symptoms may appear decades after the causal event.

While the means by which the disease was transmitted thus seems clarified, kuru continues to provoke scientific curiosity. Recent research has focused on the pathogenesis of slow virus infections, on documenting new epidemiological trends, and on attempts to establish the kuru virus in tissue culture.


The pathogenic agent responsible for the disease has recently been isolated and transmitted to spider, capuchin, squirrel, rhesus, woolly, and marmoset monkeys, as well as to chimpanzees, yet the virus itself has proved elusive. It seems to elicit in its host none of the usual immune responses. Kuru does not produce detectable antibodies. Nor has the virus been depicted under the electron microscope. As the first chronic or subacute degenerative disease of humans proven to be a slow virus infection, however, kuru has stimulated the search for virus infections in other subacute and chronic human diseases, particularly of the nervous system. Multiple sclerosis is the most common central nervous system disorder believed (though in this case not yet proven) to be caused by a slow virus infection.30

The evidence for other neurological diseases is more conclusive. For example, it now appears that Creutzfeldt-Jakob disease, one of the presenile dementias (mental deterioration at a relatively young age) that occur sporadically and in familial patterns in humans throughout the world, is also transmissible to chimpanzees and monkeys, and is caused by a virus with properties much like those of the kuru virus.31 Moreover, the incidence of Creutzfeldt-Jakob disease among Jews of North African and Middle Eastern origin in Israel is thirty times the rate for Jews of European origin. Since only the former customarily eat the eyeballs and brains of sheep, scrapie-infected sheep tissue has been suggested as the source of infection.32

Two other rare disorders of the central nervous system, subacute sclerosing panencephalitis (SSPE) and progressive multifocal leukoencephalopathy (PML), have been shown to be due to slow virus infection.33 The kuru model may also apply to amyotrophic lateral sclerosis, Alzheimer's disease, and other presenile or senile dementias.34

Thus, as research proceeds, the concept of a related group of diseases of viral etiology has emerged. These are all virally transmitted diseases of the brain, infections that do not provoke the typical inflammatory response, caused by viruses with very unconventional properties. The kuru agent remains stable on storage at -70°C for many years and after freeze-drying. It is not totally inactivated when subjected to a temperature of 85°C for thirty minutes.35 Fore cooking methods therefore did not destroy the kuru virus. After the brain of a dead person was removed from the skull, the tissue was squeezed to a pulp and steamed in bamboo cylinders at temperatures that would not completely inactivate the virus, since at high altitudes water boils at 90–95°C. No serological tests for the virus have been found. There is no evidence of an immune response, and no antibodies have been detected. Nor is there evidence of an antigen related to any of the more than fifty known virus antigens.36 Yet the kuru-scrapie agents persist in laboratory cultures of infected brain tissue, and they are readily transmissible in extremely low dilutions by intravenous, intramuscular, or subcutaneous injection, and from tissues other than brain (pooled liver, kidney, spleen, and lymph nodes).37

Kuru has not yet been transmitted to animals via the gastrointestinal tract. Gajdusek therefore has suggested that a likely route of infection from contaminated brain was through the skin, entering either through cuts and sores or upon being rubbed by unwashed hands into the nose or eyes.38


Research continues on the question of susceptibility to the agent, and on refining our knowledge of its properties. Kuru is already an established landmark in neurology and virology. In neurology, it is the first human degenerative disease shown to be caused by a virus. In virology, it is the first human disease shown to be caused by a novel kind of viral agent.39 The implications of the discovery of the slow virus etiology of kuru for the understanding of other diseases have only begun to be explored. The Fore experience will be remembered for decades to come as investigators use the kuru model in their search for the cause of disease in other populations of the world.


To this day, Fore universally believe that kuru is caused by malicious sorcerers in their midst. Early observers of the Fore population were struck not only by the concern of the people with this strange and dramatic disease, but by their more general focus on sorcery. In the report of August 1953 quoted at the beginning of this chapter, Patrol Officer McArthur wrote: "In this area, I regard sorcery as a powerful foe. … Its results are serious. Even since the last patrol in December 1952, it has caused one tribal fight and two desertions of ground [evacuations of hamlets]. … There are … a large number of sorcerers."

Fore have a powerful reputation as sorcerers among other populations of the region.40 As far away as Kainantu and Henganofi, forty miles and two or three days' walk from Wanitabe, people believe that methods and ingredients can be obtained from Fore,41 while their immediate neighbors view Fore with considerable anxiety. Gimi have a particular fear of sorcery emanating from their eastern neighbors,42 and Kamano confide that they take special care not to throw away scraps of food (which can be used against them in sorcery bundles) while parties of Fore are visiting. Keiagana admit to a similar caution about providing Fore with potential sorcery materials, and when traveling in Fore territory they deposit sweet potato and sugarcane skins in their string bags, to discard on the return home. South Fore at Ilesa, by contrast, pay no heed to their food scraps or feces while visiting the Awa, but resume vigilance as soon as they return to home territory. Not only do neighboring peoples fear Fore, but Fore fear one another.

Patrol Officer Colman's 1955 report describes South Fore hamlets barricaded behind wooden constructions and impenetrable canegrass. Across the entrance corridor to the hamlet lies a small gate. "When all the people are inside after dark," he writes, "this gate is closed and generally a sentry is posted. These precautions stop intending sorcerers from entering the hamlet. … Some of the men's houses have an additional encircling stockade for the same reason." Commenting on sanitation and hygiene, he adds: "The fault in most native areas is a shallow latrine," but Fore anxiety to keep excreta from potential sorcerers results in the construction by South Fore of "a latrine hole that seems bottomless."43

In recent years the Fore reputation for sorcery has become widespread. New Guinea and Australian newspapers carry occasional accounts of sorcery-related deaths in the Okapa region,44 and in 1973 the government's Law Department inquired into allegations of fifty to sixty sorcery-linked deaths a year at Okapa said to be caused by professional killers who were being paid up to five hundred dollars for murder "contracts."45

Barricaded hamlet entrance. Photo and legend by Dr. E. R. Sorenson from The Edge of the Forest, Smithsonian Institution Press, 1976.

The distribution of kuru lends credence to the belief that Fore sorcery is vastly more powerful than that of their neighbors. While kuru is present in surrounding populations (as mentioned earlier, 20 percent of kuru deaths each year occur among other peoples), the prevalence of the disease is markedly higher among Fore, particularly South Fore. In 1964 it was estimated that Gimi males had only one chance in twenty-five of dying from kuru, while the risk for South Fore males exceeded one in five. For females, the main victims of the disease, the difference was even greater. Eighty-four percent of Gimi women had a chance of surviving the reproductive period without a fatal attack of kuru, but fewer than one in ten South Fore women might do so. The average life expectancy for South Fore women born in the mid-1960s was estimated at little over twenty years.46 That the South Fore were engaged in dangerous sorcery seemed incontestable.
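The Gimi and South Fore mortality figures above can be restated as probabilities to make the contrast explicit. The fractions come straight from the text; the resulting risk ratios are computed here for illustration and are not reported in the source.

```python
# The mortality comparison restated as probabilities. The input
# fractions are the ones stated in the text; only the ratio
# arithmetic is added.

p_gimi_male = 1 / 25      # Gimi male lifetime chance of dying of kuru
p_fore_male = 1 / 5       # South Fore male chance ("exceeded one in five")

# For women the text gives survival through the reproductive period,
# so the death risk is the complement of the survival figure.
p_gimi_female = 1 - 0.84  # 16 percent of Gimi women died of kuru
p_fore_female = 1 - 0.10  # fewer than 1 in 10 South Fore women survived

print(f"male risk ratio (South Fore / Gimi):   {p_fore_male / p_gimi_male:.1f}")
print(f"female risk ratio (South Fore / Gimi): {p_fore_female / p_gimi_female:.1f}")
```

On these figures South Fore men faced at least five times the Gimi male risk, and South Fore women well over five times the Gimi female risk, which is why the South Fore seemed so singularly afflicted.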




Apart from kuru, the diseases suffered by Fore are similar to those found in many other Highland peoples. Some of these populations show striking fluctuations in size from year to year, as do the Fore, which points to infectious disease as a major determinant of mortality among Highlanders.

During the late 1930s and early 1940s, a number of epidemics swept southward through the Fore region: mumps, measles, whooping cough, and dysentery. Many people died, but the new diseases were not regarded by Fore as new forms of sorcery, although the loss of certain important men lay behind later sorcery accusations between some local groups. The simultaneous incapacitation of large numbers of people is what Fore recall most vividly about these epidemics. Disorganization of labor was at times so great that normal agricultural activities were halted, and the ripening corn and cucumbers are said to have rotted while people lay recovering in their houses. Aware that the dysentery epidemic of 1943 had swept down upon them like a great wind from the north, South Fore at Purosa responded by refusing visitors access to their hamlets, and by persuading fellow residents to remain at home until the epidemic had passed. With clinical perception, Fore noted that the second wave of some diseases, such as mumps, was less serious than the first. In 1959 and 1962, however, influenza epidemics caused many deaths, especially among children under the age of five.

Discounting kuru, the commonest problems afflicting Fore at present are upper-respiratory infections, bronchitis, pneumonia, diarrhea, gastroenteritis, and complications of childbirth. Meningitis and tetanus also occur, as does anemia in association with hookworm, closely spaced pregnancies,

